The Digital Twin
Noel Crespi • Adam T. Drobot • Roberto Minerva, Editors
Editors
Noel Crespi, Telecom SudParis, Institut Polytechnique de Paris, Palaiseau, France
Adam T. Drobot, OpenTechWorks Inc., Wayne, PA, USA
Roberto Minerva, Telecom SudParis, Institut Polytechnique de Paris, Palaiseau, France
ISBN 978-3-031-21342-7
ISBN 978-3-031-21343-4 (eBook)
https://doi.org/10.1007/978-3-031-21343-4

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Foreword
Imagine the future of the global economy: imagine factories that produce experiences, with data flows replacing part supply. Imagine being able to represent and cure a human body as efficiently as we model and test a plane or a car. Imagine a clinical trial involving virtual patients and being evaluated using an AI engine. Digital twins – or virtual twin experiences, as I prefer to name them – are unparalleled catalysts and enablers of this global transformation of the economy. Digital twins are representations of the world that combine modeling, simulation, and real-world evidence: in other words, they make it possible to understand the past while navigating the future.

In today's economy, the value lies in the use of the product rather than in the product itself. This "experience economy" triggers new categories of expectations from citizens, patients, learners, and consumers. Mobility is no longer a matter of vehicles: it is a matter of desirable, sustainable mobility experiences. Healthcare is much more than therapeutics: it is about the patient journey and precision medicine. Cities are not only a collection of buildings, streets, and facilities: they are about quality of life and quality of service. Moreover, the ongoing transformation toward a sustainable economy, accelerated by the pandemic, will mark this century, as steam and electric power did the nineteenth century.

Digital twins were first made for manufactured objects – planes, cars, factories, and cities – and we have now extended this capability to the most complex systems – like the human brain and heart or the city of Singapore – and to organizations. The knowledge acquired in the manufacturing domain is now applied to the life sciences and healthcare. Just as we created the virtual twin of the Boeing 777, we are now creating the virtual twin of the human body. There will be a "before" and an "after" the virtual twin for healthcare. Thanks to virtual technology, in 2020 the world came up with a vaccine against Covid-19 in only 1 year versus 10 years in the past. The Children's Hospital Institute in Boston uses digital twins to reconstruct the malformed hearts of newborns, and surgeons can train on this digital twin before an operation and see the consequences of their actions. We now have a 99% alignment rate between what a surgeon does in the operating theater and what was prepared as an option with our platform. The day we can create a virtual environment that is
more precise than the in situ one, we open up a tremendous opportunity for standardized medical practices. We are not in a process of digitalization but rather of virtualization, which opens up an incredible new field of possibilities. It will allow us to move towards precision medicine, before moving towards more and more personalized medicine.

Virtual technology was born for sustainability: it was first used in industry for virtual prototyping, for doing things right the first time while saving materials and resources. Sustainability is all about life cycle. The twentieth century was characterized by a global industry that ultimately relied on only a few options, particularly in chemistry and materials. Thanks to digital worlds, we can imagine new approaches, test new solutions virtually, and ask ourselves the question of recycling materials right from the design and manufacturing phase. We can enable companies to draw up a total balance sheet to measure their impact on the planet according to the different choices they are likely to make. For example, Dassault Systèmes has quantified, with Accenture, how much CO2 would no longer be emitted if companies did virtual modeling before making a product. It is about balancing what we take (footprint) from our planet and what we give (handprint) to our planet to upgrade our "eco-bill." In addition to improving the environmental footprint, the greatest power of the virtual worlds lies in unleashing the imagination, in enabling people to imagine differently, and in growing our handprint – ultimately reinventing a more sustainable economy.

To create, produce, and play experiences in a circular economy, innovators of today and tomorrow have to think in terms of systems of systems. Because they offer a multi-scale, multi-discipline, generative, holistic, and inclusive approach to innovation, virtual twin experiences based on collaborative platforms provide an inspiration for new offerings. Improving global health requires an inclusive perspective across cities, food, and education. Developing global wealth in a sustainable manner involves new synergies between data and territories. Digital twins make it possible to collaborate at scale through immersive virtual reality. They provide a continuum across disciplines and organizations. They connect biology, material sciences, mechanics, electronics, and chemistry. They make it possible to improve use through data intelligence. This translates into continuous improvements towards more sustainable industrial processes, enhanced and customized treatments, and the development of new services from the lab to the hospital nearby or the street downstairs. Combining the real and the virtual world thus leads to new ways of inventing, producing, treating, and learning.

Indeed, we see a true Industry Renaissance emerging worldwide: virtual experiences are the new books. They revolutionize our relationship with knowledge, science, and industry just like the printing press did in the fifteenth century. By removing the gap between experimentation and learning, they allow everyone to access knowledge and know-how. New economic laws arise, calling for new business models where the value of virtual assets largely exceeds the value of real assets. The large majority of investments are intangible, in the form of intellectual property, data, and knowledge. Even tangible physical investments, like bridges, factories, or
hospitals, come with their virtual twins, opening new possibilities for the operation of these assets through their full lifecycle. People at work can learn continuously and at scale. New jobs emerge. Traditional organizations are displaced towards more fluid collaboration models. Investing in virtual universes is the most valuable way to create sustainable paths for the future. Tomorrow's industry leaders will not be those with the most automated production systems, but those who build a culture of knowledge and know-how to reveal and train the workforce of the future. This opens up a new world of possibilities, creating additional value in spaces that were constrained by zero-sum games. Dealing with increasing pressure linked to resource scarcity and climate change, our societies can leverage the power of virtual twins to move towards a more sustainable economy.

The digital world is becoming less and less virtual and more and more real. The virtualization of society requires new scientific fundamentals to provide people with new experiential solutions based on the highest levels of trust and services. Once you agree to live in an environment that combines the real and the virtual worlds, you have to accept the legal conventions of the virtual world. Take radio and mobile phone frequencies, for example. These are invisible assets, but they're governed by land, air, and maritime borders. So why not do the same thing with data? Take, for example, a private hire car operator. The customer says, "I am here, and I want to go there." Even before the ride happens, everything is set up virtually. Back in the "old days," we'd have to make a phone call. In today's world, something invisible represents exactly what's to be achieved. It's important to know who keeps a trace of this. Controlling digital sovereignty is not a technical problem, it is a question of political will. It is necessary to organize, under the responsibility of the state, a wireless world which is governed by concrete laws. In this digital world, our states must establish a legal and fiscal framework as they have done in the physical world.

New ecosystems are emerging to tackle these challenges, crossing the public-private divide. Governments and industries will have to work together to jointly invent new ways of living. "Investing together" will be the keyword – as scale effects are at the core of the virtual economy. Indeed, this deep transformation of industry and society requires a shared understanding of digital twins, the associated technologies, and their consequences. Virtual experiences make the invisible become visible. Digital twins are the environment and infrastructure of the twenty-first century for political, scientific, and business leaders to make informed decisions for the future.

Bernard Charlès
Vice-Chairman and Chief Executive Officer, Dassault Systèmes
Vélizy-Villacoublay, France
Contents of Volume I
Part I Introduction

The Digital Twin: What and Why? (p. 3)
Noel Crespi, Adam T. Drobot, and Roberto Minerva

The Business of Digital Twins (p. 21)
Larry Schmitt and David Copps

The Dimension of Markets for the Digital Twin (p. 65)
Max Blanchet

Digital Twins: Past, Present, and Future (p. 97)
Michael W. Grieves

Part II Technologies

Digital Twin Architecture – An Introduction (p. 125)
Ernö Kovacs and Koya Mori

Achieving Scale Through Composable and Lean Digital Twins (p. 153)
Pieter van Schalkwyk and Dan Isaacs

The Role of Digital Twins for Trusted Networks in the "Production as a Service" Paradigm (p. 181)
Götz Philip Brasche, Josef Eichinger, and Juergen Grotepass

Integration of Digital Twins & Internet of Things (p. 205)
Giancarlo Fortino and Claudio Savaglio

Demystifying the Digital Twin: Turning Complexity into a Competitive Advantage (p. 227)
Tim Kinman and Dale Tutt

Data and Data Management in the Context of Digital Twins (p. 253)
Tiziana Margaria and Stephen Ryan
Hybrid Twin: An Intimate Alliance of Knowledge and Data (p. 279)
Francisco Chinesta, Fouad El Khaldi, and Elias Cueto

Artificial Intelligence and the Digital Twin: An Essential Combination (p. 299)
Roberto Minerva, Noel Crespi, Reza Farahbakhsh, and Faraz M. Awan

A Graph-Based Cross-Vertical Digital Twin Platform for Complex Cyber-Physical Systems (p. 337)
Thierry Coupaye, Sébastien Bolle, Sylvie Derrien, Pauline Folz, Pierre Meye, Gilles Privat, and Philippe Raïpin-Parvedy

Cybersecurity and Dependability for Digital Twins and the Internet of Things (p. 365)
Vartan Piroumian

Infrastructure for Digital Twins: Data, Communications, Computing, and Storage (p. 395)
Flavio Bonomi and Adam T. Drobot

Digital Twin for 5G Networks (p. 433)
Marius Corici and Thomas Magedanz

Augmented Reality Training in Manufacturing Sectors (p. 447)
Marius Preda and Traian Lavric

Digital Twin Standards, Open Source, and Best Practices (p. 497)
JaeSeung Song and Franck Le Gall

Open Source Practice and Implementation for the Digital Twin (p. 531)
Stephen R. Walli, David McKee, and Said Tabet
Contents of Volume II
Part III The Digital Twin in Operation

Welcome to the Complex Systems Age: Digital Twins in Action (p. 559)
Joseph J. Salvo

Physics in a Digital Twin World (p. 577)
Jason Rios and Nathan Bolander

Operating Digital Twins Within an Enterprise Process (p. 599)
Kenneth M. Rosen and Krishna R. Pattipati

The Digital Twin for Operations, Maintenance, Repair and Overhaul (p. 661)
Pascal Lünnemann, Carina Fresemann, and Friederike Richter

Digital Twins of Complex Projects (p. 677)
Bryan R. Moser and William Grossmann

The Role of the Digital Twin in Oil and Gas Projects and Operations (p. 703)
Steve Mustard and Øystein Stray

Part IV Vertical Domains for Digital Twin Applications and Use Cases

Digital Twins Across Manufacturing (p. 735)
Eric Green

Leading the Transformation in the Automotive Industry Through the Digital Twin (p. 773)
Nand Kochhar

Digital Twins in Shipbuilding and Ship Operation (p. 799)
Russ Hoffman, Paul Friedman, and Dave Wetherbee
Digital-Age Construction – Manufacturing Convergence (p. 849)
Sir John Egan and Neculai C. Tutos

Thriving Smart Cities (p. 901)
Joel Myers, Victor Larios, and Oleg Missikoff

Digital Twins for Nuclear Power Plants and Facilities (p. 971)
David J. Kropaczek, Vittorio Badalassi, Prashant K. Jain, Pradeep Ramuhalli, and W. David Pointer

Digital Twin for Healthcare and Lifesciences (p. 1023)
Patrick Johnson, Steven Levine, Cécile Bonnard, Katja Schuerer, Nicolas Pécuchet, Nicolas Gazères, and Karl D'Souza

The Digital Twin in Human Activities: The Personal Digital Twin (p. 1045)
Chih-Lin I and Zhiming Zheng

Digital Twin and Cultural Heritage – The Future of Society Built on History and Art (p. 1081)
Olivia Menaguale

Digital Twin and Education in Manufacturing (p. 1113)
Giacomo Barbieri, David Sanchez-Londoño, David Andres Gutierrez, Rafael Vigon, Elisa Negri, and Luca Fumagalli

Part V Conclusion

Future Evolution of Digital Twins (p. 1137)
Roberto Saracco and Michael Lipka

Societal Impacts: Legal, Regulatory and Ethical Considerations for the Digital Twin (p. 1167)
Martin M. Zoltick and Jennifer B. Maisel

The Digital Twin in Action and Directions for the Future (p. 1201)
Noel Crespi, Adam T. Drobot, and Roberto Minerva

Index (p. 1219)
Part I
Introduction
The Digital Twin: What and Why? Noel Crespi, Adam T. Drobot, and Roberto Minerva
Abstract  The progress of communications, processing, storage, and sensing capabilities is increasingly enabling the Digital Twin approach as a means towards digital transformation. Different vertical industries are more and more implementing solutions that are Digital Twins or are inspired by the concept. These implementations demonstrate how to move from a physical world to higher and more convenient levels of softwarization. It is important to describe what the Digital Twin is and why it is important as a technological solution and as a business enabler. This chapter introduces some of the current trends in the usage of the Digital Twin and the actors that should be interested in its applications. The usage of the Digital Twin requires a considerable technological platform and related skills for a successful implementation and usage. As such, there are risks and best practices to consider in order to realize the expected benefits of this approach. The chapter offers a perspective on how to build Digital Twin based solutions and on the steps needed to bring the Digital Twin into the mainstream of many industries.

Keywords  Artificial intelligence · Data management · Data modeling · Internet of things · Digital twin · Product life cycle
N. Crespi · R. Minerva (*)
Telecom SudParis, Institut Polytechnique de Paris, Palaiseau, France
e-mail: [email protected]

A. T. Drobot
OpenTechWorks Inc., Wayne, PA, USA

1 Introduction

The Digital Twin is a concept that has emerged in several industrial contexts over several years. It takes on different shapes and has been supported by various technologies and tools (from simulations to agents, from graphical representations to service platforms). There are many different definitions of the Digital Twin (DT)
(e.g., [1, 2]). A simple Google Scholar search can return thousands of different definitions, conceptualizations, or models of this concept. One straightforward way to examine the very essence of the Digital Twin is this pragmatic perspective: "if you have enough data, you can model a DT and then move from real to digital" (Michael Grieves). However, there is still a need to understand why it is so difficult to converge towards a generally accepted definition (or set of definitions). One initial consideration is that the "concept" of modelling a physical object in software is present in different application domains, and different modelling tools and best practices supported it long before the term "Digital Twin" was coined. The goal of this chapter is not to sort out this debate; instead, it is to provide a sense of the importance and the benefits that a Digital Twin approach can bring to different industries and application domains.

These questions were codified by George Heilmeier, who was a director of DARPA in the late 1970s and who had an enviable record of promoting breakthrough technologies and successfully managing important innovations. He was responsible for the invention of liquid crystal displays (LCDs), the staple of today's displays, ranging from large high-resolution TVs to the billions of smartphone screens that we take for granted. He also oversaw major efforts that created the space-based lasers we use today to map the Earth, strategically important stealth aircraft for the military, and many of the foundations of artificial intelligence that are blooming today [3, 4]. We start with a number of essential questions about the Digital Twin, beginning with the simple questions that George Heilmeier stated in his "catechism", because they were the filter that allowed him to see what is important and why. This (hopefully) explains why Digital Twins are worthy of attention and why they will change common practices in industry, in the public sphere, in academia and research, and in the consumer sector. Our bet is that the changes that Digital Twins bring are fundamental and long lasting.
2 What Is a Digital Twin and Why Is It Important?

As a starting point, a Digital Twin (DT for short) is considered to be a contextualized software model of a real-world object. The real-world object is typically intended as a physical object, but conceptual objects can also have their digital representation. A contextualized model means that the behavior of the physical object can be replicated in software. As such, it can be studied and analyzed, and its behavior predicted under the "rules" that govern the contextualized environment in which the object is operating. As an example, one can consider the representation of a satellite under the laws of physics and the knowledge of the other objects orbiting around the planet. Obviously, the contextualization and its modeling can be simple, or they can consider the many different factors that can affect the behavior of the physical
object. The relationship between the physical object and its software counterpart is determined and limited by the selected properties. The properties determine what specific aspects of the physical object are predominant and important with respect to the context. The selection of properties determines how closely the model follows the real behavior of the physical object (i.e., the effectiveness of the modeling, or how well the software model reflects the actual behavior of the physical object under varying circumstances). Things can easily and rapidly get more complex if the components of a physical object are taken into consideration. The objects, as well as their relationships and interdependencies, have to be modelled and represented. These may be very complex, and the complexity can increase depending on the environment in which these objects operate. Consider, for example, the components of a building, how they relate to each other, and what constraints and laws should be reflected by the context. This book, in different chapters and with a focus on several different application domains, provides examples and practical representations of properties and relationships between physical objects and their software counterparts, and between Digital Twins themselves. The contexts refer to specific application domains in which the DTs are represented and operate.

The importance of the representation relies on the abstraction capabilities offered by the supporting modelling capabilities. If a model provides an accurate description of the physical object and its behavior in the environment, then the DT can "replace" the physical object in the design, testing, and operational phases. This capability provides a way to understand the behavior of a product/object under normal usage and in high-stress situations, identifying critical points in the design, in the materials, and for the intended usage. The more effective and precise the modeling, the more the DT can be used to understand the behavior of the physical object in particular situations. This could lead to savings in the design phase (e.g., the experience from previous products can be capitalized on by the modelling capabilities), in the testing phase (stressing the software representation is a matter of processing power), and in anticipating any unexpected behavior during the operational phase. In many industrial sectors, these capabilities are highly valuable and can lead to a successful integration of Digital Twins.

The Digital Twin found its "nursery" and its initial applications in the manufacturing industry. However, the effective representation of physical objects by software is of paramount importance in other application domains as well. For instance, in education, the availability of Digital Twin capabilities may allow students or practitioners to perform experiments on logical representations that react and behave as if they were physical objects. Surgeons could practice difficult operations under specific simulations without necessarily operating in the field, and engineers-in-training could learn how to operate key systems without the risk of a real impact. This book presents many different application domains in which the DT approach is relevant and successful. As stated earlier, the Digital Twin definition (except for the concept expressed by the availability of enough data to represent a physical object) is a subject of discussion within different communities of users and practitioners.
Contextualized and specialized definitions of the Digital Twin will be introduced and considered in
specific chapters. These definitions will not necessarily converge towards a unified definition; this book is not about "the definition of the universal Digital Twin". However, they are instrumental to the application and usage of certain DT properties and capabilities in relation to their specific application context. Instead, this book is intended as a sort of "guide" to help readers focus on relevant topics and technologies of interest and on how the DT approach is successfully being used to resolve pressing problems. The goal of this book is to present the many concepts and technologies that are at the foundation of the concept of the Digital Twin. To do so, there is a need to analyze and understand the long technological processes and evolution paths that have led to the current effervescent activities and results related to the DT. On the other side, there is a need to reason about the possible evolution of this concept in the mid to long term. This approach offers both perspective and depth on the topics presented and discussed throughout the book.
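To make the idea of a contextualized software model concrete, the sketch below is a deliberately minimal Python illustration of the satellite example mentioned above; all names (SatelliteTwin, ingest_telemetry, the orbit values) are hypothetical and are not taken from the book or from any DT framework. It shows the two ingredients discussed in this section: a small set of selected properties fed by real measurements, and the contextual rules (here, two-body gravity) that let the software counterpart replicate and predict the physical object's behavior.

```python
from dataclasses import dataclass, field

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter [m^3/s^2]

@dataclass
class SatelliteTwin:
    """A minimal 'passive' Digital Twin: selected properties plus contextual rules.

    Only the properties relevant to the chosen context (orbital mechanics)
    are modelled; everything else about the physical satellite is ignored.
    """
    position: list                                 # [x, y, z] in metres, from telemetry
    velocity: list                                 # [vx, vy, vz] in m/s, from telemetry
    history: list = field(default_factory=list)    # collected states over time

    def ingest_telemetry(self, position, velocity):
        """One-way link: the physical object pushes its measured state."""
        self.position, self.velocity = list(position), list(velocity)
        self.history.append((list(position), list(velocity)))

    def predict(self, dt, steps):
        """Replicate behaviour under the context's rules (two-body gravity),
        without touching the physical satellite."""
        pos, vel = list(self.position), list(self.velocity)
        for _ in range(steps):
            r = sum(c * c for c in pos) ** 0.5
            acc = [-MU_EARTH * c / r**3 for c in pos]
            vel = [v + a * dt for v, a in zip(vel, acc)]
            pos = [p + v * dt for p, v in zip(pos, vel)]
        return pos, vel

# Usage: feed real measurements, then study the twin instead of the satellite.
twin = SatelliteTwin(position=[7.0e6, 0.0, 0.0], velocity=[0.0, 7.5e3, 0.0])
twin.ingest_telemetry([7.0e6, 0.0, 0.0], [0.0, 7.5e3, 0.0])
print(twin.predict(dt=1.0, steps=600))  # predicted state ten minutes ahead
```

Selecting a richer set of properties (attitude, thermal state, fuel) or a richer context (atmospheric drag, the other orbiting objects mentioned above) would tighten the match between twin and physical object, at the cost of model complexity.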
3 How Is the Digital Twin Used Today, and What Are the Limits of Current Practice? The Digital Twin is finding applications in several fields, from manufacturing to cultural heritage exploitation. Figure 1 shows some of the problem domains directly addressed within the book. Others are possible and will likely emerge in the near future.
Fig. 1  Digital application domains covered in this book: Production (Manufacturing, Naval Shipbuilding, Automotive); Energy (Oil and Gas, Nuclear Reactor Design, Power Plants); Buildings (Construction); IT Applications (Internet of Things Systems, Complex Cyber-Physical Systems, Smart Cities); Education (Physics, Education in Manufacturing, Virtual Reality); Health Sciences (Health Care, Life Sciences); Society (Personal Digital Twin, Cultural Heritage)
Each domain makes use of the DT concept in different ways. Hence, the current approach to the use of DTs is extremely fragmented and very specialized. Many enterprises and research initiatives have used the DT approach as an enabler for reaching specific goals. DTs related to specific products or solutions have emerged, and they have proved to be extremely useful for improving the life cycle of a product or for supporting the resolution of very complicated real-world situations. For example, the emergence of virtual patient solutions [5] is an indication of the possibility of applying the concept to life sciences and practices. The DT has found applications in the large domain of the Internet of Things (IoT), becoming an interesting solution for representing complex environments and sensing/actuation capabilities [6, 7]. For instance, the DT is used for the optimization of large sensor networks and in the representation of smart cities [8]. These and other initiatives in several domains have paved the way to a wide usage of the concept in many businesses and environments. Due to the large extent of its applicability, the DT approach is considered a generally applicable solution for a range of problem domains. Despite this broad applicability, DT practice is still almost exclusively applied "vertically" to problem domains, and it lacks a reusable framework for interactions among different domains and applications. The concept of "portability" remains to be addressed and proven.

The DT approach provides the opportunity to collect and organize data around a physical object. This is an important aspect; historical data about the behavior of an object under different circumstances can be used to better understand and improve that product/object. The historical data also represent the "knowledge" and the choices made in creating and building a product. The analysis and the exploitation of this information can improve the evolution of artifacts. DTs are not only strongly associated with data, they are also an important tool for making advances in Artificial Intelligence. Predicting the behavior of physical objects by means of Deep/Machine Learning technologies is one obvious possibility. However, DTs can use additional AI-related technologies. DTs may need to understand the situation in which they are operating and to put into action intelligent choices and behaviors to reach their goals. This offers a perfect playground for "reasoning and understanding" techniques. The optimization of goals and results between cooperating DTs is another challenging issue for AI advances. Autonomous DTs require a high level of support from real-time AI in order to operate properly in an environment, especially when in touch with humans.

More recently, the increasing interest in Virtual Reality and in the Metaverse has led to an integration with the Digital Twin [9]. However, alongside the similarities and analogies, one major difference must be considered: a Digital Twin is strongly associated (by means of data) with measurable characteristics of the physical world. For Virtual Reality, this link is not always necessarily true. This link to measurable characteristics has several relevant implications in real-life usage. A Digital Twin reflects what is happening in the real world (genuine and veritable), while Virtual Reality objects may represent plausible and credible events or facets of reality (verisimilitude). In between these concepts, certain events and things may change. The
two distinct concepts may be useful, for instance, for representing physical events and objects in simulations. The book aims to identify and present the important and crucial properties and usages of the DT in order to generalize the definition/perspective and to present some common ground and vocabulary with which to fully exploit the different facets of the DT.
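As a small illustration of how the data collected and organized by a DT can feed prediction, the sketch below fits a simple trend to the history accumulated by a hypothetical pump twin. It deliberately uses plain Python rather than the Deep/Machine Learning pipelines mentioned above, and every name and threshold in it is an assumption made for the example, not something taken from the book.

```python
from statistics import fmean

class PumpTwin:
    """Hypothetical 'vertical' Digital Twin of an industrial pump.

    It only accumulates time-stamped vibration readings and uses this
    historical data to anticipate behaviour, in the spirit of the
    data-driven prediction discussed above.
    """
    def __init__(self):
        self.samples = []  # list of (t_seconds, vibration_mm_s)

    def record(self, t, vibration):
        self.samples.append((t, vibration))

    def trend(self):
        """Least-squares slope of vibration over time (mm/s per second)."""
        ts = [t for t, _ in self.samples]
        vs = [v for _, v in self.samples]
        t_bar, v_bar = fmean(ts), fmean(vs)
        num = sum((t - t_bar) * (v - v_bar) for t, v in self.samples)
        den = sum((t - t_bar) ** 2 for t in ts)
        return num / den

    def predicted_vibration(self, t_future):
        """Extrapolate the fitted trend to a future time."""
        ts = [t for t, _ in self.samples]
        vs = [v for _, v in self.samples]
        return fmean(vs) + self.trend() * (t_future - fmean(ts))

# Usage: historical data accumulated by the twin drives a maintenance decision.
twin = PumpTwin()
for hour, vib in enumerate([2.0, 2.1, 2.3, 2.6, 3.0]):
    twin.record(hour * 3600, vib)
if twin.predicted_vibration(t_future=24 * 3600) > 4.5:   # hypothetical alarm threshold
    print("schedule maintenance before the next shift")
```

A real deployment would replace the linear trend with the learning techniques discussed above, but the pattern is the same: the twin's historical data is the raw material for understanding and anticipating the physical object's behavior.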
4 What Is New in the Approach to Digital Twins and Why Do You Think It Will Be Successful?

The Digital Twin concept is an enabler for the transition from "atoms to bits" and vice versa. It is a means for creating a continuum between the physical and virtual worlds. This is a further step towards the softwarization of many industries and domains. The ability to move back and forth from real to virtual will lead to new services and most likely to new business opportunities. The game-changing novelty of the Digital Twin concept is that it offers the ability to transform physical objects into programmable entities. The importance and the success of the Digital Twin concept rely on the possibility of moving many aspects of products, objects and entities from the physical to a software domain. This will have at least two effects: to decrease the costs of development and testing while increasing the number of "situations" that can be considered, assessed, and evaluated; and to extend/augment physical objects with software capabilities that will substantially enrich them. In other words, the Digital Twin is an enabler for softwarization and servitization, which can have a profound impact on several markets and problem domains. DTs can be programmed, controlled, and orchestrated towards desired behaviors, and they can be "augmented" to add new functions and features. Two types of DTs are possible:

- Passive representations of physical objects, i.e., data about real objects that support the modelling and the representation by means of logical replicas. These form a sort of one-way link in which the physical object provides data representing its status and behavior, and the DT represents some aspects of it to the application. Passive representations are utilized to study and predict the working of a physical object under particular conditions; and
- Active representations of physical objects, i.e., there is two-way communication and the DT can change/affect the behavior of the physical object. These representations allow a DT to influence/drive the behavior of a physical object depending on optimizations and improvements in the physical processes.

The most common approach is to develop a DT (usually a passive DT) as an early means to understand and predict the reactions of a physical object under certain conditions. This approach is usually very specific, i.e., it is based on an analysis of the major features and characteristics of the physical object in a set of target
environments and conditions. The design and implementation consider two major aspects: the complexity of applying the DT in a specific problem domain, and how software and hardware technologies can support the implementation of the concept. The success of the DT will then be evaluated with respect to two main objectives: how well the DT represents the salient features and specificities of a problem domain, and the possibility of cross-fertilizing the DT definition by considering additional aspects of the problem domain or intersecting aspects of other problem domains. For instance, the DT of an artistic artifact can model the artifact from a physical perspective (e.g., for studying it and preserving it from physical elements like humidity, heat, etc.). This representation can be extended by introducing the security perspective (e.g., alarms sound if the artifact is touched or moved) or even a more articulated perspective that considers the artistic representation and its value (a statue of a king, in baroque style). Being able to have integrated representations/views of the same object with respect to complementary perspectives of different application domains is a major advantage offered by a DT. DTs can progressively aggregate data in order to represent different facets of the same object, thereby creating a holistic representation accessible to several applications. From this perspective, a DT is a means to collect and organize data related to a specific object under different conditions and from different viewpoints (or, even better, ontologies), and to provide descriptions and interpretations of the data according to different needs. The DT can thus be an enabler for data interchange and interoperability, offering the capability to describe the salient features of physical objects to different applications. This integrated representation will be instrumental to capturing the quintessential features, technologies, approaches and mechanisms underlying the successful implementation and operation of DT platforms and applications, offering a new approach in contrast to a compartmentalized/fragmented effort. One of the reasons for its success is that it is a bottom-up approach that incrementally builds a common perspective. It does not "superimpose" a "meta-definition" of the DT, but instead leverages specific views and integrates them into a common collection of data. The commonalities and strong points of the DT, as well as its specific successful features and any lacunae, will emerge from the technical and, especially, the vertical use case chapters. In the long run, the emerging requirements, experiences, and applications in different sectors will help to shape and lead the generalization of the DT definition, aiming at interoperability and programmability. This concept will be used in different realms to program and predict the behavior of very complex and interrelated systems (from large factories to smart cities) in order to govern and optimize the usage of scarce physical resources. The possibility to move from costly physical implementations to efficient and low-cost softwarized systems will be a key aspect of the success of the DT on a large scale.

The Digital Twin builds on two important aspects in relation to Artificial Intelligence (AI) evolution. On one side, a DT is based on a data representation of a physical object. This collection of data can be used and exploited to identify patterns and the expected behavior/actions/malfunctions of the physical object.
On the
other side, a DT can represent the behavior and the goals of a physical object within a particular environment. As such, it can be extended with unique capabilities to reach its goals within its environment and to find an equilibrium with other DTs (and objects) acting on the same environment. Current AI developments can easily find applications in a DT system, and they can be reinvigorated by the DT approach itself. DT modelling and the description of the environment by means of rules, semantics and other techniques can reinforce the use of AI approaches in order to more fully understand the situation and optimize the operations of a DT (or groups of them) in an environment.

A foreseeable evolution of the Digital Twin approach is the aggregation/disaggregation of its components (represented as smaller DTs). The composition/decomposition process must deal with the specific problem domain representation of the sub-components within the larger context of the aggregated object. An airplane can be subdivided into different subsystems, each of which has a specific reference environment (e.g., the hydraulic system, the electrical system, and others). Their specificities must be recomposed and aligned within the model of the entire object. This process also contributes to the ability of different DTs to work together within an environment and to proactively look for interactions and relationships with other objects populating the problem domain space. This sort of networking of DTs can help to gradually and automatically explore the problem domain and to tune and regulate the behavior of the present objects to optimize the usage of resources and the tasks executed within the problem domain. One example could be a smart city, represented with many different DTs (cars, buildings, energy infrastructure, people), that is looking for an optimal solution with respect to CO2 emissions based on people's needs for goods, services, and transportation and the associated costs to the environment. Networks of DTs will move towards the representation of Complex Systems, and the aggregation of autonomous DTs into large environments will provide the ability to better understand how to manage these large environments.
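The distinction between passive and active representations, and the idea of composing a larger twin from smaller ones, can be sketched as a few Python classes. This is only an illustrative structure under assumed names (PassiveTwin, ActiveTwin, CompositeTwin); it does not correspond to any standardized DT interface.

```python
from abc import ABC, abstractmethod

class PassiveTwin(ABC):
    """One-way link: the twin only mirrors the state pushed by the physical object."""
    def __init__(self, name):
        self.name = name
        self.state = {}

    def ingest(self, measurements: dict):
        """Update the mirrored state from telemetry sent by the physical object."""
        self.state.update(measurements)

    @abstractmethod
    def predict(self) -> dict:
        """Estimate future behaviour from the mirrored state (study/analysis only)."""

class ActiveTwin(PassiveTwin):
    """Two-way link: the twin may also send commands back to the physical object."""
    def __init__(self, name, actuator):
        super().__init__(name)
        self.actuator = actuator  # callable that bridges back to the physical side

    def optimize(self):
        """Use the prediction to influence/drive the physical object's behaviour."""
        self.actuator(self.predict())

class CompositeTwin(PassiveTwin):
    """Aggregation of sub-twins, e.g. an airplane's hydraulic and electrical systems."""
    def __init__(self, name, parts):
        super().__init__(name)
        self.parts = parts  # list of PassiveTwin (or ActiveTwin) sub-models

    def ingest(self, measurements: dict):
        # Route each subsystem's measurements to the corresponding sub-twin.
        for part in self.parts:
            part.ingest(measurements.get(part.name, {}))

    def predict(self) -> dict:
        # Recompose the sub-models into a view of the whole aggregated object.
        return {part.name: part.predict() for part in self.parts}
```

A smart-city twin could follow the same pattern, with cars, buildings, and energy infrastructure as sub-twins whose predictions are reconciled at the composite level.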
5 Who Cares About the Digital Twin? If DTs Are Successful, What Difference Will They Make?

The application of the DT concept allows different industries to move towards, progress in, or accelerate the transition to softwarization. There will be economic gains and process improvements in terms of reduced costs for the creation and operation of physical objects, as well as the possibility to increase efficiency and optimization capabilities. On one side, softwarization will allow the optimization of the different processes around the design, testing and operation of products and objects. The overall quality of a design and its construction can improve by exploiting the experiences of the past and by testing products in new situations.
On the other side, users will be offered a high level of servitization of products. Different business models will promote and ensure the creation of highly personalized services on top of the basic products or objects. The DT will make it possible to better address the needs and requirements of users over time, and products/objects will be more and more customized to these requirements. This will bring value into the interaction between humans and products and between business entities through very complex value chains. Such a generalized view of the DT is instrumental to understanding its merits and drawbacks with respect to specific and emerging application domains.

All industries are potentially interested in the DT concept. However, there are specificities and interests that should be considered in each specific field of application. Certain industry segments are more mature and thus ready to embrace the Digital Twin path. The DT approach and its softwarization aspects require a strong background in software development and in the management of large complex projects. In addition, a DT implementation may require a substantial infrastructure to run the DT software, as well as the processes and activities to manage the infrastructure and guarantee its stability and performance. Not all industries are able to do this, and so the DT approach will first take hold in industries where Information Technologies are common and then progressively advance to reach less technologically advanced contexts.

Starting a DT project is a major endeavor, and companies need to plan and be adequately prepared and determined to reach results. The development teams determine the major points to check in design (what to model) and implementation (how to realize it). Operational views and the possibility of exploiting the DT within the company are two aspects that should be studied and well-defined. The development teams also need to have a sense of how difficult an implementation and its management will be and what steps to consider for a successful result. An additional factor is the need to identify missing or weak points and to enhance the approach towards the construction of a fully-fledged and sustainable DT solution and platform. These activities may involve internal and external teams created by the enterprise, including platform developers and technology providers, ranging from AI experts to cloud providers and integrators. The DT approach and the identified solutions/features/operational steps can be of great help for enterprises, experts and practitioners that are looking for advanced solutions to stringent business, technological and research issues. The book aims at providing such users with a methodological base with which to correctly evaluate and judge the advantages, the scope and the long-term perspective of using a DT-based solution for their pressing problems.

When the DT approach has gained general acceptance, more focus will be placed on the interoperability/portability of individual solutions. New applications bridging different industries will benefit from this, and it will offer the possibility of creating new compelling services. The DT for controlling a vehicle will also be usable within Smart City applications, increasing the granularity of the data and the awareness of the situation. These possibilities will support the optimization of physical resources according to individual and societal goals.
The individual DT
will be interoperable, and it will offer interfaces and data to several applications and views. It will be fundamental to regulate the ownership of data and of the DT itself in order to preserve the consistency of the objects and the privacy of owners and users. The difference between enterprises that use a DT and those that do not lies essentially in the ability to better understand and evaluate their products with respect to their intended usage, and with respect to their capabilities in extreme situations relative to customers' expectations. These capabilities will also influence product evolution and the introduction of new products, and the experience gained via the DT can be exploited for the creation of even newer propositions.
6 What Are the Risks in Using the Digital Twin?

The Digital Twin approach can bring several advantages to enterprises and communities; however, it comes with several risks. The Digital Twin is made to reflect the behavior of objects in the real world. As such, it may be a complex task to decide which crucial aspects of reality need to be represented by the software. The development and structuring of the approach within an enterprise, i.e., the alignment between the representation of reality and internal processes and structures, may require a substantial amount of time and design before it becomes effective. Skills and best practices may be missing within the company, and they may not be easily or quickly available. Adequate time and the will to achieve an effective representation, with its associated well-defined internal processes, are essential. The more deeply a company is involved in this effort, the more complexity and difficulty can arise. To be successful, the DT approach requires time and perseverance.

The infrastructure needed to collect, manipulate, and represent the data may or may not be available. Competent IT teams, experts in the problem domains, and programmers are all needed in order to create an actionable infrastructure that brings together many different technologies (communications, IT systems, specific problem domains, the IoT and the like). Even though some general-purpose solutions are emerging in the cloud (Amazon, Azure and others), they require substantial effort and skills to build a valuable DT. If the DT approach is at the core of an enterprise, then fully allocating highly sensitive data, representations, and models in the cloud may not be the best option. On the other hand, creating a large infrastructure for managing all the needed data and procedures internally may be too great an effort for many companies. This issue, linked as it is to the previous one, may make the difference between a go and a no-go decision.

The DT is in principle applicable to a plethora of substantial problems. Not all of them can be efficiently and practically solved with a DT-based approach. This book sets out to provide a set of guidelines and steps for determining whether a DT can be a viable and beneficial solution to the problems faced by a company in a specific context. The risks are that, due to the current fragmentation of the DT landscape,
general applicability is not yet emerging, and thus the analysis carried out throughout the book may still be premature for certain application domains and for a large-scale generalization of the problem and the concept of the DT. To mitigate this risk, the authors have combined different perspectives and approaches to specific problems. This approach will, at least, provide a well-thought-out set of analyses with specific benefits. Generality may not necessarily be derived.

At the security and privacy level, a DT represents and collects a wealth of information that has great value for the owner of the physical object. In fact, the DT represents the behavior, i.e., the usage of the physical object, as well as the knowledge about how to build and operate it. All this information is valuable from a competitive standpoint as well as from a personal perspective. How a person uses an object brings a wealth of insights for producers and vendors of that object, and at the same time the usage information creates a deep profile of the users and their preferences and orientations. The definition of "ownership" of a DT, as well as of the collection of its behavioral aspects, is an issue that must be addressed in order to guarantee security and protection to users and owners. Software programmability and control is another hot topic, as the taking of control of an object by non-authorized people or organizations is a major risk that can create disruption and severe accidents affecting people and processes. It is easy to envisage the possibility of DTs being used for tampering with the ordinary life of people or with specific environments. Moving DT usage outside of very controlled and confined environments must be accompanied by a solid and robust security framework able to detect malicious behaviors or dangerous actions initiated on behalf of owners or hackers.
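Because an active DT can push commands back to its physical object, the command path is an obvious place to enforce the kind of security framework discussed above. The fragment below is only a schematic illustration of that idea, with invented roles, limits, and class names; a production framework would add authentication, audit infrastructure, and anomaly detection well beyond this sketch.

```python
class UnauthorizedCommand(Exception):
    pass

class GuardedActuator:
    """Wraps the channel from a Digital Twin back to its physical object so that
    only commands issued by authorized parties, within safe bounds, are executed."""

    def __init__(self, send_to_device, allowed_roles, safe_limits):
        self.send_to_device = send_to_device      # callable reaching the physical object
        self.allowed_roles = set(allowed_roles)   # e.g. {"owner", "operator"}
        self.safe_limits = safe_limits            # e.g. {"valve_opening": (0.0, 1.0)}
        self.audit_log = []                       # every attempt is recorded

    def execute(self, command: dict, issuer_role: str):
        self.audit_log.append((issuer_role, dict(command)))
        if issuer_role not in self.allowed_roles:
            raise UnauthorizedCommand(f"role '{issuer_role}' may not act on this twin")
        for key, value in command.items():
            low, high = self.safe_limits.get(key, (float("-inf"), float("inf")))
            if not (low <= value <= high):
                raise UnauthorizedCommand(f"'{key}'={value} is outside the safe range")
        self.send_to_device(command)

# Usage: a hijacked or misbehaving client cannot drive the valve beyond its limits.
actuator = GuardedActuator(send_to_device=print,
                           allowed_roles={"operator"},
                           safe_limits={"valve_opening": (0.0, 1.0)})
actuator.execute({"valve_opening": 0.4}, issuer_role="operator")   # accepted
# actuator.execute({"valve_opening": 5.0}, issuer_role="operator") # would raise
```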
7 What Resources Are Necessary to Develop Digital Twins to Maturity and to Use Them, and What Are the Benefits?

The Digital Twin approach covers the entire lifecycle of products and objects, from the design phase to the decommissioning phase. This approach needs to be supported by processes, data, and Information and Communications Technology (ICT) infrastructure throughout all the lifecycle phases. In addition, the DT approach should progressively extend to cover all the activities and knowledge of an enterprise. Such a holistic view is extremely complex and may be quite expensive to realize and support.
7.1 Building the Infrastructure

Building a DT platform is costly in terms of resources and time. Many enterprises are building specialized platforms serving specific application domains or their own specific business. Building these platforms is risky and requires a constant effort to improve, update, and consolidate the platform, which implies long-term operational
and personnel investment. However, the specific knowledge of an enterprise can be better exploited in this way and become an exceptional asset to leverage with respect to the competition. The building of these kinds of solutions aims at instilling the corporate knowledge into the components and processes of the DT platform in order to leverage them for production. The ability to reuse the lessons and experience of past productions can lead to better products and to savings in the process of creating new products.

On the other hand, some IT companies offer "general-purpose" platforms supporting some features and attributes of a generalized DT framework. These platforms must be personalized and tailored to the needs and expectations of the client enterprise. This approach frees the enterprise from the need to develop a platform and to constantly update its general components and mechanisms. However, specialization is still an important task to execute. This task can be slowed down by a platform's lack of functions and components, and creating them can be difficult and cumbersome for the user. This approach can also create a dependency of the user on the platform vendor: the platform will evolve and improve according to the time schedule of the vendor and not the needs of the user. The obsolescence of these platforms is another major drawback. A DT framework requires time to be developed, and most technological solutions tend to become obsolete in a short period due to rapid innovation in the software sector. These platforms must be constantly improved and upgraded to the state of the art of IT. In other words, an in-house solution needs to be supported by a strong IT team of developers and an equally important team of software experts capable of transforming the enterprise knowledge into components and processes supported by the platform.

The rise of generically reusable and effective DT platforms (typically based on IoT platforms) is still a major trend and the objective of many industries. However, the progress of IoT technologies and the consolidation of viable large platforms are revealing promising paths. There is the need to consolidate a set of usable functionalities and capabilities that can be extensively tailored and improved by users. While the time required to reach robustness, applicability and versatility of a DT solution varies, it typically takes several years and a major investment of resources to reach a satisfactory point. This explains why successful companies are not necessarily sharing their success stories, and even less their specific solutions. This proprietary behavior increases the entry costs for each new enterprise that begins a DT implementation process. The interoperability of DTs and the possibility of executing them on different platforms have not yet been addressed, nor has the possibility of creating a universal DT platform (like a sort of extended web). However, the availability of tailored and general-purpose platforms has the distinct advantage of allowing the creation of enterprise solutions and the exploitation of corporate knowledge and best practices for creating products or solutions. This can, in principle, allow years of experience and past product design and testing to be leveraged to ease the creation of new products.
7.2 Collecting the Data

Data are the essential enabler for a Digital Twin representation. Data must be collected, stored, and manipulated according to the requirements of the "representation" itself. Depending on the phase of the lifecycle, there may be changing requirements in how quickly the data should be available and how they should be processed. The processing of data for the purpose of representing and understanding the behavior of a physical object by means of a DT should be fast enough to adequately represent the object in the current situation (passive DT), and to give enough time to a "decision support system" and/or a human to decide and perhaps to intervene on the object itself (proactive DT). Especially in the case of proactive DTs, the timeliness issue is fundamental. Automatic intervention and correction must be executed accurately in order to identify the situation and the corrective actions that will optimize the processes, obtain the expected behavior, and implement the decision(s). Executing this process quickly and efficiently poses a huge challenge from an infrastructural and software perspective (AI algorithms), one that is not always possible to resolve. Three dimensions of the data challenge stand out:

- Data processing speed: Collecting data means curating, storing and reusing data when needed. This is a major challenge, because "historical data" are an integral part of the DT approach, and it takes time and resources to accurately format, organize and store the data. Historical data will continuously increase in size; if a DT is well "represented", the quantity of data generated per fraction of time can be considerable.
- Quantity of data processed and stored: Mechanisms must be created to efficiently curate and store the data and to retrieve them whenever needed. The quantity of data can grow exponentially, posing the issue of how much data to store and for how long. These decisions must be taken while considering the requirements of all the phases of a lifecycle.
- Dimension of the problem domain: The problems are typically complex, and the associated data can be overwhelming. An airplane may consist of hundreds of thousands of different pieces, each one studied, designed and operated with specific goals. Representing the entire problem domain data and storing the historical data may be very complex and could be impractical in some cases.

Data and their processing and storage activities must be well designed and implemented in order to make the DT approach feasible and rewarding. Not being able to process real-time data in a reasonable time, not being able to store and retrieve data, or having to discard still-needed data are common issues that could penalize the approach and its effectiveness. Scarcity of data is a major impediment to the realization of the DT approach, but an excessive quantity of data could be a disaster as well.
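One concrete way to act on the "how much data to store and for how long" question is a retention policy that keeps recent telemetry at full resolution and down-samples older history. The sketch below illustrates that policy only; the window sizes, field names, and class name are arbitrary assumptions for the example, not recommendations from the chapter.

```python
from collections import deque
import time

class TelemetryStore:
    """Bounded storage for a twin's historical data: recent samples are kept at
    full resolution, while only 1 of every N samples is retained long term."""

    def __init__(self, hot_maxlen=100_000, archive_every_n=60):
        self.hot = deque(maxlen=hot_maxlen)   # full-resolution, recent data
        self.archive = []                     # down-sampled long-term history
        self.archive_every_n = archive_every_n
        self._count = 0

    def append(self, sample: dict):
        sample = dict(sample)
        sample.setdefault("ts", time.time())  # add a timestamp if the source did not
        self.hot.append(sample)
        self._count += 1
        if self._count % self.archive_every_n == 0:
            self.archive.append(sample)

    def recent(self, seconds: float):
        """Full-resolution samples from the last `seconds`, e.g. for a proactive DT."""
        cutoff = time.time() - seconds
        return [s for s in self.hot if s["ts"] >= cutoff]

# Usage: ingest a burst of readings, then query the last minute at full resolution.
store = TelemetryStore(hot_maxlen=10_000, archive_every_n=10)
for i in range(100):
    store.append({"vibration_mm_s": 2.0 + 0.01 * i})
print(len(store.hot), len(store.archive), len(store.recent(60)))
```

The same trade-off can be made per lifecycle phase, for example keeping design and test data indefinitely while aging out routine operational telemetry.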
7.3 Building the Processes

The Digital Twin approach has been defined in strong association with product lifecycles. Its relationship with the definition and execution of steps within an enterprise's processes is clear, and it is obvious that the adoption of a DT approach deeply affects the internal and external processes of enterprises. The "measurability" of the entire process can be realized by considering the status of the Digital Twin. In the design phase, it may be possible to evaluate how many components are reused and how many new ones must be introduced. The choice of materials can be based on previous experience or on the possibility of exploiting newly conceived ones or those recently acquired by the enterprise. At the operational level, the number and types of issues encountered can help in improving and organizing the teams that sort out the issues. Some suggestions about the design can be fed back to the design team. All these possibilities need to be regulated and mapped onto processes that clearly exploit the capabilities of the DT approach. If the DT approach is pervasive in an enterprise, it can have a very large impact on the structure and the way of working within the enterprise. A careful design of the processes (considering the human aspects as well) is required.
7.4 Building the Attitude

The Digital Twin approach also needs to be supported by a change of attitude in the enterprise. The focus is on the "product" and how the different actors (primarily the customers) interact with it. The DT approach makes the product and its behavior central to each process, from design to customer support, and the entire way of operating and thinking about the enterprise is likely to be affected. In addition, the focus should shift from processes and functions to a more holistic view of what is going to be released to customers and how it will be used. This evolution can be somewhat of a culture shock, and "DT thinking" may be difficult at first. Such an approach encompasses measuring the actions taken to build the product, those taken to make it work properly, and ultimately the reactions of users to its behavior and function. Quantifying these aspects needs to be supported by top management and implemented by mid-level managers.
8 How Long Will It Take to Develop and Launch the Digital Twin Approach into Widescale Common Practice?

The DT already has a long history, even if its general applicability is still in its infancy. The strengthening and consolidation of DT approaches will continue through the entire decade and beyond. Meanwhile,
successful DT-based solutions and supporting platforms will emerge and act as catalysts for wider adoption. The availability of solid and extensible general-purpose platforms will make the difference for this development. If the basic mechanisms (e.g., modelling, communications, and data collection/management) can be put in place, then new ecosystems for the development of consistent solutions can take shape. These ecosystems will include the platform developers, the experts who specialize the platforms for distinct industries, the data experts who collect, format and manage the needed data, and the knowledge experts who specialize components, functions and data. Until that point is reached, "ad hoc solutions" will remain the large majority. Strong resistance to the move to more advanced platforms is anticipated, as the effort to migrate the knowledge base and the functionalities to newer platforms will be considerable. For certain enterprises, there is a risk that DT solutions will ossify, due to the stratification of solutions built on ad hoc platforms and the growing number of representations and amount of data supported by the platform itself; moving to modern solutions could be very expensive and time-consuming. One of the long-term objectives is to have interoperable platforms that can support the creation, design, improvement, and operation of DTs across cooperating enterprises (e.g., between the enterprise that assembles the products and those that provide the sub-components/composable DTs). As presented above, this will take a long time, but the implementation of products/objects as interoperable DTs will likely be common practice by the end of the decade. The success of the DT approach within a company depends on the effort that the company invests in making it successful. A partial approach to the DT may lead to ineffective results at a high cost, and some companies may never cross the threshold into a full DT attitude. Not fully embracing the DT approach may lead to dissatisfaction and a lack of perspective on its potential. However, companies that are heavily product- and process-oriented are already taking this path, and they will pursue it with adjustments appropriate both to the evolution of the technology and to their own enterprise cultures.
9 What Are the Mid-Term and Final "Exams" That May Verify That Digital Twins Have Met Their Goals?

In the mid-term, a few application domains will emerge as leading examples of DT applications. In particular, several Industry 4.0 initiatives will be strongly based on the notion of the DT, and they will promote the broad application of the DT concept along the entire lifecycle of products and factories. The other envisaged leading application domains are: e-health, in which a DT can find several fundamental applications (from predicting the evolution of diseases to the education and
reinforcement of health workers, up to the monitoring and care of the elderly); the Internet of Things, with specific applications ranging from smart homes and smart environments to large smart-city applications; and the automotive sector, which seems, from more than one perspective, to be an ideal application of the Digital Twin concept. Other vertical domains, including construction and building management, may emerge or consolidate their leadership in the usage of DTs. Another crucial aspect in the mid-term is the ability to decompose a DT into smaller components, each represented by a related DT. This composition/decomposition aims to represent the general behavior of a large and complex physical object (e.g., an airplane) as well as to monitor and control the constituents of the entire ensemble (e.g., engines, electrical systems, fuel systems and the like). The difficulty lies in combining all the components' behaviors in such a way as to reflect the actual behavior of the entire object (a minimal illustration is sketched at the end of this section). The mid-term objective in all of these application domains is to make available a composite DT representation within a complex context, and to provide robust and flexible support to the product lifecycle and compelling services to the users. The final exam will be to ensure the development of DTs for a broad range of products and to guarantee their interoperability across different platforms and their usability within different service offerings. Once that has been achieved, it will be possible to fully exploit the concept of "servitization", i.e., the integration of a product with a set of special services that will determine its success with users. Most likely, new business models based on "pay-per-use" will emerge for many products that are currently bought and owned directly by users. Autonomous vehicles are one example, where a DT-based application can offer the availability of "the vehicle" (perhaps shared with other users) on demand (in Europe, vehicles are stationary over 90% of the time, taking up parking space). The DT will be an enabler of new user experiences and of an improved use of devices and products. However, some organizations will not be able to fully exploit the DT approach because of a lack of commitment by management or because of resistance to doing things differently from the "good old way". This resistance, combined with the significant investment in technologies and processes needed to move to the DT, will limit the successful implementation of the "switch". Nevertheless, DT usage is progressively increasing in several industries, and it will likely continue to grow until it becomes the "norm".
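The composition/decomposition idea can be pictured with a short, purely illustrative Python sketch (the class and attribute names are invented for this example and do not come from any specific DT framework): a composite twin derives its view of the overall object from the states reported by its component twins.

class ComponentTwin:
    """Twin of a single constituent, e.g., an engine or a fuel system."""
    def __init__(self, name):
        self.name = name
        self.state = {}

    def update(self, measurements):
        self.state.update(measurements)  # mirror the latest field data

    def healthy(self):
        return self.state.get("fault_count", 0) == 0

class CompositeTwin:
    """Twin of the whole object, built from smaller twins."""
    def __init__(self, name, components):
        self.name = name
        self.components = components

    def healthy(self):
        # the composite reflects the combined status of its constituents
        return all(c.healthy() for c in self.components)

engine = ComponentTwin("engine")
fuel_system = ComponentTwin("fuel_system")
aircraft = CompositeTwin("aircraft", [engine, fuel_system])
engine.update({"fault_count": 1})
print(aircraft.healthy())  # False: one constituent twin reports a fault

The hard part glossed over here is exactly the one noted above: real components interact, so a faithful composite twin must model those interactions, not just aggregate independent statuses.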
Prof. Noel Crespi holds Master's degrees from the Universities of Orsay (Paris 11) and Kent (UK), a diplôme d'ingénieur from Telecom Paris, and a Ph.D. and a Habilitation from Sorbonne University. From 1993 he worked at CLIP and Bouygues Telecom, and from 1995 at Orange Labs. In 1999, he joined Nortel Networks as telephony program manager, architecting core network products for the EMEA region. He joined Institut Mines-Telecom, Telecom SudParis in 2002 and is currently Professor and Program Director at Institut Polytechnique de Paris, leading the Data Intelligence and Communication Engineering Lab. He coordinates the standardization activities of Institut Mines-Telecom at ITU-T and ETSI. He is also an adjunct professor at KAIST (South Korea), a guest researcher at the University of Goettingen (Germany) and an affiliate professor at Concordia University (Canada). His current research interests are in Softwarisation, Artificial Intelligence and the Internet of Things. http://noelcrespi.wp.tem-tsp.eu/.
Adam Drobot is an experienced technologist and manager. His activities include strategic consulting, start-ups, and industry associations. He is the Chairman of the Board of OpenTechWorks, Inc. and serves on the boards of multiple companies and non-profit organizations. These include Avlino Inc., Stealth Software Technologies Inc., Advanced Green Computing Machines Ltd., Fames USA, and the University of Texas Department of Physics Advisory Council. In the past he was the Managing Director and CTO of 2M Companies, the President of Applied Technology Solutions, and the CTO of Telcordia Technologies (Bellcore). Prior to that, he managed the Advanced Technology Group at Science Applications International (SAIC/Leidos) and was the SAIC Senior Vice President for Science and Technology. Adam is a member of the FCC Technological Advisory Council, where he recently co-chaired the Working Group on Artificial Intelligence. In the past he was on the Board of the Telecommunications Industry Association (TIA), where he chaired the Technology Committee; the Association for Telecommunications Industry Solutions (ATIS), the US Department of Transportation Intelligent Transportation Systems
Program Advisory Committee, and the University of Michigan Transportation Research Institute (UMTRI) External Advisory Board. He has served in multiple capacities within IEEE, including as Chair of the IEEE Employee Benefits and Compensation Committee and as a member of the IEEE Awards Board and the IEEE Industry Engagement Committee. In 2017 and 2018 he chaired the IEEE Internet of Things Initiative Activities Board, and he has been a General Co-Chair for the IEEE World Forum on the Internet of Things since 2018. He has published over 150 journal articles and holds 27 patents. In his professional career he was responsible for the development of several major multi-disciplinary scientific modeling codes and also specialized in developing tools and techniques for the design, management, and operation of complex scientific facilities, discrete manufacturing systems, and large-scale industrial platforms, for both government and industry. His degrees include a BA in Engineering Physics from Cornell University and a Ph.D. in Plasma Physics from the University of Texas at Austin.

Roberto Minerva, associate professor at Telecom SudParis, Institut Polytechnique de Paris, holds a Ph.D. in Computer Science and Telecommunications and a Master's degree in Computer Science. He currently conducts research in the areas of edge computing, Digital Twins, the Internet of Things and Artificial Intelligence applications. During 2014-16, he was the Chairman of the IEEE IoT Initiative. Roberto spent several years at TIMLab as research manager for Advanced Software Architectures. He is the author of several papers published in international journals, conference proceedings, and books.
The Business of Digital Twins

Larry Schmitt and David Copps
Abstract Digital Twin adoption has reached an inflection point where growth is now exponential. Digital Twins will affect every individual and every enterprise, in ways that are predictable and in ways that are unexpected. For an enterprise, their existence and use will affect all stakeholders: changing customer experiences, disrupting established business models, and transforming an enterprise's operations, sales & marketing, R&D and innovation, strategy, governance, and everything else that affects an enterprise's success. Consider the following:

– The Digital Twin supply chain is rapidly expanding and evolving. Digital Twins are becoming much easier to create, use and integrate, with startups playing a major role.
– The combination of Digital Twins and artificial intelligence is creating complex and dynamic twins that 'understand' the world in ways that humans alone cannot comprehend.
– Intelligent Digital Twins will self-modify and evolve independently of human intervention. The outcomes will be unexpected and significant.
– These intelligent Digital Twins will disrupt and transform every existing business model and make new ones possible that create new types of value.
– A digital-first future could eventually emerge in which virtual Digital Twin and Metaverse assets are the primary source of value, and the physical/real world is secondary.

This chapter discusses how Digital Twins, and the Digital Twin ecosystem, change what we know about the creation of value. It discusses the changes that Digital Twins cause in value propositions, business models, artifacts and experiences, and how enterprises and individuals influence each other. It covers how Digital
Twins can transform markets and marketplaces, channels and supply chains, offerings, and operations. More importantly, this chapter discusses a future in which the relationship between the digital and physical worlds will fundamentally alter the value that individuals and enterprises create and consume.

Keywords Artificial Intelligence (AI) · Business model · Digital twin · Disruption · Ecosystem · Innovation · Internet of Things (IoT) · Jobs-To-Be-Done (JTBD) · Metaverse · S-curve adoption · Supply chain · Value proposition
1 The Future Influence of Digital Twins

In April 2019, a fire destroyed a substantial portion of the Notre Dame Cathedral. It was devastating. Thousands of people saw the structure go up in flames. There was universal concern that the entire structure would collapse and that this historical and cultural landmark would be lost forever. Luckily, several years before, there had been a significant effort to scan the entire structure exactly as it existed in the real world to create a Digital Twin [10]. This has been invaluable in the restoration efforts. No longer are dubious old, incomplete, and inaccurate records and blueprints needed. The Digital Twin is being used to rebuild a cathedral that is not only precise in its fidelity to the 'original' cathedral, but will also be 'better' (Fig. 1).

In the age of the Digital Twin, engineers and architects were able to consult a digital model of the French cathedral (Notre Dame) — one far more detailed and interactive than any blueprint — which allowed them to stay true to the original structure while also incorporating new innovations in design and material.
Fig. 1 The digital and physical Notre Dame [37]
Notre Dame has become a new type of asset: the physical, real-world cathedral united with its Digital Twin. The restored Notre Dame uses new materials, components, and construction methods to improve the structure. It will have integrated sensors for security and safety to ensure, in part, that such a fire never happens again. There are many things that the enterprise that owns Notre Dame will now be able to do with this enhanced asset. Virtual tours will allow people to go places in the cathedral where previously they could never go. Multiple groups can simultaneously rent the cathedral for virtual events. Many new experiences will let people interact with the Digital Twin. It is, in fact, the most faithful representation of the cathedral that exists. The physical Notre Dame will still be an attraction in the real world, but it will now have a replica in the virtual world, with more visitors who will use it in ways that were never possible with the physical structure.

Notre Dame illustrates how Digital Twins expand beyond their traditional uses in manufacturing and industrial applications. It also illustrates a key aspect of Digital Twins that is explored more fully in this and other chapters of this book. Creators will intimately connect real-world entities and the corresponding virtual-world entities that mirror them via 'digital threads' [46]. These threads convey data and information in real time between the real and virtual entities. The Digital Twins become 'alive' as they react to and influence their real-world counterparts. In the case of the Notre Dame Cathedral, the volume and frequency of this interaction will be relatively low when compared to more dynamic and complex entities. In many cases, the interactions will occur at volumes and speeds that are beyond human capabilities, as required for the system to function.

Today's Digital Twin is as likely to correspond to a complex socio-economic system as to a distinct physical object. Many Digital Twins of physical systems like jet engines and manufacturing operations exist. Now people are building Digital Twins of a gambling floor in a Las Vegas casino, of a major logistics hub in Texas and even of an entire city (see chapter "Digital Twin and Cultural Heritage – The Future of Society Built on History and Art"). These are complex systems composed not only of physical entities but also of complicated processes, human and artificial agents, and natural and virtual entities and events.

This rapid evolution and growth of the Digital Twin ecosystem gives this chapter its impetus. Digital Twin innovation isn't just about new products and services or enhancing an enterprise's operational efficiency. It is also about new business models, new experiences, and new ways of interacting and living in the real and virtual worlds. Digital Twins have as much, if not more, opportunity to enhance and alter non-tangible artifacts such as business models and customer experiences as they do to change an enterprise's products, services and operations. The value that enterprises derive from Digital Twins increases as the twins' capabilities expand. In addition to an enterprise's own products, services and operations, Digital Twins now encompass entire ecosystems that an enterprise operates in. These expanded ecosystem Digital Twins include the enterprise's suppliers, channels, and customers, numerous complementary and competitive enterprises, and even the effects of society, governments, and the natural world.
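In very simplified terms, a digital thread can be thought of as a bidirectional message flow that keeps the twin's state aligned with its physical counterpart and carries decisions back the other way. The Python outline below is only a sketch of that idea; the queue names, field names and threshold are assumptions made for illustration, not part of any product API.

import queue

sensor_feed = queue.Queue()   # measurements arriving from the physical asset
command_feed = queue.Queue()  # instructions flowing back to the physical asset

twin_state = {"temperature_c": None}

def process_one_message():
    reading = sensor_feed.get()                # real -> virtual: update the twin
    twin_state["temperature_c"] = reading
    if reading is not None and reading > 90.0:
        command_feed.put("reduce_load")        # virtual -> real: react to the state

sensor_feed.put(95.0)
process_one_message()
print(twin_state, command_feed.get())  # {'temperature_c': 95.0} reduce_load

In a production system the two queues would be replaced by whatever messaging layer the platform provides, and the reaction logic would sit behind analytics or AI rather than a single threshold.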
1.1 A Focus on the Future of the Enterprise

This chapter is about the business of Digital Twins. The focus is on what enterprises, and the people that run them, supply them, and buy from them, want. It is about the possibilities that Digital Twins provide for giving people and enterprises what they want, and about how they address needs and desires, both human and machine. Most importantly, this chapter is about the 'jobs' Digital Twins can do, their value propositions and the business models that are made possible because of them. Fundamentally, it is about the ways Digital Twins create value.

There are serious technical issues to be considered when implementing and deploying Digital Twins. Issues such as security, privacy, reliability, interoperability, standardization, deployment, and architecture, among others, are critical and will require sophisticated technology-enabled solutions. Other chapters of this book discuss these issues. But technology is the means to an end, not the end itself. Economic returns come when technology allows people, and increasingly machines, to create something that people, and the enterprises they are part of, want, need, or desire. Therefore, this chapter does not focus on Digital Twin technology and the issues of implementation. These are solvable problems that are necessary, but not sufficient, for the full value of Digital Twins to be realized. This chapter is about the business of Digital Twins, and it assumes a well-functioning Digital Twin ecosystem with various levels of sophistication, complexity, and universality/locality. It focuses on what Digital Twins can and need to do (their effects), rather than on their architecture or how they work.

Key points discussed in this chapter are:

– Digital Twin adoption has reached an inflection point where growth is now exponential.
– The Digital Twin supply chain will evolve much as other physical and digital supply chains have evolved. This will change the ways twins are built and used.
– Increasing Digital Twin complexity and dynamism will propel twins beyond what humans can comprehend, and this is where the most value is created.
– Digital Twins extend and disrupt current business models, creating new types of value.
– Digital Twins, and their real-world counterparts, will eventually become self-modifying and create a flywheel of continuous evolution.
– The future will be a digital-first world in which the twin will be the source of value and the physical world will approximate the ideal embodied in the twin.

This chapter is forward-looking and describes what the future could plausibly look like. This involves informed speculation based on observable trends and existing implementations. Every one of the future stories is based on technologies, experiments, and nascent offerings that exist today in one form or another. Some stories and scenarios presented below may never happen, at least not the way they are portrayed. But they are still useful in 'widening the lens' of our vision of the future
to see what may be possible. What is happening today is indeed reminiscent of William Gibson’s famous observation that “The future is already here; it’s just not very evenly distributed”.
1.2 At an Inflection Point

The concept of a 'twin' has been around since the first flight simulator was conceived and patented by Edwin Link in the 1930s. His concept was to give pilots the experience of an actual aircraft using as exact a replica as was possible. NASA continued to develop the concept of a 'twin' in the 1960s with the Apollo spacecraft. With the growth of computing, twins naturally migrated to software implementations and then into hybrid real-world, virtual entities. Chapter "Digital Twin Architecture – An Introduction" covers the historical evolution of Digital Twins and the technologies, companies and institutions that have advanced the state of the art over the years.

The growth and adoption of Digital Twins is following the S-shaped diffusion curve described by Rogers in his seminal work "Diffusion of Innovations" [32]. All new, significant technological introductions follow this path. As others have noted [24], these S-curves are happening faster and faster [31]. Digital Twin technology is no exception (Fig. 2). Every current indicator (see the Size of the Prize section below) points to the fact that Digital Twin adoption is now out of its incubation phase. Growth has passed an inflection point and is becoming exponential.
Fig. 2 Digital Twins are following the S-curve of technology adoption/diffusion and, in the early 2020s, are at an inflection point
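The S-shaped adoption curve in Fig. 2 is conventionally modeled with a logistic function, S(t) = L / (1 + e^(-k(t - t0))), where t0 is the inflection point. The short Python sketch below uses illustrative parameter values (they are assumptions, not data from this chapter) simply to show how adoption is negligible well before the inflection point, about half of its ceiling at the inflection point, and saturating after it.

import math

def logistic_adoption(t, ceiling=1.0, growth_rate=0.9, inflection_year=2022):
    """Cumulative adoption at time t under a simple logistic diffusion model."""
    return ceiling / (1.0 + math.exp(-growth_rate * (t - inflection_year)))

for year in (2016, 2020, 2022, 2024, 2028):
    print(year, round(logistic_adoption(year), 3))
# small before the inflection year, 0.5 at it, approaching the ceiling after it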
One of the most powerful signals of an exponential growth phase is the number of startups working in the Digital Twin ecosystem. The Galaxy data analytic tool [43] from Growth Science identified commercial companies who are developing, supplying, or using Digital Twins and related technologies and products. A search of public and private company records shows that, in the early 2020s, over 10,000 companies, from startups to multinationals, were active in the Digital Twin space.

One reason for this startup activity is that Digital Twins are moving beyond their origins in industrial manufacturing for physical products. They are now modeling more complex systems that humans and enterprises operate in and experience (as other chapters in this book will discuss). Complex systems comprising physical and process artifacts, agents and organizations, and the natural world can now be accurately and dynamically represented digitally. This is a testament to the rapid and ongoing pace of technology development.

Creating an accurate and precise Digital Twin has never been easier. New technologies and tools, as well as a maturing Digital Twin infrastructure and supply chain, are making it easier and faster both to engineer a twin during the design process and to capture real-world artifacts that already exist. Companies such as Matterport [23] illustrate how this is happening. Using their products and services, anyone can map a physical space and create an accurate digital representation of it in a matter of minutes. This capability is migrating to cell phones that have not only cameras but lidar sensors as well [47]. With this technology, mobile devices carried by billions of people and intelligent software can scan every physical environment and create a live digital representation. Everyone with a phone, anywhere, will soon be able to create a 'twin' of the space they are in and contribute to the Digital Twin of the world. When Matterport went public in the summer of 2021, they had mapped over 5 million spaces, less than 1% of the total number of spaces that they claim can be digitized. This is most likely an underestimate.

Another company creating Digital Twins of the world is Worlds [30]. The company's software platform combines computer vision, IoT sensors and Digital Twins to create live, 4D (x, y, and z + time) simulations of real-world places. Organizations use these live twins to measure, analyze and build automation into critical real-world processes. At Worlds, Digital Twins form a nexus where cameras, sensors, transponders, people, processes, and geometries combine to create unified sensing and measurement of any environment.
1.3 The Maturation of Digital Twin Infrastructure and Supply Chains

As Digital Twin technology improves, the Digital Twin value network will also transform to accommodate new technologies, new applications, and new business models. The late Clayton Christensen defined this reshuffling, consolidation, and
modularization in his 2003 book [8] as the 'Law of Conservation of Attractive Profits'. This 'law' describes the transformation of the highly specialized, hand-built solutions that characterize the initial stages of a new technology. What emerges over time is a sophisticated, modularized supply chain with base-level technology, multiple intermediaries, application providers and service companies, each with its own place in the ecosystem. In a mature supply chain, a few large, keystone enterprises supply full-service, full-stack infrastructure and ecosystem solutions. Numerous other companies support the infrastructure and create application-specific solutions and adjacent capabilities. This is what the modern cloud and AI infrastructure ecosystems have evolved into, with companies like AWS, Microsoft, and Google.

The Digital Twin supply chain is transforming along this same path laid out by Christensen, and this is what is causing the Digital Twin adoption inflection point. Seemingly overnight, Digital Twins are easier to build and operate than ever. No longer are they the bespoke creations that only governments and large enterprises can afford. Now, a Digital Twin is within the reach of almost anyone with a computer and a camera. This blossoming of Digital Twins appears to have happened overnight but has in fact been preceded by decades of development across a wide range of technologies and applications. Specifically, the following technologies had reached a level of sophistication such that the Digital Twin supply chain could reorganize and the adoption inflection point could occur:

– Sensors and effectors to detect and change precise temporal and spatial states
– IoT to inform and influence how the physical world is functioning
– AI (machine learning, predictive analytics, natural language, etc.) to make the digital 'smart'
– Cloud storage and data structuring to aggregate and organize diverse forms of data
– Communication that covers the globe with high-speed transmission
– AR/VR to view and interact with digital artifacts
– Vision, the universal sensor, and reality capture to render the physical world
– 3D/additive manufacturing to directly create physical manifestations of digital concepts
– Automation and robots to interact with the physical world
– Game technology for rendering, physics engines, and massive multi-agent environments
– Blockchain and NFTs to authenticate ownership of digital assets

The advancement of these and other related technologies has driven the emergence of the Digital Twin infrastructure business. Companies like Autodesk, Unity, NetApp, and many others provide infrastructure and tools that dramatically lower the effort and cost of creating and using a Digital Twin. Other companies, like Matterport and Worlds mentioned above, add capabilities to the infrastructure to broaden its usage in specific areas and create complete, functioning Digital Twins for
a variety of applications. Even newer companies are implementing complementary support functions such as IoT-blockchain integration in the form of Oracles [7].

This maturation of the Digital Twin supply chain is as exciting as it is inevitable. Keystone companies will supply the infrastructure upon which Digital Twins will be built and operated. They will be complemented by providers of tools, middleware and supporting capabilities, and by many others that specialize in various applications, industries, and ecosystems. In the early 2020s, the transformation of this supply chain is still in flux, with many new developments and participants yet to find their place. But this is how Digital Twins will be able to expand exponentially into the world and become ubiquitous. New applications will not only be purposely researched and built according to known customer needs but, as in the case of the Las Vegas casino described below, will also be stumbled upon. New and existing enterprises will think of new ways Digital Twins can be applied to things that we cannot yet imagine. This will only be possible if Digital Twins are quick and easy to build, so that users can experiment and iterate on their deployment. These experiments will shape the Digital Twin ecosystem, how Digital Twin technology is applied, and the new value and business models it can create. They are the driving force behind how people and businesses will view and interact with the world in the future.
1.4 Enabling 'Magnificent Powers of Perception'

As Digital Twins become more perceptive, more intelligent, more affordable, and more 'alive', they will change the way people and enterprises see, experience, and interact with their physical and virtual worlds. Digital Twins make it possible to see things that were never previously visible and to interact with things that could never be interacted with. These are the new 'magnificent powers of perception' that change how we experience our world and how enterprises do business. Digital Twins will ultimately enhance the world itself and provide an interface to a new reality that creates value for both people and enterprises, for both human and artificial agents (Fig. 3).

Consider a Digital Twin of a complex operation like an airport. A person observing the scene could be a trained expert, but they would only see and comprehend a fraction of the full reality. With a live Digital Twin, not only does a person see what is going on, but an AI can 'see' it as well. A person sees a specific airplane and perhaps knows its identity, where it came from and where it is going. The Digital Twin knows when the plane was last serviced and for what, its complete flight history, who is on the plane, the status of every aircraft system, and so on. A person sees a driver and a truck. The twin can tell exactly what is on the truck, whether the person and truck are where they should be, and what they should be doing. Moreover, the twin can intervene, sending data, information, and instructions back to the physical entities. People are limited by their biology, but when human understanding is combined with Digital Twins, it enables powers of perception at a previously unimaginable scale.
Fig. 3 Digital Twins deliver ‘magnificent powers of perception’ [49]
The technology driving Digital Twin development is revealing new possibilities, some of which people and organizations do not realize they want until they see them. What will be wanted in the future is the breeding ground for innovation. Many new startups are being formed to explore some of the less obvious, but perhaps more valuable, aspects of Digital Twins. The startup company Worlds is working with NRT Technology, a provider of guest services technology for integrated casino resort operators, to create a live Digital Twin of a blackjack table. As gameplay is captured with cameras, it is re-expressed inside a live Digital Twin where chips are recognized and counted, and bets are valued and verified. The software has potential future use as a virtual pit boss, monitoring critical chip exchanges and assuring that "high rollers" are recognized and receive the attention they deserve from the casino. This software will continually explore how a Digital Twin of a game of chance can be used to create significant ongoing value for casinos and radically improve the gameplay experience of their customers. The use of Digital Twins in the context of table games was 'stumbled upon' through a chance encounter between Worlds executives and the team at NRT. It was not extensively researched by business development or an advanced marketing team, but it has become a fascinating partnership to explore and create something that has never existed before.
1.5 A Journey in a Digital Twin Future

Digital Twins can enhance and transform what a person and an enterprise can do and experience. To understand how this happens, it is useful to imagine a future world where new ways to create value exist. These imagined futures reveal new desires, new technologies, new business models, and new ecosystems. Until these
futures happen, we can get a glimpse of what may be possible by using foresight to imagine plausible scenarios that can inform where we are headed. The customer journey [14] is a method that many companies employ to "understand the series of connected experiences that customers desire and need". What follows is an abbreviated adaptation of this methodology applied to how users experience a Digital Twin as they are building it, using it, and evolving it. It is useful for thinking about how future enterprises and people will behave when engaging with the complex systems that Digital Twins will represent. In this case, the journey described includes both the individual perspective and the perspective of the different enterprises throughout the value chain. This includes service providers and manufacturers, enterprises that are consumer-focused and those farther down in the supply chain. It also covers the creation of a new entity – in this case a Ground Transportation Control System (GTCS). Although many of the elements and activities described below exist, the following journey is fictional. It is an amalgam of what different companies are thinking of and experimenting with. Any resemblance to a specific company, person or actual event is purely coincidental.

A Digital Twin Based Ground Transportation Control System (GTCS)

It is 2030, and the logistics and ground transportation ecosystem is about to undergo a radical transformation. As a senior executive at an enterprise with a major stake in this ecosystem, you can see the writing on the wall. Innovative technologies are transforming production, retail, warehousing, trucking, rail, and last-mile delivery in ways that are disrupting the industry. You see an opportunity to build a radically new 'air traffic control system' for ground transport in North America, a Ground Transportation Control System (GTCS). The goal is to improve the efficiency of the entire system by 15% or more. This will require improved coordination, safety and interoperability among the hundreds of enterprises involved, and preparation for a world of autonomy in which vehicle drivers and warehouse workers will no longer be human. The various parties involved in this effort come together in a coalition in which the real and virtual collaboration of hundreds of independent entities and thousands of activities is planned and coordinated. An intelligent Digital Twin of the entire ecosystem plays a critical role in meeting the stated goals; creating the twin is itself one of the goals. Designing, building, and operating this Digital Twin ecosystem will be critical to its success (Fig. 4). Here is the journey.

1. Form a collaborative, open coalition – Building a comprehensive GTCS involves hundreds of diverse stakeholders with different perspectives, motivations, and incentives. One way to have all voices heard and all stakeholders contributing is to put an AI-enabled Digital Twin (the GTCS twin) at the center of the effort. A (drastically) simplified diagram (Fig. 4) depicts both physical and digital asset holders whose agreed-upon participation, contribution, and ownership can be verified using digital contracts built on blockchain technology.
Fig. 4 Distributed coalition to create and run a new Ground Transportation Control System (GTCS)
The GTCS Twin is a collaborative human-AI entity with the AI doing the heavy lifting. All stakeholders, from FedEx-size behemoths to mom & pop delivery services, can interact virtually and make contributions to the GTCS Twin. It aggregates information from thousands of diverse sources to analyze alternatives and generate design options. It participates in every aspect of the design experience and can discern and harmonize the often-conflicting functionality the various members of the coalition want. As the GTCS Twin supervises, the coalition together builds a comprehensive, live Digital Twin. The parties get together virtually, with each person donning AR/VR glasses that let them see and interact with each other and with the GTCS twin in its current state.

2. Capture current state and iterate the design – The first task is to make sure the GTCS twin models what currently exists. Using third-party data aggregators and various scanning devices and platforms, the GTCS twin directs the scanning and digitization of existing infrastructure, physical assets, and operations. It creates twins of the various artifacts that currently exist. Virtual 'fly-over' and 'fly-into' views give a full 3D, 360-degree look at every aspect of a piece of land, a facility, or an operation, in much greater detail than would be possible with a real-world visit. This allows the coalition members to visualize where the problems are and how to transform the existing system. Anyone can, at any time, virtually occupy, operate, and modify any aspect of the GTCS twin to experience or predict what it might be like to deal with potential situations. Access to multiple marketplaces of virtual and physical artifacts provides ready-made solutions to meet many of the design objectives. The ability to visually build, tear down, modify and run new types of systems in rapid succession gives designers unprecedented freedom in exploring possibilities that would never
previously have been considered. Each virtual interaction a coalition member makes is noted and processed to add detail and intelligence to the Digital Twin being constructed. The GTCS Twin starts becoming smarter as it is being built. It rapidly becomes a learning system that can understand and predict the unmet needs and desires of the multiple parties involved. It starts making suggestions and informs the coalition of alternatives. The GTCS twin aggregates real-time and historical data from many sources. As the design is created, it is constantly analyzed to accurately estimate the total cost of ownership and a running score of efficiency and sustainability. As the GTCS twin grows to be more comprehensive and intelligent, it can explore different business models. The Multi-sided Market, Aggregation, and Twin-as-a-Service (TaaS) business models, described below, are all feasible alternatives the twin supports.

3. Build to the Twin – As the GTCS twin is being built, its real-world manifestation is being realized. Using an enhanced agile process, minimally viable products are tested, prototypes are tried out, pilot projects pressure-test new designs, and new releases of both physical artifacts and operational processes are introduced as they are ready. Behind this are the coalition members: an entire network of enterprises, old and new, that build, operate and maintain the constantly evolving physical/digital ecosystem the GTCS twin is directing. The design process is never complete; it merely ebbs and flows, with periods of relative stasis punctuated by periods of rapid change. The GTCS twin is the standard to which the real-world implementation aspires. As design decisions are made based on the evolving GTCS twin, construction and manufacturing enterprises build the real-world counterpart. The twin 'supervises' the manufacturing process, to which it provides necessary information and services. For physical infrastructure, automated production translates the digital to the physical. Automated construction vehicles prepare sites. Machines build structures using large-scale 3D printing methods and prefabricated modules with integrated heating & cooling, lighting, power, security, energy storage, wireless communication, displays and more. Advanced facilities manufacture these using automated processes that have their own Digital Twins to optimize operations. They are assembled on-site and finished and furnished to match the physical to the GTCS twin as closely as possible. The suppliers of the equipment and services themselves have their own Digital Twins and employ business models that Digital Twins enable. Virtual inspections are completed, and digital occupancy permits are issued. All paperwork is completed through electronic documentation. The GTCS Digital Twin interacts with the various local and municipal entities that have jurisdiction.

4. A flywheel of continuous evolution – The GTCS twin evolves as it incorporates existing infrastructure that is built upon and enhanced. All this happens as the ever-evolving GTCS twin supervises everything happening. As the real-world
GTCS operates, it is affected by the people who are physically and virtually present to do things that are not automated. Natural phenomena and unanticipated situations also affect it. The GTCS twin constantly learns how things are really working and adapts its design and physical reality to become more congruent with reality. Simultaneously, it is also shaping reality to be more congruent with it. The GTCS twin provides services to the coalition partners and suppliers that use and improve the GTCS over the years. It is involved in maintenance and upgrades, security, automation, education, and virtually anything having to do with how the GTCS operates in the real world.

As the GTCS operates, it also evolves. The GTCS twin is continuously updating itself based on direct feedback from the daily operation of the real-world system. Lines between real-world artifacts and digital representations blur. The twin comes up with non-obvious types and configurations of new artifacts and agents that result in the constant rejuvenation of the real-world GTCS. Constant upgrading and replacement of equipment and processes, and the maintenance and replacement of parts of the structure itself, become a continuous process. The GTCS is constantly adapting its structure and function. New types of businesses operating with new business models emerge to service this constantly evolving organism.

The GTCS twin at the center of it all serves multiple purposes. It is, first, a multi-sided platform bringing together various buyers and sellers and providing value to both. In addition, it acts as an aggregator, bringing in information from multiple, disparate sources and processing and shaping that information to supply access and intelligence. Third, the twin itself is a service to the various constituencies, providing answers to questions, information about specifics, and virtual experiences that individuals and enterprises can interact with. These three models of value creation – multi-sided platform, aggregation, and twin-as-a-service – are business models that are radically transformed by having a Digital Twin involved (Fig. 5).

Perhaps most significantly, in this scenario, the GTCS twin serves as a source of innovation. As it evolves and becomes more encompassing and complex, it reveals new possibilities that were not previously visible. For example, new physical configurations, infrastructure, and business models will be possible when trucks and warehouses no longer have humans as agents. What will these look like? The twin can model these 'what-if' scenarios and, with a human innovator (or perhaps by itself), come up with possibilities that were unimaginable before the twin existed.

This may seem farfetched, but all the technologies and business models necessary to make this journey real exist in the early 2020s, even if several are in a nascent state. Companies are already creating and exploring these types of long-term scenarios. These allow them to 'see around corners' and 'widen the lens' to perceive what the future could bring and how they should adapt their long-term strategy. This is necessary if they are to be the disruptor rather than the disrupted.
Fig. 5 A flywheel of real-world – virtual evolution
2 A Framework for the Business Side of Digital Twins

What value does a Digital Twin create that would cause an enterprise to make significant investments in its creation, operation, maintenance, and enhancement? What attributes or characteristics of a Digital Twin affect the value it can create? Can value be calculated and measured? How is value delivered? Answers to these and other questions determine how a Digital Twin affects future business.

Enterprises engage in business to create value for their customers, be they individuals or other enterprises. There are different modes of value creation, but the ones of most interest to commercial enterprises are those that create monetary value that allows the enterprise to grow and thrive. Recently, other types of enterprise value have emerged and taken on increasing importance (e.g., Environmental, Social, and Governance (ESG) [18]). A four-part framework can assess how a Digital Twin creates value, and how that value is distributed:

1. Jobs-to-be-done (JTBD) – The outcomes and experiences a user of the Digital Twin wants the twin to deliver. These jobs determine how much impact the Digital Twin will have on the world, an enterprise, and its customers.
2. Design – A Digital Twin's design determines how well the twin does the jobs it is hired to do. How effective and extensive is the twin's influence? How well does a twin satisfy its desired outcomes and experiences?
3. Business Model – The means whereby the twin's jobs are done and value is delivered. An enterprise's business model both affects, and is affected by, the Digital Twins they use.
4. Adoption – How widely is the Digital Twin deployed and what influence does it have on the ecosystem(s) it is a part of?
2.1 A Digital Twin's Jobs to Be Done

In the GTCS twin journey presented earlier, the Digital Twin does jobs that the coalition partners want to get done. These include, but are not limited to:

– Enforce blockchain contracts agreed by the participating parties
– Apportion work, and assess contributions, value creation and compensation
– Ensure consistency of the twin and coordinate actions to maintain that consistency
– Facilitate collaborative twin development among disparate parties
– Allow individuals and groups to directly experience the twin (using AR/VR)
– Provide access to digital artifacts (e.g., BIM objects) to foster design imagination
– Support real-world modular manufacturing, on-site assembly, furnishing and provisioning
– Present a comprehensive view of all aspects of the entire GTCS experience
– And many other jobs…

Jobs-to-be-Done (JTBD) is a formal, and widely used, mechanism for describing and analyzing value creation. It was first introduced by Theodore Levitt, a Harvard Business School professor, famous for his statement "People don't want quarter-inch drills. They want quarter-inch holes." Clayton Christensen popularized and expanded the concept in his many writings on innovation [9]. In brief, JTBD is based on the notion that people (and enterprises) 'hire' a product or service that does a 'job' they want done, even if they do not consciously realize it. As Tony Ulwick, a leading practitioner and promoter of the JTBD theory and method, defines it:

A JOB-TO-BE-DONE is a statement that describes, with precision, what a group of people are trying to achieve or accomplish in a given situation. A job-to-be-done could be a task that people are trying to accomplish, a goal or objective they are trying to achieve, a problem they are trying to resolve, something they are trying to avoid, or anything else they are trying to accomplish.
The advantage of the JTBD framework is that it specifically reveals the needs and desires of individuals and enterprises and how those needs and desires are currently being satisfied. If done right, creating a list of jobs to be done reveals jobs that the individuals and enterprises didn't even realize they wanted done (their tacit, unmet needs). Jobs can be defined without ever having to delve into specific technological details. From an innovation and business perspective, this separation of what is wanted from what is possible fosters creative solutions. There are always several ways to achieve a specific, desired outcome or experience (if they are indeed possible). But often what is wanted gets confounded with how to solve the problem. JTBD gets around this issue. Digital Twins are hired to do the following types of jobs.

• Efficiency Jobs – The jobs that make existing outcomes better. This is the most common type of job and involves the Digital Twin improving existing products and processes. It includes making equipment run more efficiently, automating
manual processes, predicting and preventing failure, and other ways to make things run better. The Fuel Terminal case presented later in this chapter is an example.

• Experience Jobs – The jobs that create experiences people want. These types of jobs focus on the emotional responses of people and enterprises and include things like customer satisfaction, sense of immersion, excitement, or anticipation. The example of the Notre Dame Cathedral presented above, or the Las Vegas gaming business case presented below, are examples where experiential jobs predominate.

• Innovation Jobs – The jobs that result in outcomes and experiences that did not previously exist. These types of jobs reveal gaps and opportunities in how things are being done, gaps and opportunities that lead to the invention of something new and different. In the GTCS example above, the twin could help come up with new types and configurations of artifacts and agents that were not obvious to the humans involved. These jobs have the potential to be the most revolutionary aspect of the Digital Twin in the future: the ability to supercharge innovation through continuous virtual and real-world change.

These three distinct types of jobs are not mutually exclusive, nor are they the only way of categorizing JTBD. They do, however, allow people who conceive of and design Digital Twins for specific purposes to broaden their perspective on what the Digital Twins they are creating can and should do.
2.2 The Design of Digital Twins

A Digital Twin is designed to satisfy a set of jobs, but how well it does those jobs is of critical importance. Not every Digital Twin is created equal. A Digital Twin that operates at a low level of resolution and accuracy is not as valuable as one that improves either of those dimensions. One aspect that affects the value created by a Digital Twin is its fidelity – how closely it mimics, sees, reacts to, and controls relevant state changes in the real world. This is true up to a point. Fidelity is tied to what the Digital Twin will be used for, and it must match meaningful, real-world effects. It makes no sense to have a Digital Twin at the molecular level when what matters is how fast a turbine blade is spinning. But it is also the case that the higher the fidelity a Digital Twin has to the real-world artifact it is modeling, the more states it can 'see' and 'control', and potentially the more valuable it is likely to be. The general trend is toward finer and more precise temporal and spatial resolutions, which users want and which Digital Twins will need to deliver.

Scope and fidelity are examples of relevant attributes that an assessment of a Digital Twin should account for. There are other attributes that are important as well. The two dimensions of Complexity and Dynamism can provide a framework for the relevant Digital Twin attributes.
– Complexity – A measure of the structure of a Digital Twin: its size, intricacy, and 'complicatedness'. Three factors contribute to a Digital Twin's complexity.

• Components – The number and variability of the real-world artifacts the twin represents. Does the twin represent both physical and process artifacts? Does the twin account for agents, human or otherwise? Are natural elements, physical and environmental, included? What variety of each type of component does the twin represent?

• Scope and Composition – The size of the real-world system the twin represents. Is it a compressor inside a jet engine or a complete air transportation ecosystem? Does the twin represent an atomic, real-world entity, or is it a composition of other twins? The more extensive the scope a Digital Twin embodies, the more valuable it is likely to be.

• Fidelity – How precisely and accurately the Digital Twin matches its real-world counterpart, from an approximate abstraction to a precise, real-world correspondence. Does it get limited feeds from isolated sensors, or is there a representation of the entire physical form and its function? What is the resolution of the twin – the size of the irreducible components of the model?

– Dynamism – A measure of how a Digital Twin functions as it changes state in response to the real world and how the real world changes state in response to the twin. Three factors contribute to a Digital Twin's dynamism.

• Frequency – What is the time scale of state changes, and what percentage of state changes in the real world are captured or 'seen' by the twin? How fast can the twin respond? This can range from a purely static model (in which case it is not really a twin at all) to the super-physical, where the twin can change states faster than the physical object. This latter situation could be used, for example, to run 'what-if' scenarios in real time.

• Interaction – What level of connectedness does the twin have with its real-world and other twin counterparts? What effect on these counterparts does a twin have, and how is it affected by them? Interaction can range from a twin and its real-world counterpart being isolated from any other physical or virtual entity to one that is fully integrated with many other twins and their real-world counterparts.

• Impact – To what extent can the twin affect its real-world and other twin counterparts? Is it just aware of what is happening, or can it diagnose, predict, or even prescribe real-world changes? How extensive and relevant are the effects a twin has on the real world and the other twins it interacts with?

Digital Twins can be mapped onto a rudimentary canvas constructed from the dimensions of complexity and dynamism. Figure 6 shows a preliminary version of the Digital Twin Value Canvas, upon which various representative types of Digital Twins are plotted. In this framework, the weather has an enormous amount of dynamism but little variability in the types of components it is composed of. Nevertheless, its scope and
A Las Vegas gaming floor, on the other hand, has considerable variability in the types of entities it must account for, but a more limited scope and less frequent state changes.
Fig. 6 A Digital Twin Value Canvas organized along the complexity and dynamism dimensions
Such a canvas can be a basis for comparing various Digital Twin implementations regardless of what they represent in the real world. Digital Twins toward the upper right of the canvas have the potential to be more valuable: they model larger ecosystems, with more and more varied types of components, at higher levels of relevant fidelity, and therefore offer more value to individuals, enterprises, societies, and governments. Note that there are no scales on the axes of the canvas, and the various twins are placed by subjective judgement. Even so, the canvas is useful for identifying how a specific Digital Twin implementation compares to other twin implementations. This allows the relative value-creation potential of different twins, both existing and contemplated, to be gauged. It also lets Digital Twin designers and builders determine what could be done to enhance a twin’s value. This is a preliminary and rough approximation of a set of attributes by which Digital Twins can be measured and compared; others more versed in the science and engineering of Digital Twins can build upon, refine, or even replace this framework. It is clear, however, that some form of categorization and assessment of Digital Twins is useful. More mature technologies have well-developed metrics for assessing and comparing implementations along relevant dimensions. This remains to be done for Digital Twins, where, in the early 2020s, it is difficult even to agree upon a definition (see chapter “The Digital Twin: Past, Present, and Future”).
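To make the canvas concrete, the following is a minimal, hypothetical sketch (in Python) of how a twin might be scored along the two dimensions; the 0–5 scale, the equal weighting of the three factors per axis, and the example scores are illustrative assumptions, not part of the framework as presented here.

```python
from dataclasses import dataclass

@dataclass
class TwinAssessment:
    """Subjective 0-5 scores for the six canvas factors (illustrative scale)."""
    components: float      # number/variety of represented artifacts
    scope: float           # size of the real-world system / composition of twins
    fidelity: float        # precision of correspondence with the real world
    frequency: float       # time scale and share of state changes captured
    interaction: float     # connectedness with real-world and other twins
    impact: float          # ability to diagnose, predict, prescribe changes

    def complexity(self) -> float:
        # Complexity axis: components, scope and composition, fidelity
        return (self.components + self.scope + self.fidelity) / 3

    def dynamism(self) -> float:
        # Dynamism axis: frequency, interaction, impact
        return (self.frequency + self.interaction + self.impact) / 3

    def canvas_position(self) -> tuple[float, float]:
        # (x, y) position on the Value Canvas; positions further toward the
        # upper right suggest higher value-creation potential
        return (self.complexity(), self.dynamism())

# Example: the 'weather' twin from the text - highly dynamic, few component
# types, but enormous scope and impact.
weather = TwinAssessment(components=1, scope=5, fidelity=3,
                         frequency=5, interaction=2, impact=5)
print(weather.canvas_position())   # (3.0, 4.0)
```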
Once a twin’s jobs are identified and its design parameters determined, attention can turn to how a Digital Twin affects and is affected by the business models it supports.
3 Digital Twins and Business Models

The term ‘business model’ was first used in the late 1930s. Osterwalder and Pigneur popularized the business model canvas in their 2010 book, Business Model Generation [28]. Since then, companies seeking to innovate and gain a competitive advantage for the products and services they offer have identified and developed numerous business models. A business model is the way a company realizes value from the offerings it creates; it defines the ‘rules’ the company operates by to create value. The Wikipedia definition of a business model states:
A business model describes the rationale of how an organization creates, delivers, and captures value in economic, social, cultural, or other contexts. The term business model is used for a broad range of informal and formal descriptions to represent core aspects of a business ...
It is more common for a company to adopt a known business model, perhaps with slight variations, than for a new type of business model to be created. An example is the Freemium business model, first named by Andrew Fluegelman, an editor at PC Magazine, in 1982 [5]. Thousands of companies have since adopted and modified the Freemium model. Many diverse types of business models have been identified and described over the years; there are lists of dozens of different types of business models in use by companies today [4, 12, 29, 35]. Compiling, merging, and harmonizing these various lists results in over 70 distinct types of identified business models. Although digital aspects are part of many of these business models, a Digital Twin adds new capabilities that have profound effects on all 70+ business models in use today. In addition, Digital Twins will make it possible to invent new types of business models that would otherwise be impossible. The effort to build 70+ Digital Twin-enhanced business models will occupy enterprises for decades to come. Every one of the 70+ identified business models has a version in which a Digital Twin plays a significant role. For example, the Fractionalization business model (one real-world artifact has multiple owners/users) could have a fractionally owned Digital Twin as its basis. In another example, the Razor and Blade business model has a Digital Twin-enhanced version in which the twin plays the part of either the ‘razor’ or the ‘blade’, mediating both real-world and virtual continuing revenue streams. Three business models have been chosen (Fig. 7) to illustrate how they can be transformed by a Digital Twin.
Fig. 7 Three business models chosen to show the effects of a Digital Twin
3.1 Multi-sided Platform/Market Business Model

A model that connects multiple parties to accomplish a task to the mutual benefit of all parties. Multiple suppliers and customers are brought together to complete a transaction or an ongoing relationship, and the platform takes a commission on the value transferred between the parties. An example is Uber Eats, which connects restaurants, delivery drivers, and home diners.
In this model, a Digital Twin can be a participant or a market maker. The twin can use other multi-sided platforms to accomplish the jobs it needs to do, either as buyer or seller. Alternatively, an intelligent twin can run a multi-sided platform (i.e., be the market maker) and be the entity that connects buyers and sellers and takes a commission. Digital Twins are both assets themselves and participants, through their real-world counterparts, in the exchange of assets that range from commodities to bespoke experiences. Like all other types of assets, digital ones are increasingly being bought, sold, leased, and otherwise accessed through multi-sided platforms that offer services to multiple parties. In the GTCS example above, the architects, engineers, and builders of the GTCS twin use multi-sided markets to access the various digital objects that go into their creation. For example, the BIMObjects platform [20] has digital objects that represent structures, components, and subsystems of physical infrastructure. Designers and engineers choose these to be integrated into the design of the twin and ultimately the real-world artifact. With the advance of AI and blockchain technologies, the world of digital objects is expanding to include a wide variety of services and experiences. Items such as digital security and operational and automation AIs are also assets that can be purchased or leased through platforms that connect sources and users.
Not only will Digital Twins use multi-sided platforms to access things they need, but they will also operate their own multi-sided platforms to connect the constituencies that use the twin. The Digital Twin can be a platform itself. The evolution of these multi-sided platforms run by Digital Twins provides makers and users of digital assets, including the Digital Twins themselves, increasingly varied options for connecting. In the GTCS example, the GTCS twin can run a platform connecting autonomous carriers with autonomous warehouses, or it could allow a retail customer to connect directly with a last-mile drone service. The users of the twin will be able to imagine any number of new services not initially thought of by the designers. The development of multi-sided platforms that accommodate digital assets is well underway. Platforms like BIMObjects, mentioned above, allow designers, owners, and builders to connect. Other, newer platforms like OpenSea [26] connect artists, entertainers, athletes, and other creators with collectors and speculators. The proliferation and scope of digital asset platforms and markets will continue to accelerate, and Digital Twins will play a key role.
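As one way to picture the market-maker role just described, here is a minimal, hypothetical sketch of a twin-operated platform that matches two sides (for example, carriers and warehouses in the GTCS scenario) and takes a commission; the class names, the matching policy, and the 10% rate are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    party: str        # e.g. "warehouse-17"
    capacity: int     # pallets available for storage

@dataclass
class Request:
    party: str        # e.g. "carrier-03"
    pallets: int      # pallets needing storage

class TwinMarketplace:
    """A Digital Twin acting as market maker between two sides."""
    def __init__(self, commission_rate: float = 0.10, price_per_pallet: float = 25.0):
        self.commission_rate = commission_rate
        self.price_per_pallet = price_per_pallet

    def match(self, request: Request, offers: list[Offer]):
        # Pick the first offer that can satisfy the request (illustrative policy).
        for offer in offers:
            if offer.capacity >= request.pallets:
                gross = request.pallets * self.price_per_pallet
                commission = gross * self.commission_rate
                return {
                    "buyer": request.party,
                    "seller": offer.party,
                    "gross": gross,
                    "platform_commission": commission,
                    "seller_receives": gross - commission,
                }
        return None

market = TwinMarketplace()
deal = market.match(Request("carrier-03", pallets=12),
                    [Offer("warehouse-17", capacity=40)])
print(deal)
```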
3.2 Aggregator Business Model

A model that collects substantial amounts of data, information, and knowledge from disparate sources, packages it, and sells access to it under its own brand. The information or other assets collected are usually reformatted, analyzed, and massaged in ways that make them easier for customers to use. An example is Zillow, which uses the aggregation business model to create its own brand value.
When an enterprise collects diverse data and information from multiple sources and makes the collection available to others to use (for a fee), it is using an aggregator business model. The value derives from the act of collection and organization; the aggregator does not need to be the original source of the data. An aggregator can be as mundane as A-1 Rental or as sophisticated as Zillow. The value of aggregation is that it removes the overhead of finding, accessing, and interpreting multiple information sources. In the GTCS twin journey presented above, there are instances of the coalition partners and suppliers making use of an aggregation service. The GTCS twin aggregates information about lots and land from many governmental and real estate sources. It aggregates information from various designers and manufacturers about design options, materials, equipment, furnishings, and so on. A buyer can compare and select from among alternatives using a common aggregation platform that provides the information needed to make informed decisions. This form of information aggregation is already widely available. A Digital Twin can exploit the aggregation business model in other ways. In a diverse ecosystem such as the GTCS, the twin itself can be an aggregator. In this role, it gathers information from a wide and diverse set of sources, then analyzes, packages, and provides it to users for a fee. All the data threads within the Digital Twin
that are created in real time by the various IoT sensors and effectors can be woven into a cohesive ‘fabric.’ Aggregation can take place both vertically and horizontally. Vertical aggregation happens when all elements of a system, from the lowest-level component (e.g., a valve) through the highest-level system (e.g., an airplane), are integrated into a cohesive fabric that can be accessed at any level. Horizontal aggregation takes place when the fabric aggregates the threads generated by all the specific instances of a twin (e.g., the twin of each Tesla automobile sold). These are threads from what could well be millions of other instances of the same type of twin whose physical counterparts are in various locations. As the twin expands and encompasses more of its ecosystem, it incorporates threads from other types of twins (e.g., other types of automobiles and the roads they drive on). This creates a dynamic picture of an ecosystem that can comprise millions of instances of diverse types of entities. Aggregation is already taking place, enabled by Sensor Fusion [19] and Data Lake [39] technologies and solutions. These provide the ability to aggregate multiple forms of structured and unstructured data, organize and make sense of it, and provide easy and consistent access in real time. This will be critical to the ultimate integration of disparate Digital Twins created by multiple parties using different tools and used by different enterprises. A common standard that everyone uses can address (and constrain) the ways a set of highly diverse and variable Digital Twin implementations talk to each other and interoperate. In this case, various protocols, formats, structures, and functions would need to be agreed to by all the parties involved, and organizations are attempting to do just this. An alternative is to not impose a predetermined structure on the types of inputs and outputs a Digital Twin uses and produces. Instead, Artificial Intelligence (AI) tools and methods can make sense of disparate and unstructured data. The meta-system would accommodate unstructured, different, and constantly evolving forms of digital threads and allow any set of twins to interoperate. The data interface between twins would be more like the unstructured world of MongoDB than the structured world of an Oracle database. It is likely that a hybrid of the two approaches will eventually emerge.
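A minimal sketch of the vertical and horizontal aggregation described above, assuming a simple in-memory ‘fabric’ keyed by twin type, instance, and system level; the structure and names are illustrative assumptions and do not refer to any particular Sensor Fusion or Data Lake product.

```python
from collections import defaultdict
from statistics import mean

class ThreadFabric:
    """Toy aggregation fabric for Digital Twin data threads."""
    def __init__(self):
        # readings[(twin_type, instance_id, level)] -> list of numeric readings
        self.readings = defaultdict(list)

    def ingest(self, twin_type: str, instance_id: str, level: str, value: float):
        # A 'thread' delivers one reading from one component of one twin instance.
        self.readings[(twin_type, instance_id, level)].append(value)

    def vertical(self, twin_type: str, instance_id: str) -> dict:
        # Vertical aggregation: every level (valve -> subsystem -> system)
        # of a single twin instance, accessible in one view.
        return {lvl: mean(vals)
                for (t, i, lvl), vals in self.readings.items()
                if t == twin_type and i == instance_id}

    def horizontal(self, twin_type: str, level: str) -> float:
        # Horizontal aggregation: the same level across every instance
        # of the same type of twin (e.g., every installed valve).
        vals = [v for (t, _, lvl), vs in self.readings.items()
                if t == twin_type and lvl == level for v in vs]
        return mean(vals)

fabric = ThreadFabric()
fabric.ingest("cold-storage", "unit-001", "valve", 4.2)
fabric.ingest("cold-storage", "unit-001", "compressor", 71.0)
fabric.ingest("cold-storage", "unit-002", "valve", 4.9)
print(fabric.vertical("cold-storage", "unit-001"))   # one unit, all levels
print(fabric.horizontal("cold-storage", "valve"))    # all units, one level
```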
3.3 Twin-as-a-Service (TaaS) Business Model

A model that sells a service rather than a specific product or other real-world artifact. The users never own the artifact; instead they pay the enterprise that owns and maintains it for its use.
The Twin-as-a-Service (TaaS) business model puts the twin at the center of a service offering. TaaS is the logical extension of the Product-as-a-Service [41] and Software-as-a-Service (SaaS) [6] models being deployed by software providers today. The ‘as a service’ model works on the principle of regular subscription fees, often at various
service levels, on a per-person, per-enterprise, or per-usage basis. The user has the advantage of not having to be concerned with maintaining and enhancing the artifact providing the service. Many ‘as a service’ business models have been proposed [1]. The Twin-as-a-Service business model is one in which the real-world artifact, along with its twin, provides services that are valuable to users in both the real and the virtual worlds. A Digital Twin is an optimal vehicle for providing a variety of services, especially when it is ‘intelligent’ and can diagnose, predict, prevent, and rejuvenate its real-world counterpart. In the GTCS journey presented above, there were several instances of TaaS. During design, the GTCS twin offered design alternatives and suggestions for various parts of the system. When the physical structures are assembled, commissioned, and occupied, the system twin becomes the central source and coordinator of all services provided to its operators, human or otherwise. Digital Twins will become an asset for any and every real-world artifact, the more complex and dynamic the better. Because of this, ownership of the twin will be increasingly important. Questions about who builds and ‘runs’ the twins will need to be negotiated as they are built and used. Real-world artifacts have supply chains, and so will Digital Twins. The partitioning of cash flows derived from Digital Twins will be complicated and will need to be negotiated among the diverse stakeholders. In the GTCS example, one can imagine that the users (e.g., carriers, warehouses, retailers, etc.) pay a monthly fee for access to all the services they need and use. This fee will need to be allocated to the potentially hundreds of entities that contributed to the Digital Twin design and instantiations the users make use of. As Digital Twins built and used by commercial, governmental, and societal enterprises become increasingly complex, the necessary agreements between the parties will also become more complex. In the case of the GTCS, an enterprise can be both a (fractional) owner of the Digital Twin and a user. Smart contracts on a blockchain like Ethereum will most likely be used to implement these complex commercial agreements.
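As a minimal sketch of the fee-allocation problem just described, assume (purely for illustration) that a monthly TaaS fee is split pro rata among contributors according to negotiated shares; in practice such terms might be encoded in a smart contract, but the parties and weights below are hypothetical.

```python
def allocate_fee(monthly_fee: float, shares: dict[str, float]) -> dict[str, float]:
    """Split a TaaS subscription fee among contributors by negotiated share."""
    total = sum(shares.values())
    return {party: round(monthly_fee * weight / total, 2)
            for party, weight in shares.items()}

# Hypothetical contributors to the GTCS twin and their negotiated weights.
shares = {
    "twin-platform-operator": 40,
    "valve-twin-vendor": 5,
    "warehouse-twin-vendor": 25,
    "ai-analytics-provider": 20,
    "integration-partner": 10,
}
print(allocate_fee(10_000.0, shares))
# {'twin-platform-operator': 4000.0, 'valve-twin-vendor': 500.0, ...}
```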
3.4 New Business Model Possibilities

The adaptation of existing business models to the world of Digital Twins will create an astonishing diversity of new business possibilities. Even more intriguing are new business models that would not be possible without a Digital Twin. These are harder to envision because they do not yet exist and no one has yet experienced them, but certain early signals give indications of future possibilities and the direction things could go.
–– Synthetic data – New business models will use the enormous amounts of synthetic data that Digital Twins make possible. The twin creates synthetic data that does not actually exist in the real world. The rapid development of AI solutions is due,
in no small part, to the availability of massive amounts of training data gathered from the real world. Digital Twins can dramatically augment the amount and diversity of data used to train AIs. Synthetic data solves both the lack-of-data problem and the diversity-of-data problem for AI training. A realistic Digital Twin can generate ‘realistic’ data for situations that, in the physical world, are either exceedingly rare or have not occurred yet. The value of such data, and the ways this data can be monetized, have yet to be fully explored.
–– Digital first – These are business models that make the Digital Twin the predominant source of value, with the physical world secondary. In a digital-first business model, the Digital Twin would be the ideal to which the physical manifestation constantly aspires. In one conception, enterprises pay a subscription for a Digital Twin of their choice and the real-world manifestation of the twin is provided as part of the deal. The real-world manifestation would be constantly updated and refreshed over time as the twin evolved and the physical instance fell behind or degraded.
–– Virtual Economy – This type of business model is based on the development of comprehensive and immersive Digital Twins that bring about the possibility of entire economies that exist solely in the virtual world. In these ‘Metaverses’, there would be no real-world counterpart other than the human experience of ‘being’ in the world. This is happening today in MMORPGs (Massively Multiplayer Online Role-Playing Games) and the various Metaverses being created [45]. Within these virtual environments, currency is being earned (or bought with ‘real’ currency) and spent on virtual artifacts. The emergence of non-fungible tokens (NFTs) [48] provides the means to verify ownership of digital assets. In their early days, NFTs are being used primarily for digital art or other types of creative expression or unique events. But an NFT can verify ownership of any digital asset, including a Digital Twin and its components. NFTs will make business models based on Digital Twin ownership possible.
In the early 2020s, these models and others are the focus of early experiments and field trials; their adoption, however, has not yet been widespread. They will lead to many new and unanticipated business models as Digital Twins become more capable and ubiquitous and evolve to represent entire enterprises and ecosystems.
4 Business Cases – Digital Twin Adoption and Adaptation

An enterprise is a complex system that operates on multiple levels, from its most basic engineering, marketing, sales, operational, and other activities to its web of interactions with its supply chains, channels, customers, competitors, regulators, and society. And the complexity of the enterprise ecosystem is increasing – rapidly. In addition, the artifacts an enterprise produces – its products, services, business models, and processes – are themselves reaching new levels of complexity. No
matter how seemingly simple or isolated, an enterprise cannot create a new product or service without considering the ecosystem it will exist in. The ecosystems that exist today are often beyond the unaided comprehension of individuals or teams, and are becoming more so. Digital Twins are both a cause of and a solution to this increasing complexity. The following business cases illustrate how Digital Twins both manage and contribute to increasing enterprise and artifact complexity, the ways they can be used, and the value that is derived from them.
1. Product Twin – A representation of a real-world artifact that is created by one enterprise to be used by another enterprise or person. The artifact can be simple and atomic, or complex, with multiple interacting components and subsystems from multiple enterprises.
2. Operational Twin – A representation of the operations of an enterprise, covering the physical equipment and processes as well as the business processes. This includes the actions of agents, both human and artificial.
3. Behavioral/Experiential Twin – A representation of the complex behavior of interacting agents, human or artificial, in an environment where the objective is understanding and influencing the behavior of the humans. This includes what people experience as a result of their interaction with physical and digital artifacts.
4. Innovation Twin – Any type of twin that self-modifies, including modifying the real-world artifacts it mirrors, causing virtual and real-world artifacts to evolve new functions and capabilities. A side effect is an increase in the complexity of the system.
Each of these cases reveals a unique perspective on the role a Digital Twin plays, and the jobs it is doing, for the enterprises and individuals that rely on it.
4.1 Product Twin

One of the most common uses of Digital Twins is to represent a physical product, be it a simple valve, a complex jet engine, or an entire aircraft. Every product produced has a supply chain. It starts with materials, and value is added through the creation of components, sub-systems, assemblies, systems, and ultimately the product that an end customer buys and uses. These often complex supply chains create unique Digital Twin opportunities, and unique Digital Twin issues, for the enterprises that take part. Each vendor, no matter where it sits in the supply chain, has claims on the value that a Digital Twin of its component or subsystem creates. The result is a hierarchical and complex network of interacting twins from multiple manufacturers, each of which has its own edge sensors, its own digital threads, its own clouds, and its own analytic tools. Every one of these companies has legitimate reasons for having a Digital Twin of its own product and for wanting to extract value from that data.
Consider, for example, a manufacturer of industrial components such as valves used in refrigeration equipment. When sensors are embedded in these valves, the manufacturer has the potential to access data across all of its operating valves, no matter where they are being used. Some valves go into cooling subsystems that have other components (e.g., compressors, controllers, etc.) from other companies in the supply chain. These components are used to build cold storage units, which are installed in restaurants. These restaurants are, in turn, serviced by food service companies that deliver supplies directly to the cold storage units installed in the restaurants. Figure 8 provides an example of how complex the web of interacting twins rapidly becomes. The possibilities that come from having a connected sensor on a simple valve embedded deep inside a cold storage unit in a restaurant go beyond mere valve performance monitoring, diagnosis, and maintenance. Having continuous and historical operating data from thousands or millions of installed valves makes it possible to see complex patterns of operation. This allows the valve manufacturer to make predictions that go far beyond what is possible with just a single valve twin. The valve manufacturer naturally wants to gather information from the hundreds of thousands of valves it has installed, not just in cold storage units, but in other products operating in thousands of other installations. This horizontal opportunity of Digital Twins relies on the aggregation of information about every valve everywhere. The jobs that the valve Digital Twin needs to do in this case are to provide the valve manufacturer with predictive capability and with performance information for future valve designs and innovations. In addition, each valve is part of a functioning system comprising dozens of other components and subsystems, each with its own Digital Twin, which needs to communicate and coordinate with the others. This disparate collection of digital threads from distinct providers needs to be woven together to create a comprehensive twin of the entire system. This is the vertical opportunity of Digital Twins.
Fig. 8 An intersecting network of Digital Twin ecosystems [50]
There are many jobs that the valve Digital Twin needs to do. The most basic is to monitor the exact current state of a specific valve in an operating cold storage unit. In addition, it needs to exchange data at the appropriate time with other Digital Twins of the other components of the system and with the twin of the cold storage unit itself. Moreover, the twin of the lowly valve could conceivably interact directly with the twins of the restaurant and its supply chain.
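To ground these jobs, here is a minimal, hypothetical sketch of a valve twin that mirrors the latest observed state and shares it with the twin of its parent cold storage unit; the class and field names are invented for illustration and are not drawn from any vendor's twin platform.

```python
from dataclasses import dataclass, field
import time

@dataclass
class ValveTwin:
    """Minimal twin of one valve: mirror its state, publish it upstream."""
    valve_id: str
    state: dict = field(default_factory=dict)          # latest mirrored readings
    subscribers: list = field(default_factory=list)    # e.g. the cold-storage-unit twin

    def update_from_sensor(self, reading: dict):
        # Job 1: monitor the exact current state of this specific valve.
        self.state = {**reading, "timestamp": time.time()}
        self._publish()

    def _publish(self):
        # Job 2: exchange data with the twins of other components
        # and with the twin of the cold storage unit itself.
        for twin in self.subscribers:
            twin.receive(self.valve_id, self.state)

class ColdStorageUnitTwin:
    def __init__(self):
        self.component_states = {}

    def receive(self, component_id: str, state: dict):
        self.component_states[component_id] = state

unit = ColdStorageUnitTwin()
valve = ValveTwin("valve-A17", subscribers=[unit])
valve.update_from_sensor({"position_pct": 42.0, "temperature_c": -18.3})
print(unit.component_states["valve-A17"])
```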
4.2 Operational Twin

The situation gets more complex and interesting as various distinct product twins come together in an operating enterprise. This is when the Digital Twin goes beyond mirroring a distinct product to mirroring an operating enterprise. As Fig. 8 shows, there are product and service hierarchies and intersecting ecosystems of both the real-world artifacts and their Digital Twins. These intersecting hierarchies and ecosystems have their own, sometimes competing, objectives. As a hypothetical example, take a restaurant chain that operates 150 restaurants throughout North America. Like most other chains, it has standards for building and operating its restaurants. As part of this standard, it installs cold storage units in each restaurant. The cold storage unit (and its Digital Twin) becomes part of the operating system for a specific restaurant instance and for the entire restaurant chain. The operating system itself has a Digital Twin with its own digital threads, cloud storage, AI, etc. This twin needs to interact with the operating systems and Digital Twins of other companies that supply food staples to the restaurant chain. The complex web of intersecting Digital Twins opens new and interesting possibilities. Imagine coordinating food supply with the operation of the cold storage unit. When a food delivery truck is within 30 min of a delivery to a specific restaurant, the cold storage unit (and the valve embedded deep within it) would know this. The unit could ‘pre-cool’ in anticipation of the door being opened for delivery, thus maintaining a more even temperature (and preventing spoilage). The Digital Twin of the valve, buried deep in the bowels of the cold storage unit, learns over time to predict when food supply deliveries will take place. As the above scenario shows, the operations of any enterprise involve many complex interactions between multiple physical and business process systems. An operational twin must therefore support additional capabilities beyond what a self-contained system needs, no matter how complex it is. These include:
–– The modeling of multiple physical systems, each of which has its own Digital Twin.
–– The merging of physical twins with business process twins.
–– Bringing human (or other) agents into the twin.
When this happens, an operational Digital Twin can also incorporate Intelligent Process Automation (IPA) activities. Enterprises have deployed IPA in back-office functions to automate tasks such as accounting, order entry, customer support, and other tasks that have traditionally been done by humans. Any repetitive business
task, no matter how complicated, is now a target for software automation using IPA. Companies have seen productivity and efficiency improvements as high as 80–100%. A Digital Twin creates the possibility of a physical equivalent of IPA. An AI-enabled Digital Twin automates processes for the physical world and its interaction with the business process world. It becomes the IPA equivalent for an entire enterprise’s activities, both real and virtual, physical and digital, product and service. When the real world is a live information fabric, an enterprise can understand exactly what is happening everywhere, all the time, and it can affect what is happening in real time. An example of this is a fuel terminal, where tanker trucks go to load up with fuel to transport to gas stations and other fuel users. The operation of these fuel terminals is quite complex, with efficiency and safety both primary concerns (Fig. 9). A fuel terminal may occupy a physical layout of approximately 5 acres. At the center of this layout are five lanes where trucks can enter and be loaded with fuel. Each lane has a complex fueling apparatus with tanks, hoses, pumps, gauges, safety equipment, etc. As each truck drives up to be loaded, the driver must follow a ten-step standard operating procedure (SOP) to (a) ensure safety and (b) have the tanker loaded and leaving within the allotted time. The jobs the Digital Twin needs to do are to ensure conformance to the operational procedures. This includes flagging any anomaly, halting operation in case of a safety issue,
Fig. 9 Digital Twin of fuel terminal operations [49]
suggesting corrective action, communicating with the involved parties, and validating identity and completion so that the business side of things (delivery, billing, etc.) has accurate information. When a truck arrives at the fuel loading station, there are several things it is important to know: Is the driver wearing a hard hat? Is the Scully cord that protects against static electricity connected? Is the vapor hose connected? Was the pump-down process completed? What is the riser time? When does the truck depart? Is there another truck ready to enter the station? A level of fidelity is needed so that, for example, the twin can distinguish between the different hoses that hang down or identify the Scully cord no matter how it is lying on the ground. The Digital Twin of the operation needs to take all these things into account. There are two ways of building such a Digital Twin. One way would be to install sensors on every piece of physical infrastructure – the trucks, the drivers, the pumps, the hoses, the lanes, the entries and exits to the property, etc. These would be used to build a twin that mimics the space, the mobile entities (trucks, people, etc.), and the rules governing the operation. A second way is to put cameras in strategic locations (and to use the cameras already installed for security). A camera (this can be any type of imaging device – visible, lidar, infrared, etc.) becomes a virtual sensor and can monitor all types of interactions in the physical environment. Point a camera at a chair to determine if it is occupied or unoccupied, at a glass to determine if it is full or empty, or even at a gauge cluster to determine if an unsafe condition is unfolding. With the right software interpreting what is being looked at, the physical thing the camera is observing becomes part of the Internet of Things without needing to be redesigned with embedded sensors and communication infrastructure. This is happening today with startups like Worlds and Matterport, which create and use Digital Twins built from the inexpensive and ubiquitous cameras available today. The case described above is a pilot project that Worlds is testing. In a world of ubiquitous imaging, a Digital Twin for every physical artifact, agent, and natural object becomes much closer to being realized.
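A minimal sketch of the SOP-conformance job described above, assuming the twin receives simple boolean observations from camera-based virtual sensors; the step names and the halting policy are illustrative assumptions, not the actual pilot implementation.

```python
# Ordered subset of a hypothetical loading SOP (illustrative, not the real ten steps).
SOP_STEPS = ["hard_hat_on", "scully_cord_connected", "vapor_hose_connected",
             "pump_down_complete", "loading_complete", "hoses_disconnected"]

def check_sop(observations: dict[str, bool]) -> dict:
    """Compare camera-derived observations against the SOP and flag anomalies."""
    missing = [s for s in SOP_STEPS if not observations.get(s, False)]
    # Treat safety-critical steps as grounds for halting the operation.
    safety_critical = {"scully_cord_connected", "vapor_hose_connected"}
    halt = any(s in safety_critical for s in missing)
    return {"conformant": not missing, "missing_steps": missing, "halt_loading": halt}

# Example: the vapor hose has not yet been detected by the virtual sensor.
print(check_sop({"hard_hat_on": True, "scully_cord_connected": True,
                 "vapor_hose_connected": False, "pump_down_complete": True}))
# -> {'conformant': False, 'missing_steps': [...], 'halt_loading': True}
```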
4.3 Behavioral/Experiential Twin

Once agents become part of the Digital Twin universe, new worlds of possibilities open. Physical artifacts can connect to their Digital Twins through digital threads, and digital threads can also connect people to their Digital Twins as they interact with the physical world and with others. They are all components of a behavioral twin that considers and predicts the often irrational, or even purposely nefarious, behaviors of humans. A Las Vegas casino recognizes this possibility and is experimenting with using Digital Twins to enhance gameplay, to prevent cheating and money laundering, and to manage the gaming floor.
Fig. 10 A Digital Twin of a Blackjack Table [49]
These are some of the primary Jobs-to-be-done in a proof of concept being carried out in 2021 (Fig. 10). Vegas is all about gameplay: the complex set of interactions between the dealer, the players, the cards, and the chips that is repeated over and over but is never the same. One job of a Digital Twin is to recognize behaviors of interest. These behaviors can be remarkably complex because humans behave in complex, and not always logical or anticipated, ways. Creating an accurate Digital Twin of the entire casino floor – all the tables, every chip, the chip machines, the dealers, the pit bosses, the players, etc. – would allow the following jobs to be done.
Bet Recognition – How many chips of what value is each player betting on each hand? The simplest use is to recognize a ‘whale’ – a person who is wagering substantial amounts. Whales are of special interest to a casino, and recognizing a new one early can be very lucrative.
Cash and Chip Management – How much money and how many chips of what type does a player have, and how are cash and chips exchanged? In Vegas, the exchange of cash for chips at the table is very structured. It involves how bills are counted, how chips are stacked and ‘fanned out’ to show the amount, and so on. The concern is not just a strict check at the end of the exchange; it is making sure the entire exchange process takes place properly so that all parties are comfortable.
Cheating – Cheating often involves complex forms of behavior that indicate non-gaming motivations. It often involves recognizing meta-behaviors to detect coordination between individuals. One example is when two accomplices sit at a blackjack table in the one and three positions: the person in the one position will play in a way that improves the odds of the person in the three position.
Money laundering – A person with a large amount of cash uses casino machines that change cash into coupons that can, in turn, be exchanged for chips. They play one or two hands and then cash out their chips. The (currently) untraceable chain of cash to chips to cash makes this exceedingly difficult to detect.
Gaming floor dynamics – The floor operation is a complex, improvisational, continuous dance. How many players are at each table, and when should tables be opened or closed? How is play proceeding at each table, and what is the table turnover? Where do individual players go, how long do they play, when do they bet, and when do they convert cash to chips, and vice versa? Managing the entire floor is a complex operation, and a Digital Twin can produce a persistent digital memory of all actions that occur on the casino floor.
The only way for a non-Digital Twin Vegas to recognize these situations (and others) is for the dealer, the pit boss, and the people monitoring the live video feeds (cameras are capturing everything) to notice the behavior and take the appropriate action. Things are inevitably missed. Now it is possible to use live video images as digital threads that connect the physical table and gaming floor to their Digital Twin. Doing so dramatically improves how each of the jobs listed above is done. By creating a digitized blackjack table and Digital Twins of the chips and the money, it becomes possible to create synthetic data that can be used to train an AI to detect behaviors of interest. Since AIs require large training sets, a Digital Twin can create the training sets synthetically instead of using captured video. In one test, in just 2 h, 170,000 virtual chip stack combinations were synthetically generated to produce the equivalent of 140 days of simulated game play. These synthetic images were then used to train an AI to recognize stacks of chips (how many, what value) in live video images. For AI training, synthetic data becomes the great equalizer. It allows startups to compete with the likes of Google and Facebook, who have all the ‘real’ data they need to train their AIs. If a company can create its own training data, it does not need to be Google. By training the blackjack AI on only synthetic data for about 2 h, an 86% accuracy rate was achieved on an actual table. The gaming example shows the potential for a Digital Twin not only to improve efficiencies but also to improve experiences. Imagine a smart Digital Twin that could detect when a player was ready to cash out their chips. The dealer or pit boss on the floor could be informed so they could immediately interact with the player to improve the player’s experience. Or suppose the twin could detect behaviors that indicate frustration, joy, despair, trepidation, or many other conditions? It could be possible to program an appropriate response to make the experience of both the player and those around them more fulfilling. The human experience is complex and
multi-faceted, and the potential for enhancing those experiences exists, but only if behaviors are made visible and acted upon.
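As a minimal sketch of the synthetic-training-data idea used in this case, the code below randomly generates labeled chip-stack descriptions that a renderer could turn into training images; the denominations, stack limits, and label format are assumptions for illustration and are not taken from the actual casino pilot.

```python
import random

CHIP_VALUES = [1, 5, 25, 100, 500, 1000]   # assumed denominations

def synthetic_chip_stack(max_chips_per_value: int = 20) -> dict:
    """Generate one labeled chip-stack configuration for synthetic rendering."""
    stack = {v: random.randint(0, max_chips_per_value) for v in CHIP_VALUES}
    return {
        "chips": stack,                                 # label: count per denomination
        "total_value": sum(v * n for v, n in stack.items()),
        "camera_angle_deg": random.uniform(15, 75),     # rendering variation
        "lighting": random.choice(["dim", "normal", "bright"]),
    }

# Generate a batch of labeled examples; in a full pipeline each record would be
# rendered into an image by the Digital Twin and paired with its label.
random.seed(0)
batch = [synthetic_chip_stack() for _ in range(5)]
for example in batch:
    print(example["total_value"], example["chips"])
```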
4.4 Innovation Twin

An innovation twin is a Digital Twin that can modify itself over time in ways that create both new virtual functions and new real-world artifacts. There are early indications that this will become a significant trend in the future. AIs are already being used to enhance creativity and invention [2, 17]. Automated 3D printing can directly transform a digital concept into a physical reality. Imagine a Digital Twin, by itself or in collaboration with a person, coming up with new concepts that do jobs that are not being done and directly creating the real-world artifacts that are required. This self-improving Digital Twin and its corresponding real-world artifacts are the natural outcome of the increasing intelligence being embedded in Digital Twins and of their increasing complexity and influence.
4.5 The Digital Twins of Enterprises and Ecosystems

Ultimately, there will be Digital Twins that encompass an entire enterprise’s operations, products and services, and business models. It will make no sense for a company to have separate Digital Twins of the products and services it sells, its manufacturing processes, its customer relationship processes, its supply chain processes, and its general business operations. These will merge into a unified Digital Twin that represents the entire enterprise and includes products and services (both consumed and created), artifacts and agents, processes, and experiences. Going even further, an enterprise twin can expand to include all the stakeholders of the enterprise. This includes employees and colleagues, customers, suppliers, partners, and competitors, as well as the communities the enterprise operates in, the environment, government, and society. These stakeholders and their twins become part of an entire ecosystem twin. The merging of the physical and the digital will become more comprehensive and valuable as it becomes a holistic twin of an entire ecosystem. To get a sense of how this might unfold, imagine a Digital Twin of the entire Amazon ecosystem. It would comprise all the physical items (warehouses, robots, trucks, etc.) as well as all the humans involved. This includes not only the over 600,000 employees but also the millions of affiliated partners and customers. It would include the myriad business processes required to keep things running continuously and smoothly. Such a twin would look very much like a Massively Multiplayer Online Role-Playing Game (an MMORPG such as Fortnite, without the battles). In this conception of the enterprise, there is no barrier between the digital representation of the enterprise and the enterprise itself. The physical entities within the
ecosystem are all connected by two-way digital threads to the virtual construct, which both shows what is happening and controls what is happening. Changes are made first in the virtual system and are then reflected in the physical system. This is the future of digital first.
5 Digital First – A Twin Is Forever

Every 20 years, locals tear down the Ise Jingu grand shrine in Mie Prefecture, Japan, only to rebuild it anew. They have been doing this for around 1,300 years. Some records indicate the Shinto shrine is up to 2,000 years old. The process of rebuilding the wooden structure every couple of decades helped to preserve the original architect’s design against the otherwise eroding effects of time. … This is an important national event. Its underlying concept — that repeated rebuilding renders sanctuaries eternal — is unique in the world. [27]
The Ise Jingu Shrine has a twin, but it is not a twin embodied digitally (although at some future time it may be). The ‘twin’ of the shrine is embodied both in the constantly decaying physical structure and in the minds of the humans who envision the ideal shrine. This illustrates an underlying principle true not only of the Ise Jingu Shrine but also of Digital Twins: it is the twin that is the constant and permanent ideal toward which the physical world strives but which it can never match.
5.1 The Persistence of the Twin

A Digital Twin lasts forever; its physical manifestation, not so much. The Ise Jingu shrine has evolved over the years through its reconstructions, with the addition of gold and copper adornments and Suedama (Buddhist orbs seen on various religious structures). It has evolved in form and structure as people’s conception of it has evolved. It is sturdier and safer, requires less maintenance, and is less prone to fire or damage than it was in 1300. Even though the materials, construction methods, and ornamentation have changed, it is still the shrine that people revere and hold in their minds as authentic. Imagine a future in which the Digital Twin is what creates and accrues value, and its physical manifestation is secondary (and temporary). After all, it is the digital that will live forever, whereas the physical counterpart wears down and decays. The value equation flips: the digital is ‘real’ and perfectly preserved and improved forever, while the physical is just the temporary instantiation of that digital reality – an imperfect and constantly decaying one at that. This implies that the relative balance between the value conferred on the real-world, physical artifact and on the virtual Digital Twin will shift. People and enterprises will increasingly pay for the Digital Twin that is used to shape and control their physical world. The movement of atoms and bits, under the direction of a Digital Twin, will become ‘on-demand’. The physical manifestation will hew as
closely as possible to the digital and constantly strive to keep up with what the twin says it should be, even as the twin evolves. Physical artifacts and agents will constantly be upgraded and rebuilt as the twin advances and as the technology for creating, on demand, ever more complex artifacts at ever more precise resolution advances. As the Digital Twin evolves, so does the artifact. The digital is the permanent artifact; its physical manifestation is temporary and constantly in need of attention and work.
5.2 The Digital Possibility and Promise

In early 2021, a ‘one-of-a-kind’ home designed by artist Krista Kim sold for $515,000 worth of Ethereum cryptocurrency. The unique thing about the sale is that the house only exists digitally [11]. The house comes with exquisitely designed spaces, furniture, and views intended to be ‘therapeutic’, and even comes with a background soundtrack by Jeff Schroeder of Smashing Pumpkins fame. Moreover, the furniture in the home can be produced in real life by a company in Italy. Perhaps most interesting, however, is that the home is intended to become part of the new owner’s personal ‘Metaverse’. It can be placed and used in this virtual world as a unique design that cannot be replicated. These restrictions are enforced through a non-fungible token (NFT) contract. This world of digital assets, blockchain technology, and NFTs [44] is intersecting with the world of Digital Twins in ways that provide new value creation opportunities and new businesses. The digital-first experience is taking hold in the commercial world as well. Examples of non-industrial Digital Twins illustrate the directions in which digital representations of the real world are heading. The world is evolving from ‘simple’ IoT sensors that track specific real-world conditions to full-blown Digital Twins that mirror complex products and services. Full-blown enterprise and ecosystem twins are being built that weave together multiple twins to mirror complex, multi-party systems. Ultimately, a comprehensive Metaverse (or Metaverses) will encompass a wide variety of disparate ecosystem Digital Twins. The journey is just beginning.
5.3 A Wave of Digital Twin Driven Innovation Will Create a Metaverse

The metaverse is where we will create the future and transform how the world’s biggest industries operate… In the future, the digital world… will be thousands of times bigger than the physical world. There will be a new New York City. There’ll be a new Shanghai. Every single factory and every single building will have a Digital Twin that will simulate and track the physical version of it… what’s going to happen is pieces of the digital world will be temporarily, or even semi-permanently, augmenting our physical world. It’s ultimately about the fusion of the virtual world and the physical world. – Jensen Huang, CEO, Nvidia [36]
The business of Digital Twins must acknowledge the astonishing growth in activity and discussion around the Metaverse and non-fungible tokens (NFTs) made possible by blockchain technology. Conversations about the Metaverse have been happening ever since Neal Stephenson introduced the concept in his 1992 novel Snow Crash [40]. In the early 2020s, these conversations, and the level of interest and activity related to the Metaverse and related concepts, reached a fever pitch. This is exemplified by two companies, Nvidia and Meta Platforms (formerly Facebook), and the high-profile speeches each of their CEOs has given on the topic. Despite the obvious hype being generated by Metaverse and NFT proponents, there are definite indications that the realms of IoT, Digital Twins, and the Metaverse will intersect and potentially merge in the future. The early experiments are being done. Many will fail, but that is the nature of all new things. Looking at the business of Digital Twins, it is useful to divide the Metaverse into two overlapping forms.
1. The first form of the Metaverse is about virtual, immersive worlds that are not necessarily attached to real-world physical objects and processes. It is the realm where virtual reality reigns, and it takes its impetus and many of its concepts from the world of gaming.
2. The second form of the Metaverse (sometimes called the Omniverse) is about merging the virtual and the physical world, augmenting one with the other. In this Metaverse, the virtual world is attached to, and extends, real-life artifacts and vice versa. It is the realm where augmented reality reigns, and it takes its impetus and concepts from the world of IoT, Industry 4.0, and other industrial and business initiatives.
This is an overly simple distinction between two forms of the Metaverse being discussed and promoted in the early 2020s. It does, however, provide a first-order approximation of a distinction that makes a difference. It provides a convenient means to discuss the role and evolution of Digital Twins, and it focuses attention on the Metaverse, especially the second form described above, as the apex of Digital Twin evolution. In its most complete and comprehensive conceptualization, the Metaverse ticks all the value boxes.
–– It supercharges the efficiency of real-world artifacts
–– It creates new experiences
–– It accelerates innovation to create new artifacts that never existed before
Eventually, there will be a complete, live twin of the world – a twin that both mirrors and influences the physical world of artifacts, agents, and natural objects. These twins will be heterogeneous compositions, built from the Digital Twins that millions of individuals and enterprises create and use. They will be live entities that persist over time, combining the physical and virtual in a seamless ‘4D’ experience. This will allow the real world to be measured in the way that companies like Google, Facebook, Amazon, and many others measure their informational worlds. Because everything is measurable in space and time, awareness, diagnostics, prediction, prevention, and restoration in the real world will be normal.
Fig. 11 A Metaverse composed of Digital Twin Ecosystems [15]
Whatever happens in the real world will be expressed in a 4D twin, and whatever the 4D twin determines should happen will be expressed in the physical world. When physics engines are embedded within every Digital Twin, physical attributes like distance, space, time, velocity, direction, altitude, and depth will not only be understood, but will aid in the understanding and ultimately the measurement of everything (Fig. 11). Increasingly sophisticated Digital Twins, up to and including a Metaverse that represents the world, will become more prevalent, and more of the physical and digital worlds will be represented and controlled more precisely. In the ultimate expression, there will be Digital Twins that are smart virtual counterparts to every physical entity, be they agents, artifacts, or natural objects. They will take part in making up a Metaverse that spans everything from the most minute detail to the most global ecosystem. In addition, there will be ‘extra-physical’ aspects of Digital Twins and the Metaverse – things that have no counterpart in the physical world but are nevertheless valued for what they do and the experiences they provide. The technology of Digital Twins will evolve, and the digital threads and fabrics of the physical and digital universe will become more numerous, stronger, and more tightly woven together. The various digital ecosystems that exist as a result will become more interconnected and interoperable and will evolve into the Metaverse that many today are envisioning. Each ecosystem, and the Metaverse they together comprise, will become smarter and begin to understand what is happening in the real world. They will incorporate better physics engines and represent entities with increasing precision and accuracy. The line between the physical and the digital will start to blur, and true ‘reality’ will become a hybrid of the two. This hybrid world will be an interface to a digital-physical reality. This is how we achieve the ‘magnificent powers of perception’ with which people and enterprises can measure, optimize, and improve real-life processes and experiences. It changes
how people experience the world, fundamentally changes how business is done, and will create tremendous value. This is the game that every enterprise will need to play.
5.4 The Size of the Prize

The business models and business cases described above illustrate just a small sampling of the wide variety of ways a Digital Twin can create value. The consensus opinion is that the joint futures of IoT, Digital Twins, and the Metaverse will generate significant economic value. It is not difficult to find market research reports for IoT, Virtual Reality, Augmented Reality, Digital Twins, Intelligent Process Automation, and the Metaverse that predict market size and growth rates for the next few years. These reports have overlapping and diverse ways of defining and segmenting the market for the virtual representation of real-world entities. Their estimates range from single-digit billions at the beginning of the 2020s to multiple hundreds of billions by the end of the decade, with annual growth rates from 10% to 60+%. Various market research reports published in 2021 give estimates for the size and growth rates of the IoT, Digital Twin, and Metaverse markets.
• IoT – between $800B and $1.4 trillion by 2030, with a CAGR of between 25% and 70% [13, 25, 38]
• Digital Twin – between $48B and $78B by 2026–2027, with a CAGR between 40% and 60% [16, 22]
• Metaverse – between $210B and $600B by 2026–2027, with a CAGR around 43% [3, 21]
The variability in the market size and growth rate estimates results from natural uncertainty about the future and the fuzzy nature of how the boundaries and segments of these markets are defined. This latter aspect is probably not going to change. In fact, it will probably get worse as IoT applications merge into Digital Twin applications, which themselves look a lot like parts of the Metaverse. As IoT technology advances, it will become more ubiquitous and intelligent, and IoT implementations will cease to be independent of the Digital Twins they are part of. Similarly, as Digital Twin technology and implementations get more sophisticated, they will represent a broader scope and higher level of the real world and will start to look like subsets of a comprehensive Metaverse. When the virtual worlds of the Metaverse start to mingle with physical reality, the boundaries between the three ‘markets’ listed above will become meaningless. For these reasons, a more accurate, first-order approximation of the total available market is obtained by adding up the above estimates and subtracting 20% for ‘overlap’. This gives an overall size of approximately $1.6 trillion by 2027, growing at approximately a 40% year-over-year rate. This may be a significant underestimate.
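As a check on the arithmetic above, the short calculation below takes the upper ends of the cited ranges and applies the 20% overlap discount; using the upper bounds is an assumption made here for illustration, since the text does not state which points in the ranges were summed.

```python
# Upper ends of the cited 2021 market forecasts, in billions of US dollars.
iot_upper    = 1400   # IoT, by 2030
digital_twin = 78     # Digital Twin, by 2026-2027
metaverse    = 600    # Metaverse, by 2026-2027

combined = iot_upper + digital_twin + metaverse   # 2078
adjusted = combined * (1 - 0.20)                  # subtract 20% for 'overlap'
print(round(adjusted))   # ~1662, i.e. roughly $1.6-1.7 trillion
```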
Chapter “The Digital Twin: Past, Present, and Future” explores in more detail the nature and growth of the Digital Twin market.
6 The Democratization of Digital Twins and the Metaverse

When we create a Digital Twin, we dematerialize real-world artifacts so that they are represented as information that can be analyzed, manipulated, and used to affect their originating source. When the real world is captured and re-played, live, inside a smart Digital Twin, real life becomes a coherent tapestry of live data threads that enterprises can measure, learn from, optimize, and improve upon. Frictionless, regenerative digital objects invite enterprises to iterate and take chances, because the consequences can be seen before they occur in the real world. Since their physical beginnings in the 1960s with flight simulators, twins have steadily become more widespread and accessible. From the early days of computing, when Digital Twins could be afforded only by major national projects, they were soon adopted by large corporations that could afford bespoke implementations. Then, with the growth of the Internet of Things (IoT) and initiatives like Industry 4.0, more enterprises began experimenting with and using Digital Twins to enhance their engineering and operations. Now we are seeing the emergence of new tools and methods that let both individuals and enterprises create Digital Twins in a fraction of the time and with more functionality. They are becoming a natural part of the everyday business landscape. This trend toward democratization will continue. No longer will Digital Twins be the sole province of governments and enterprises that can afford custom, bespoke implementations; democratization will put them within reach of virtually anyone. The trend toward democratization is accompanied by other trends that will shape the future growth of Digital Twins as they permeate the lives of individuals and enterprises. The following lists a few of the trends that will shape the Digital Twin future.
System Trends – The changing complex system dynamics: Political, Economic, Social, Technological, Environmental, and Legal (PESTEL)
• There will be an increasing economic impact from the adoption of Digital Twins. Entire economies will emerge based on the value being created and transferred in the virtual world.
• Enterprises will become increasingly reliant on Digital Twins for their operations and their business models. The value of Digital Twins will grow as their complexity and dynamism expand to encompass more of the real world. Companies that do not keep up will be risking their continued existence.
• Digital Twins will become increasingly important to how governments and society function and will therefore be under increasing scrutiny. Political and legal factors will play an increasingly significant role in the Digital Twin future.
Demand Trends – The changing needs and desires of individuals and enterprises: what is wanted
• Individuals and enterprises will be increasingly comfortable interacting in the virtual realm. They will come to insist on virtual interaction as a key part of their work and private lives, and they will be increasingly networked and increasingly sophisticated in what they demand of Digital Twins.
• The human and machine interfaces between the virtual and real worlds will become increasingly sophisticated and more ‘natural’. Interacting with a Digital Twin, or in the Metaverse, will become more like real-world interaction.
• Individuals and enterprises will want all the digital threads they encounter woven together into a comprehensible fabric. They will want to access their virtual view of the digital world naturally, and they will not want to hassle with multiple, incompatible, and non-integrated twins.
Design Trends – The changing technologies and solutions, products, and platforms: what is possible
• The supporting technologies underlying Digital Twins will become increasingly capable at an accelerating pace. The ability to create twins using low-code or no-code approaches will make twins accessible to a wider constituency.
• The complexity and sophistication of twins and multi-twin systems will increase. They will become more intelligent and will begin to self-modify, creating an accelerating flywheel of more complex systems and more real-world effects.
• Digital Twin supply chains will transform to become more organized and efficient as Clayton Christensen’s law of ‘Conservation of Attractive Profits’ [42] takes hold.
The digitization of virtually everything is now inevitable. We have crossed a threshold that will drive waves of innovation and present both opportunities and threats to every enterprise.
6.1 With Democratization Comes Responsibility

Digital Twins, and the Metaverses they are a part of, will become some of the most influential and valuable entities ever created. They will be central to humanity's ability to live in the increasingly complex, distributed, and heterogeneous worlds we build, both physical and virtual. It will become more difficult, if not impossible, to build real-world systems of desired complexity without a corresponding Digital Twin. AI-enabled Digital Twins will increasingly manage and influence more of our real-world outcomes and experiences.
But with the ever-increasing influence of Digital Twins and the value that they create, key questions remain. How will the value they create be distributed? Who will own them and reap their benefits? Democratization implies not only equality of access but fair distribution of benefits. The Digital Twin and Metaverse ecosystems will have failed if they are the sole province of an elite few who own and control the infrastructure and supply chains and reap the lion's share of the rewards. It is possible that a small collection of companies such as Google, Facebook, Apple, Microsoft, Nvidia, or a few other massive enterprises will end up dominating the space. It's also possible that more equalizing structures will emerge, perhaps something like Distributed Autonomous Organizations [33], that can equitably distribute the tremendous value that is created. Whichever trajectory is followed, it will be incumbent on the participants to seriously consider the long-term benefits to society.

In the early 2020s, governments, society, investors, and enterprises realized that there needs to be a balance between a growth-at-all-costs mindset and purpose-driven outcomes that promote equality and benefits for all. Movements such as Corporate Social Responsibility (CSR), Enlightened Shareholder Value (ESV), the Triple Bottom Line (PPP), Environmental, Social, and Governance (ESG), and Sustainability have caused enterprises to consider their long-term effects on the world. Purpose-driven innovation [34] requires that those who create transformational change consider more than just the bottom line.

As you read the following chapters, ask these questions and think about how Digital Twins will affect your own life, that of your friends and colleagues, and that of the enterprises you are part of:

1. What new business models are possible and beneficial? What new jobs need to be done?
2. What would a digital-first world look like for the application being considered?
3. How can the Digital Twin be a part of a larger ecosystem and Metaverse?
4. How can the value that is created be distributed in ways that promote equality?
5. How will the Digital Twin evolve over time? How can the complexity and dynamism of the Digital Twin be taken to the next level?
6. How could the Digital Twin be appropriated by others, for both good and bad effects?
7. In what unexpected ways could the Digital Twin be used, and what are the potential unintended consequences?
8. How will your (and the enterprise's) reputation be affected? What regulations and policies will be affected, or will affect its use?

Any perspective on the business of Digital Twins must grapple with the issue of equality. Equality of access, equality of opportunity, and fair distribution of economic outcomes must all be considered. The disruptive transformations that Digital Twins and the Metaverse will cause have the potential to be either great equalizers or to create great disparities. The answers to the questions posed above, among others, will ultimately determine how Digital Twins influence the future worlds we inhabit, for good or otherwise.
References

1. Arkan, Ç. (2021). How to get started with three innovative business models enabled by digital twins [Online]. Available at: https://www.forbes.com/sites/forbestechcouncil/2021/06/08/how-to-get-started-with-three-innovative-business-models-enabled-by-digital-twins/
2. Autodesk. (2021). Generative design [Online]. Available at: https://www.autodesk.com/solutions/generative-design
3. Sawant, V. (2021). Metaverse market size, share, companies & trends analysis report. Brand Essence.
4. Business Strategy Hub. (2021). 50 types of business models (2021) – The best examples of companies using it [Online]. Available at: https://bstrategyhub.com/50-types-of-business-models-the-best-examples-of-companies-using-it/
5. Campbell, P. (2019). The birth of freemium [Online]. Available at: https://www.profitwell.com/recur/all/birth-of-freemium
6. Carey, S. (2021). What is SaaS? Software as a service defined [Online]. Available at: https://www.infoworld.com/article/3226386/what-is-saas-software-as-a-service-defined.html
7. Chainlink. (2021). What is a blockchain oracle? [Online]. Available at: https://chain.link/education/blockchain-oracles
8. Christensen, C., & Raynor, M. (2013). The innovator's solution: Creating and sustaining successful growth. Harvard Business Review Press.
9. Christensen, C., Hall, T., Dillon, K., & Duncan, D. (2016). Know your customers' "jobs to be done". Harvard Business Review.
10. Condon, S. (2019). In the wake of the Notre Dame Cathedral fire, digital scans offer hope for restoration [Online]. Available at: https://www.zdnet.com/article/in-the-wake-of-the-notre-dame-cathedral-fire-digital-scans-offer-hope-for-restoration/
11. Corberr, K., & Jackson, J. (2021). The world's first NFT digital house is now open to the public [Online]. Available at: https://www.housebeautiful.com/lifestyle/a35863587/mars-house-digital-nft-home-krista-kim/
12. Cuofano, G. (2020). What is a business model? 70+ successful types of business models you need to know [Online]. Available at: https://fourweekmba.com/what-is-a-business-model/
13. Fortune Business Insights. (2021). Internet of Things (IoT) market size, share & COVID-19 impact analysis.
14. Gartner. (2021). Customer journey [Online]. Available at: https://www.gartner.com/en/marketing/glossary/customer-journey
15. Metamorworks from iStock by Getty Images. (2021).
16. Grand View Research. (2021). Digital twin market size, share & trends analysis report.
17. IBM. (2021). The quest for AI creativity [Online]. Available at: https://www.ibm.com/watson/advantage-reports/future-of-artificial-intelligence/ai-creativity.html
18. Investopedia. (2021). Environmental, Social, and Governance (ESG) criteria [Online]. Available at: https://www.investopedia.com/terms/e/environmental-social-and-governance-esg-criteria.asp
19. Karimi, K. (2013). The role of sensor fusion and Remote Emotive Computing (REC) in the Internet of Things [Online]. Available at: https://www.nxp.com/docs/en/white-paper/SENFEIOTLFWP.pdf
20. LetsBuild. (2019). What are BIM objects and how can you benefit in construction management? [Online]. Available at: https://www.letsbuild.com/blog/bim-objects
21. Market Research Future. (2021). Metaverse market research report.
22. Markets and Markets. (2021). Digital twin market – Global forecast to 2026 [Online]. Available at: https://www.marketsandmarkets.com/Market-Reports/digital-twin-market-225269522.html
23. Matterport. (2021). Matterport celebrates 10 years of innovation, growth and industry firsts [Online]. Available at: https://matterport.com/news/matterport-celebrates-10-years-innovation-growth-and-industry-firsts
24. McGrath, R. H. (2019). The pace of technology adoption is speeding up. Harvard Business Review, 25 September.
25. Mordor Intelligence. (2021). Internet of Things (IoT) market – Growth, trends, COVID-19 impact, and forecasts (2021–2026).
26. NFTExplained. (2021). The complete OpenSea guide (What it is & how to use it) [Online]. Available at: https://nftexplained.info/the-complete-opensea-guide-what-it-is-how-to-use-it/
27. Nuwer, R. (2013). This Japanese shrine has been torn down and rebuilt every 20 years for the past millennium [Online]. Available at: https://www.smithsonianmag.com/smart-news/this-japanese-shrine-has-been-torn-down-and-rebuilt-every-20-years-for-the-past-millennium-575558/
28. Osterwalder, A., & Pigneur, Y. (2010). Business model generation: A handbook for visionaries, game changers and challengers. Wiley.
29. Pahwa, A. (2021). What is a business model? The 30 types explained [Online]. Available at: https://www.feedough.com/what-is-a-business-model/
30. Preston, Q. (2020). Worlds raises $10M in Series A funding as Dave Copps' AI startup launches from stealth [Online]. Available at: https://dallasinnovates.com/worlds-io-raises-10m-series-a-funding-as-it-launches-from-stealth/
31. Ritchie, H., & Roser, M. (2017). Technology adoption. Our World in Data.
32. Rogers, E. (2003). Diffusion of innovations (5th ed.). Free Press.
33. Schecter, B. (2021). The future of work is not corporate – It's DAOs and crypto networks [Online]. Available at: https://future.a16z.com/the-future-of-work-daos-crypto-networks/
34. Schmitt, L. (2020). Purpose-driven innovation. In A. Mills (Ed.), The other side of growth: An innovator's responsibilities in an emerging world (pp. 11–44). Global Innovation Institute.
35. Scrum Institute. (2018). A comprehensive list of business models to accelerate you and your business [Online]. Available at: https://www.scrum-institute.org/blog/A-Comprehensive-List-of-Business-Models-To-Accelerate-You-and-Your-Business
36. Shapiro, E. (2021). The metaverse is coming. Nvidia CEO Jensen Huang on the fusion of virtual and physical worlds [Online]. Available at: https://time.com/5955412/artificial-intelligence-nvidia-jensen-huang/
37. Tallon, A., Salan, L., & UlyssePixel, from Shutterstock. (2021).
38. Statista. (2021). Internet of Things (IoT) total annual revenue worldwide from 2019 to 2030 [Online]. Available at: https://www.statista.com/statistics/1194709/iot-revenue-worldwide/
39. Stedman, C., & Lutkevich, B. (2021). What is a data lake? [Online]. Available at: https://searchdatamanagement.techtarget.com/definition/data-lake
40. Stephenson, N. (1992). Snow crash: A novel. Bantam Books.
41. Tasker, G. J. O. (2020). Get ready for the product-as-a-service revolution [Online]. Available at: https://www.forbes.com/sites/servicenow/2020/10/15/get-ready-for-the-product-as-a-service-revolution/
42. Thompson, B. (2021). Netflix and the conservation of attractive profits [Online]. Available at: https://stratechery.com/2015/netflix-and-the-conservation-of-attractive-profits/
43. Thurston, T. (2021). Galaxies unknown: How discovering the true scale of markets is changing what we thought we knew about business [Online]. Available at: https://blog.growthsci.com/galaxies-unknown-how-discovering-the-true-scale-of-markets-is-changing-what-we-thought-we-knew-about-business/
44. Vincent, M. (2021). What are digital assets and how does blockchain work? [Online]. Available at: https://www.ft.com/content/2691366f-d381-40cd-a769-6559779151c2
45. Wells, C., & Egkolfopoulou, M. (2021). The metaverse: Where cryptocurrency, gaming and capitalism co-exist [Online]. Available at: https://www.thenationalnews.com/business/money/2021/11/30/the-metaverse-where-cryptocurrency-gaming-and-capitalism-co-exist/
46. White, N. (2020). Digital twin vs. digital thread: Defining the concepts [Online]. Available at: https://www.ptc.com/en/blogs/corporate/digital-twin-digital-thread
47. Whitney, L. (2021). The best LiDAR apps for your iPhone and iPad [Online]. Available at: https://www.pcmag.com/how-to/the-best-lidar-apps-for-your-iphone-12-pro-or-ipad-pro
48. Wikipedia. (2021). Non-fungible token [Online]. Available at: https://en.wikipedia.org/wiki/Non-fungible_token
49. Copps, D.; Worlds. (2021).
50. Fotogrin, Michael, V., & Jonathan, W. (2021). Shutterstock.

Larry Schmitt is a Founder and Managing Partner of The Inovo Group, a provider of strategic innovation consulting services. Since co-founding Inovo in 2001, Larry has led its growth into a successful, recognized leader in the field of innovation. Larry draws upon years of experience in both large corporations and startups that wrestled with how to innovate. He has worked closely with Inovo's clients, including Cargill, Dow Chemical, Honeywell, Corning, United Health Group, ExxonMobil, Saint-Gobain, and DuPont among many others, all of whom draw upon the frameworks, methods, and tools he and others at Inovo have developed over the years to improve strategic innovation portfolios and capabilities. Larry regularly writes and speaks on innovation and related topics. He holds a PhD in computer science from the University of Wisconsin and a BS from the University of Michigan.
David Copps is known locally and internationally as a futurist, technologist, and visionary on the role that emerging technologies will play in transforming markets and the world. Dave has founded, launched, and sold two technology companies that have placed machine learning and artificial intelligence in hundreds of companies around the world. In 2017, Dave was recognized as Emerging Company CEO of the Year in Texas while serving as CEO of Brainspace Corporation, which was acquired by Cyxtera in 2017. Dave is currently the CEO of Worlds, a Spatial AI company that is re-inventing how organizations see and understand their worlds. Dave received his BA from the University of North Texas. He is an invited member of the Aspen Institute's Roundtable on AI and a frequent speaker at MIT's EmTech conferences as well as universities and technology incubators all over the world, including Capital Factory, Health Wildcatters, and the Dallas Entrepreneurs Center (The DEC) in his hometown of Dallas, Texas. When Dave is not being a geek, he enjoys collecting custom guitars and exotic cars, brewing craft beer, ocean sailing, and family time at his home on the island of Bequia in the Caribbean.
The Dimension of Markets for the Digital Twin
Max Blanchet
Abstract The market for digital twins is promising, as interest in this transformative technology continues to grow. However, research shows that actual use of digital twins is sparse, with adoption rates struggling to surpass 10% in most industries. Arguably the biggest reason for this lack of traction is a similar lack of sufficient digitization across companies and industries. Organizations' slow pace of digital transformation means the digital foundation necessary for digital twins to thrive is still lacking in most companies and industries, although front-runners and laggards are beginning to emerge. Simply put, digital maturity must increase substantially for companies to fully benefit from digital twins. When this happens, digital twin adoption should accelerate rapidly, and companies will begin to see how digital twins can play a crucial role in their efforts to create greater operational resilience; optimize supply chain networks, processes, and inventory; and foster bigger strides toward sustainability.

Keywords Digital Twin architecture · Product lifecycle management · Digital Twin prototype · Digital Twin continuity · Multi-actor distributed Digital Twins · Society 5.0 · Context management

The commercial benefits of using digital twins are wide-ranging and transformational. Chief among these are richer design options and rapid prototyping; significant production process efficiency and quality improvement; enhanced asset operational performance and life extension; supply chain scenario planning and resilience; and effective decommissioning planning and execution. Such benefits have no doubt played a big role in the decision by 100% of the world's top EV manufacturers and 90% of the top drug and healthcare laboratories to adopt digital twin solutions. However, despite the promise, the vast majority of private and public organizations globally have yet to pilot and scale such solutions [1], with current digital twin adoption rates averaging 8–10% across industries [2].

M. Blanchet (*)
Accenture, Global Supply Chain Strategy Lead, Paris, France

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_3
Fig. 1 Key digital twin capabilities and observed impacts on business (non-exhaustive)
That said, growth is on the horizon. In 2020, the global digital twin market was worth an estimated USD $5.4 billion and is projected to grow at a 36% CAGR over the next 5 years [3]. Future growth is expected to be led by the Transportation industry, where the baseline is already relatively high (30–40% current adoption rates for EV startups, and 60–0% for best-in-class OEMs at present). Significant growth is also anticipated in four additional industries:

• Construction (current adoption rate of about 1%)
• Electrical and Electronics (current adoption of less than 5%)
• Consumer Packaged Goods (current adoption of about 3–5%)
• Life Sciences (currently at about 5–10%, based on the Pharmaceuticals sector)
Given the growing ubiquity of digital twins over the next decade, they have the disruptive potential, if harnessed responsibly, to drive significant and sustainable change. Figure 1 illustrates some of the most prominent digital twin capabilities and their observed and potential impacts on a business.
1 Greater Digital Maturity Is Needed to Drive Digital Twin Adoption

Although significant growth in digital twin use is projected, the fact remains that digital twins haven't made significant inroads yet. Why not? A big reason for the limited adoption is that overall digital operations maturity in the market—a prerequisite for digital twin deployment and use—remains low, as an Accenture research study of 600 manufacturers around the world found. This study reveals that the average digital maturity of manufacturers' end-to-end operations overall is only 39%, on a scale where 100% indicates all capabilities are
deployed and rolled out. Most surveyed companies are past the proof-of-concept stage and are now in actual pilots with partial scaling up. The study identified a number of key differences among the industries and countries included in the study’s scope. For instance, the average digital maturity of upstream oil and gas, aerospace and defense, chemicals, and high-tech companies far outpaces that of enterprises in the life sciences and consumer goods and services industries. Not surprisingly, those mature industries are also more likely to be spending more to scale up their digital operations capabilities and seeing a greater return on their investment. We found similar disparities among countries, with the United States and Singapore being the most mature on average, spending the most, and generating the greatest value. The assessment also identified a small group of manufacturers that have separated from the pack. These companies, representing a cross-section of industries, illustrate how a deep commitment to digital technologies and solutions, coupled with significant investment and an unwavering focus on key enablers, can create substantial and sustainable value. Their experiences not only show that a substantial commitment to and investment in digital is worth it, but also offer other manufacturers guidance on what it takes to accelerate their digital transformation and begin reaping the benefits of a more connected and intelligent enterprise. In this section, we explore in more detail how mature manufacturers’ digital operations capabilities are, what manufacturers have been spending on digital and the returns they’re getting, and what enablers must be in place to successfully transform.
1.1 Digital Maturity Index: From PoCs to Scaling Up

To gauge end-to-end digital operations maturity, Accenture evaluated manufacturers on the extent to which they've deployed 40 key digital capabilities across their operations using a digital maturity index (see note 1 on Fig. 2). As mentioned, the average digital maturity across the research panel is 39% (with 100% indicating full deployment), which means companies have completed pilots for some capabilities and have begun to scale up. The capabilities in bold in Fig. 3 are the 11 that are being deployed most rapidly.

Digital maturity varies widely by industry (Fig. 4). The most mature capabilities tend to be found where digital or data-driven solutions are critical to industry performance, or where there's significant potential for productivity increases. Digital maturity lags in industries where there's little potential for productivity gains because operations are already lean, or where digital for operations isn't as high a priority as other concerns. For instance, highly efficient automotive OEMs today seem more focused on developing connected and electric cars than on digitizing their already automated assembly lines. Another factor inhibiting digital deployment in certain industries is when the overall product cost represents a relatively small share of sales, thus reducing the impact of cost reduction on the P&L.
Fig. 2 The Accenture digital maturity index
This is the case for consumer goods or life sciences, where digital operations are aimed more at improving flexibility, personalization, and time to market than at pure cost efficiency.

Individual countries also seem to be progressing at different speeds (Fig. 5). Digital maturity has accelerated in countries in which government programs backing Industry 4.0 were launched early, such as Germany, the United States, and other major European countries (France, Italy, the United Kingdom, and Spain). In these countries, which rolled out national programs in the early 2010s, either the industrial base is relatively small, which allows faster movement, or the speed of digital adoption in general is high. The converse is also true: countries with a large industrial base and established assets geared toward Industry 3.0, or late government support for Industry 4.0, are most likely to be lagging. Japan's relative lack of digital maturity may be explained by its entrenched, large, and automated manufacturing base with excellent practices, which is likely slow to transform. Moreover, Japan's Industry 4.0 program wasn't truly visible until after 2015, nearly 5 years later than Germany's. In China, lower digital maturity stems from a traditionally labor-intensive manufacturing base with simple, low-feature products, where firms are still prioritizing the adoption of automation and lean practices. That said, all countries have enacted post-COVID-19 programs to support their industries (which typically include initiatives to spur digital transformation), although numerous disparities exist in the extent of that support across countries.
Fig. 3 The 40 key digital capabilities for operations
1.2 Investment and ROI: More Begets More

When it comes to the financial dimension, size truly does matter. The Accenture study found a strong, inescapable correlation between the amount manufacturers invested in digital capabilities—both infrastructure and platforms—and the resulting impact on their operating income.
Fig. 4 Digital maturity by industry
As shown in Fig. 6, the scale effect on digital investments conveys an advantage to large companies, with those with the highest annual revenues enjoying a slightly larger average return. However, even smaller companies that invest substantially in digital (as a percentage of their overall sales) realize significant short- and mid-term savings, with an average mid-term impact of a 4% lift on operating income (as a percentage of their overall sales). Figure 6 outlines the planned level of annual investment in digital operations (covering digital platforms and infrastructure and all digital solutions) and the expected yearly savings. Short term denotes within the next 3–5 years and midterm beyond 5 years.

Digital operations is best thought of as a portfolio of discrete measures that generate savings. A good balance is when a specific year's savings fund the investments in the next set of solutions. In other words, done right, digital generates a rolling ROI of about 1 year, which makes the economics more attractive than classic IT programs with a long payback horizon.
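To make the self-funding dynamic concrete, the sketch below simulates a hypothetical portfolio in which each annual tranche of digital measures pays back in about a year and its savings bankroll the next tranche. The figures and the one-year payback are illustrative assumptions, not values from the study.

```python
# Illustrative only: a toy model of the self-funding ("rolling ROI") dynamic
# described above. Numbers are hypothetical, not drawn from the Accenture study.

def simulate_rolling_roi(initial_investment: float, payback_years: float, horizon: int) -> None:
    """Toy model: each year's tranche of digital measures pays back its cost
    over `payback_years`, and the resulting savings fund the next tranche."""
    tranche = initial_investment
    cumulative_savings = 0.0
    for year in range(1, horizon + 1):
        savings = tranche / payback_years      # run-rate savings generated by this tranche
        cumulative_savings += savings
        print(f"Year {year}: invest {tranche:5.1f}, savings {savings:5.1f}, "
              f"cumulative savings {cumulative_savings:5.1f}")
        tranche = savings                      # this year's savings fund next year's tranche

# With a ~1-year payback, the program funds itself after the first tranche.
simulate_rolling_roi(initial_investment=10.0, payback_years=1.0, horizon=5)
```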
Fig. 5 Digital maturity by country
We also found a significant relationship between current digital operations maturity and planned level of investment or expected savings when considering the research panel's industry (Fig. 7) and country. Those spending the most have the most mature digital capabilities and were most likely to reap greater benefits. These are, respectively, the upstream oil and gas, aerospace and defense, and high-tech industries; and the United States, Singapore, and India.

The overall economics provide a compelling argument in favor of a major commitment to digital. An average annual investment in digital representing 2.1% of sales in the short term and 3.4% in the medium term translates into massive boosts in operating income, asset utilization, and product digitization, and an equally impressive increase in workforce productivity (Fig. 8). Furthermore, most players investing heavily in digital are seeing a significant top-line impact in the form of greater sales from connected services (such as connected products, connected services around the product lifecycle, and digital logistics offerings).
Fig. 6 Digital investments and ROI by company size
Fig. 7 More mature industries are seeing higher returns as measured by operating income
Overall, by boosting asset utilization, workforce productivity, and service, digital transformation significantly improves ROCE through higher EBIT (from greater cost efficiency and better pricing of digital offerings) and reduced capital employed (from greater asset utilization and lower inventory). This is why digital transformation of operations can be a major creator of shareholder value. The bottom line: a complete digital transformation of operations requires a lot of money—but the payoff is equally substantial.
Fig. 8 Value creation from an end-to-end digital transformation
1.3 Change and Enablers: Skills, Leadership, and Governance

Although it generates significant value, digital transformation is a big financial commitment and, frankly, isn't easy. Successfully embracing digital operations transformation at scale requires significant attention to the three key enablers that contribute to digital readiness and, ultimately, significant ROI.

1. Skills: Digital capabilities must be staffed with the right number of people with the required skills (see note 2 on Fig. 9). The study indicates that these resources should represent around 1% of a company's staff in the mid-term and up to 1.8% in the future. As could be expected, larger companies in the study have larger teams with digital skills, on average, than smaller companies. However, the latter look to be aggressive in building such skills, with plans to boost the average size of their digitally skilled teams by 84% in the next 5 years—a much bigger increase than what's expected by the largest companies in our study (64%). Overall, companies plan to increase their digital capabilities teams by more than 75% (Fig. 9).
2. Leadership: Digital transformation can't be scaled if leaders don't embed it in their leadership role. Why? Because if leaders aren't fluent in using data-driven analysis to make decisions, digital transformations will fail to live up to their potential. Here, companies in the study have a lot of work still to do.
Fig. 9 Transformation readiness across key skills enablers
Only 13% of companies have more than half of their leaders trained to use analytics to help drive decisions (Fig. 10).
3. Governance: Digital transformation must be driven at the highest level of governance possible, but that's not common among a majority of companies in the study. In only 39% of companies is digital transformation led by the executive committee or board (Fig. 10).
1.4 Value Makers Versus Traditionalists

Just like industries and countries, individual companies aren't progressing at the same pace or getting the same results. The study identified a small group of companies—17% of the survey panel—that have highly mature digital operations capabilities that drive significant value (as measured by revenue and productivity).
Fig. 10 Transformation readiness across leadership and governance enablers
These companies, which the study dubbed Value Makers, have invested extensively in digital platforms and infrastructure (especially, as noted in Fig. 11, those involving advanced digital capabilities) and, consequently, have enjoyed a substantial, positive impact on their operating income. In contrast, a group called Traditionalists, which comprises 39% of the panel, have weak digital capabilities (including basic, foundational ones, per Fig. 11)—in large part because they've invested comparatively less in them—and, accordingly, have seen scant improvement in operating income.

As Fig. 11 indicates, Value Makers are winning the race for digital operations. Traditionalists still struggle to implement even rather fundamental capabilities, whereas Value Makers have moved beyond this stage and are now deploying more advanced capabilities. In other words, the race for digital operations transformation is creating greater polarization—and a big competitive gap—between Value Makers and Traditionalists. It also leaves Value Makers in a far better position to reap the benefits of digital twins at scale.
Fig. 11 The differences between Value Makers and Traditionalists in deploying advanced and foundational capabilities
Value Makers are a great example of how betting big leads to outsized returns. Value Makers are investing in digital, on average, at 3.1% of sales ($139 million) in the short term and 4.3% of sales ($440 million) in the midterm. Conversely, those figures for Traditionalists are 1.2% ($55 million) and 2.8% ($196 million), respectively.

What have Value Makers gotten for their money? Plenty. Value Makers lead Traditionalists by a large margin in improvements, both current and projected, in their overall operating income, Overall Equipment Effectiveness, and indirect headcount productivity. They're also far more likely to expect that new digital services for customers will account for more than half of their sales in the next 5 years.

When it comes to digital readiness, Value Makers also set the bar. They're far more likely than Traditionalists to have large teams of people with key digital skills,
be reskilling a majority of their employees in new domains, and have a structured digital academy in place for all employees (including leaders). Furthermore, most of Value Makers' leaders are trained to use analytics in their decision making, and the vast majority of Value Makers govern and steer their digital investments and transformation at the CXO level or above. The upshot: while the proper investment is critical, no amount of money spent on digital solutions and technologies can overcome substantial weaknesses in skills, leadership, and governance.
1.5 The Time for Experimenting Is Over

As the study shows, progress toward the vision of Industry 4.0 remains slow in most companies despite nearly a full decade having passed since its launch—and that's a big reason why digital twins haven't experienced widespread adoption. Only a select group of manufacturers, the Value Makers, have forged ambitiously ahead, and they're in the pole position to lead their respective industries for years to come. They are most likely better prepared to deliver extreme customer centricity and usage-based offerings, customizing locally to be in a better position to meet customers' needs precisely. They can be extremely agile and masters of execution, continually capitalizing on opportunities to help boost their bottom and top lines. And they most likely have a strong and open culture that could enable them to attract and retain the best and brightest people. In fact, the gap between Value Makers and Traditionalists is so large that, without immediate and significant action and investment, many Traditionalists may never be able to catch up.

The question is, why are so many manufacturers still working on proofs of concept or pilots when the value of full-scale digital transformation is known? Three key factors are inhibiting greater progress.

First, the vision for which solutions are best to apply within the different operations domains (e.g., product design, manufacturing, and supply chain) has taken time to emerge as manufacturers explored and experimented with various tools and trends. One year, cobotics was all the rage, then augmented reality, then IoT, then AI. The good news is that after years of learning about these technologies and associated solutions, industrial players now have a much better understanding of which are most relevant to their business. They also know the limits of those solutions. In other words, the dreaming phase is over.

Second, adopting new digital solutions across a real-world industrial enterprise is just very difficult. Time and again, we see examples of solutions deployed but no one using them. Why? Scaling up digital solutions requires instilling the accompanying new ways of working (meaning new standards), adapting solutions to the local environment (e.g., making them bug-free and user-friendly), and applying strong management attention to make it happen. It also requires a different scaling approach from what's commonly used.
Very often, manufacturers deploy specific digital solutions across sites or entities on a piece-by-piece basis—for example, implementing an analytics tool across all of an enterprise's factories. But in doing so, a company can't come close to seeing the full value of that isolated solution. To create the required change and value, manufacturers should concentrate on scaling up asset by asset—implementing a full digital solution set in one site. That engenders a true transformation of an asset, which not only creates far greater value, but also demonstrates the kind of results a company could achieve when it similarly transforms the enterprise as a whole.

Third, the reality is that Industry 4.0 still competes every day with "Industry 3.0." Why should manufacturers implement a sophisticated capability if they lack basic lean manufacturing practices, if capacity doesn't match demand, if more automation is needed, or if basic operational excellence standards aren't used? This is the case for many industrial players, which can generate greater short-term savings by readjusting an asset, boosting automation, or right-sizing some teams than by implementing sophisticated digital solutions.

An additional constraint is money—the investment companies can afford to fuel a transformation. As opposed to business investments—for example, those that generate direct sales—digital investments are about modernization and capability upgrades. When a business isn't exactly firing on all cylinders, modernization takes a back seat to improving the immediate business outlook.

That can't continue. Manufacturers should continue to set aside funds for digital transformation, just as they do for innovation, to prepare the company for the future. This decision must come from the executive committee or board, which must preserve this investment despite current business constraints or budget pressures.

The race for digital operations is truly well under way, and manufacturers that take too long to enter it are taking a big risk in the long term. This race will create winners out of the companies that tackle it and losers out of those that don't. It's time for manufacturers to stop experimenting and begin scaling so they're not left behind.
2 Key Digital Twin Use Cases to Drive Resilience, Optimization, and Sustainability

As the preceding study illustrates, the overall market is still far below the digital maturity needed to support widespread adoption and use of digital twins. However, the technology continues to gain momentum among companies that have aggressively digitized their operations, thus creating the foundation for digital twins to create a wide range of value. In this section, we explore some of the leading use cases for digital twins, how some leading companies are deploying them, and the potential value twins can generate in terms of resilience, optimization, and sustainability.
2.1 Stress-Testing for Greater Resilience

Everyone saw how COVID-19 threw global supply chains into chaos in the early days of the pandemic. Supply chain leaders simply weren't prepared for a disruption of this magnitude. They lacked an understanding of how significant disruptions could affect their operations and, more important, how best to respond to minimize the impact on the business. According to research [4] from the Institute for Supply Management (ISM), COVID-19 disrupted 75% of supply chains around the world—and was especially damaging because ISM also found [5] that 44% of companies didn't have contingency plans for China supply disruptions.

Digital twins are helping companies build greater resilience in their operations by serving as the foundation for supply chain stress tests that can help companies assess potential operational and financial risks and impacts created by major market disruptions, disasters, or other catastrophic events. Inspired by the banking industry stress tests developed in the wake of the 2008 financial crisis, these stress tests can help companies quantify supply chain resilience to catastrophic events in a single Resilience Score. With such tests, organizations can identify potential points of failure in their supply chain, assess their related financial exposure, and define appropriate mitigation strategies and actions.

Key to the stress test is the creation of a digital twin of an organization's supply chain. This enables the subsequent modeling of various low-probability but high-impact scenarios that would significantly disrupt the organization's ability to serve customers, shareholders, employees, and society. Such scenarios could include sudden spikes or drops in demand, shutdown of a major supplier or facility, scarcity of a critical raw material, or disruption of a key port. Stress tests can identify both the time it would take for a particular node in the supply chain to be restored to full functionality after a disruption (i.e., "time to recover") and the maximum duration the supply chain can match supply with demand after a disruption (i.e., "time to survive"). The insights provided by stress tests can help leaders anticipate where trouble may strike and make quick decisions about how to respond—for instance, ensuring critical supply, shifting production to different facilities to meet fast-changing demand, or focusing the product portfolio on the most crucial items.
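As a rough illustration of the two metrics just described, the sketch below checks a single, hypothetical supply chain node against a disruption scenario: it compares how long existing stock can cover demand ("time to survive") with how long the node needs to come back online ("time to recover"). All names and figures are invented for illustration and are not taken from any particular stress-testing tool.

```python
# Illustrative sketch only: a toy "time to survive" / "time to recover" check
# for one supply chain node, assuming constant demand and a fixed repair time.
# All names and figures are hypothetical.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    inventory_units: float            # stock on hand when the disruption hits
    downstream_demand_per_week: float
    time_to_recover_weeks: float      # how long until the node is back at full capacity

def time_to_survive(node: Node) -> float:
    """Weeks the network can keep matching demand from existing stock."""
    return node.inventory_units / node.downstream_demand_per_week

def is_resilient(node: Node) -> bool:
    """A node 'passes' the stress test if it can survive longer than it takes to recover."""
    return time_to_survive(node) >= node.time_to_recover_weeks

plant = Node("Tier-1 electronics supplier", inventory_units=1200,
             downstream_demand_per_week=400, time_to_recover_weeks=5)

print(f"Time to survive: {time_to_survive(plant):.1f} weeks")
print(f"Time to recover: {plant.time_to_recover_weeks:.1f} weeks")
print("Resilient to this scenario" if is_resilient(plant) else "Exposure: shortfall expected")
```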
2.2 Optimizing Key Areas of the Business

But contingency planning to mute the impact of uncertainty is certainly not the only use for a digital twin. Digital twins are also being used to dramatically improve supply chain performance through network, process, and inventory optimization. Typical benefits of such optimization are illustrated in Fig. 12.
Fig. 12 General benefits of digital twin-driven optimization
2.2.1 Network Optimization: Balance Between Service and Cost

Even in "normal" times, companies can encounter unexpected changes in demand or supply, and if their network's not prepared to accommodate these, supply chain and overall business performance can suffer. We define "network" as the combination of physical locations and supporting systems used to deliver products and services to end customers. With a digital twin, a company can assess how certain changes in demand and supply would affect the network. This can help to determine whether the right facilities and transportation capabilities are in the right places to, for example, effectively handle a surge in sales of a certain product or deal with a product shortage from a key supplier. If a company is launching a new product and needs to add nodes to its network, a digital twin also can help model what nodes should be added, and where, to get this new offering to market. And it can even help determine how to downsize the network—e.g., figure out which facilities can be closed without negatively affecting the business—to respond to slackening demand or a need to reduce network costs. Importantly, a digital twin also is valuable in helping a company understand the impact of changes to the network on customer service—to balance cost and service levels in a way that still meets customer expectations. And it can play a big role in helping to model how network design principles influence the company's carbon footprint and CO2 emissions, so the company can meet its sustainability targets without compromising cost and service.

2.2.2 Process Optimization: Higher Efficiency and Productivity

In many companies, processes have become increasingly complex and, therefore, less efficient and more costly. A digital twin can help a company take a deep look at key processes to understand where bottlenecks, time, waste, and inefficiencies are bogging down work, and model the outcome of specific targeted improvement interventions. This could include such things as eliminating certain steps on a manufacturing production line, adjusting product formulations to reduce the cost and improve the utility of a product, or redesigning pick-and-pack activities to minimize package handling. The result is greater efficiency, productivity, and capital utilization and lower operating costs.
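The kind of bottleneck analysis a process twin supports can be illustrated with a deliberately simple sketch: the line's throughput is capped by its slowest station, and the twin lets an improvement be tested virtually before the physical line is touched. The stations and rates below are hypothetical.

```python
# Illustrative sketch only: the kind of bottleneck check a process twin enables.
# Station names and processing rates (units per hour) are hypothetical.

stations = {
    "forming": 120,
    "assembly": 95,
    "inspection": 110,
    "pick_and_pack": 140,
}

line_throughput = min(stations.values())        # the line runs at the slowest step
bottleneck = min(stations, key=stations.get)

print(f"Bottleneck: {bottleneck} ({line_throughput} units/h)")

# Model a targeted intervention virtually, e.g., rebalancing work at the assembly station.
stations["assembly"] = 115
print(f"After the intervention, throughput rises to {min(stations.values())} units/h "
      f"and the bottleneck moves to {min(stations, key=stations.get)}")
```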
2.2.3 Inventory Optimization: Right Goods in the Right Place at the Right Cost

Inventory is an evergreen challenge for most companies: figuring out how much of what to keep, and where—all in a way that enables the company to maximize customer service at the lowest possible total cost. But juggling all the factors that go into that calculation is a complex exercise—especially for companies with hundreds of thousands of SKUs and customers spread across many locations and geographies. And the more variable demand is, the more inventory is needed to meet required service levels. A digital twin is uniquely fit to help here as well. It can enable a company to address a "single-echelon" challenge (optimizing inventory in a single warehouse) as well as a "multi-echelon" challenge (optimizing inventory across the entire network), taking into account demand forecasts to improve replenishment policies and adjust inventory levels to demand, avoiding stockouts while minimizing overall costs.

2.2.4 Optimization in Action

A growing number of companies are using digital twins to optimize key areas of their supply chains and improve decision making. Here are three examples.

One is a European postal company, which used a digital twin to inform the development of its 10-year business strategy designed to address a number of critical challenges the company faced, including decreasing letter volumes, rising parcel volumes, and an increasingly unfit network of processing sites, delivery offices, and vehicles. With a digital twin of its current situation, the company was able to iteratively test improvements (virtually) to the network until it found an optimal solution. The digital twin also enabled the company to create a new virtual Regional Delivery Office, with all its internal processes, employees, and shipping logic. Doing so gave the company a way to understand the significant cost savings it could get by consolidating delivery offices into the regional facility, and the optimal sequencing of that consolidation, as well as how consolidation would eliminate most of the vehicle constraints in the delivery network to provide a strong foundation for growth.

In a similar application, a capital city's port authority was looking to boost its ability to predict how an increase in cargo flow would impact the overall system. The authority created a digital twin that considers the different and related pathways for customs processes and 13 different types of customs inspections, and combines expected cargo volumes with actual ferry arrival times and freight data. Leveraging this powerful tool, the port authority was then able to model proposed future processes, conduct stress-testing to assess the effect of cargo increases on port performance, and develop what-if scenarios to identify potential process enhancements.
As a result, the port authority can now more intelligently develop strategies to eliminate bottlenecks and inefficiencies, reduce costs by optimizing resources, and assess the feasibility of new investments before they're even made.

A third example is the digital twin a player in the jet fuel supply chain built to help the company respond to the COVID-19 disruption and the resulting plunge in demand (not to mention jet fuel prices, which sank more than 30%). Facing such an unprecedented turn of events in the market and uncertainty around future price direction, the company used the digital twin to simulate the impact of changes in jet fuel demand on supply chain inventory, logistics, profit and loss, the balance sheet, and cash flow—and to identify the moves it should make to increase total profit while reducing risk and cost.
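To give a flavor of the network optimization described in Sect. 2.2.1, the sketch below solves a toy transportation problem: choose the cheapest shipment plan from two plants to three distribution centers subject to capacity and demand. A real supply chain twin solves far richer versions of this, with service levels, lead times, and emissions included; the plants, costs, and volumes here are invented.

```python
# Illustrative sketch only: a toy network-optimization step of the kind a supply
# chain twin runs many times over, posed as a plain transportation problem.
# Plants, distribution centers, costs, and volumes are all hypothetical.

import numpy as np
from scipy.optimize import linprog

plants = ["Lyon", "Gdansk"]              # supply capacity (units/week)
supply = np.array([500, 400])
dcs = ["Madrid", "Milan", "Hamburg"]     # weekly demand at each distribution center
demand = np.array([300, 350, 200])

# Shipping cost per unit from each plant (row) to each DC (column).
cost = np.array([
    [4.0, 3.0, 6.0],
    [6.0, 4.0, 2.0],
])

c = cost.flatten()                       # decision variables: flow on each plant->DC lane

# Each DC's demand must be met exactly.
A_eq = np.zeros((len(dcs), c.size))
for j in range(len(dcs)):
    A_eq[j, j::len(dcs)] = 1.0
b_eq = demand

# Each plant can ship at most its capacity.
A_ub = np.zeros((len(plants), c.size))
for i in range(len(plants)):
    A_ub[i, i * len(dcs):(i + 1) * len(dcs)] = 1.0
b_ub = supply

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
flows = res.x.reshape(len(plants), len(dcs))

print(f"Minimum weekly shipping cost: {res.fun:,.0f}")
for i, p in enumerate(plants):
    for j, d in enumerate(dcs):
        if flows[i, j] > 1e-6:
            print(f"  {p} -> {d}: {flows[i, j]:.0f} units")
```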
2.3 Fostering Big Strides in Sustainability

While a digital twin's value in driving resilience and optimization may be intuitive, twins also are playing an increasingly important role in an area where their use may not be so well understood: driving more sustainable operations, a need that has become increasingly urgent in the past decade. The environmental degradation inherent in our current models of production and consumption has reached critical levels. Continuing on this path is incredibly risky, as it could trigger non-linear, abrupt environmental change within planetary systems. In fact, we are already feeling the effects across all our ecosystems (Fig. 13).

To address this crisis, companies need to radically transform their systems of production and consumption. There are now less than 10 years left to achieve the UN Sustainable Development Goals (SDGs, also known as the Global Goals) set out by governments, businesses, and other stakeholders, and we are woefully off track. Incremental change is no longer an option.
Fig. 13 Key examples of environmental degradation
SDG progress reports have revealed that despite improvement in a number of areas on some of the Goals, progress has been slow or has even reversed, particularly following COVID-19. While we still stand a chance to change this trajectory, we need to understand that the next decade is critical. CEOs stand at the ready to ensure business plays its role in our collective response. In the fifth UN Global Compact-Accenture Strategy CEO Study on Sustainability from 2019 [6], nearly half of the participating CEOs said business would be the most important actor in the achievement of the Goals. Yet only 21% of CEOs stated they believe that business is already fulfilling that potential by contributing to the Goals. Not content with that status quo, CEOs agree the business community should be making a far greater contribution to achieving a significantly more sustainable world by 2030.

To accelerate this sustainability transformation toward more circular models, business experimentation with new digital, physical, and biological technologies has flourished in recent years. Some of these technologies have already matured considerably. The Internet of Things (IoT), for example, has become the new standard for devices and equipment. However, there has been less experimentation to date with the technology used to design, manufacture, and build most complex goods today. This technology is known as product lifecycle management (PLM), which has evolved significantly in recent years with the advent of production innovation platforms.

Digital twin technologies stand on the foundation provided by PLM, but enable much more disruptive forms of innovation. Digital twins are used to model complex systems, from cars to cities to human hearts, and to simulate their functioning with an accuracy that allows the user to go directly from a virtual model to creation, without spending the years it normally takes to prototype and incrementally improve on existing designs. This time-to-market speed and risk reduction for complex projects explains why digital twin technologies have been used in the development of 85% of the world's electric vehicles, more than 75% of global wind power, and breakthrough sustainability pilots such as electric furnaces, the world's first solar airplane, and new bio-materials [1]. Virtual universes allow users to design, test, and model disruptive new sustainable products and processes in record time.

The smart industrialization agenda has breathed new life and potential into the digital twin concept. Digital (or "virtual") twins provide a real-time virtual representation of a product, process, or whole system used to model, visualize, predict, and provide feedback on properties and performance, and are based on an underlying digital thread. The latter is the interconnected network of process and digital capabilities that create, communicate, and transact product information throughout the product lifecycle. This allows the virtual model to be continuously updated across the lifecycle of the physical asset (or across the parameters of production processes), with additional data gathered from real-world interactions (Fig. 14).
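The update loop implied by the digital thread can be sketched schematically: observations from the physical asset flow into the virtual model, which keeps both a current state and a history, and exposes simulation-style queries. This is a toy, in-memory illustration of the concept, not any vendor's implementation; the class, fields, and the energy formula are invented.

```python
# Schematic sketch only: the continuous-update loop implied by a digital thread.
# Class and field names are hypothetical; a real platform would sit on PLM,
# IoT ingestion, and simulation services rather than a Python dictionary.

from datetime import datetime, timezone

class BuildingTwin:
    """A minimal virtual model that stays in sync with its physical asset."""

    def __init__(self, asset_id: str):
        self.asset_id = asset_id
        self.state = {}        # latest known properties of the physical asset
        self.history = []      # the 'thread' of observations over the lifecycle

    def ingest(self, reading: dict) -> None:
        """Fold a real-world observation (sensor data, inspection, work order) into the model."""
        stamped = {"at": datetime.now(timezone.utc).isoformat(), **reading}
        self.history.append(stamped)
        self.state.update(reading)

    def predict_energy_use(self) -> float:
        """Placeholder for a simulation call; here, a naive heuristic for illustration (kWh)."""
        occupancy = self.state.get("occupancy", 0)
        outdoor_temp = self.state.get("outdoor_temp_c", 20.0)
        return 50.0 + 0.8 * occupancy + 2.5 * abs(21.0 - outdoor_temp)

twin = BuildingTwin("building-007")
twin.ingest({"occupancy": 420, "outdoor_temp_c": 31.0})
print(f"Predicted load: {twin.predict_energy_use():.0f} kWh")
```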
Fig. 14 How digital or virtual twins interact with the real world
Fig. 15 Focus use cases for the study across the in-scope industries
When used in this way, digital twins are a major driver of significant change toward greater sustainability. How significant? Analysis done by Accenture and Dassault Systèmes found that five use cases alone can unlock combined additional benefits of USD $1.3 trillion in economic value and 7.5 Gt CO2e of emission reductions by 2030. Although this value is significant, it's also important to note that this analysis relates only to this limited number of use cases. The total impact of the scaled deployment of digital twins across global economic systems is likely to deliver far greater upside potential. Furthermore, these benefits are most likely an underestimate, because key assumptions and sensitive parameters used in the analysis were based on the lower, conservative end of observed ranges. Figure 15 provides an overview of the five use cases analyzed quantitatively by industry. A more detailed discussion of these use cases follows.
2.4 Construction and Cities: Building Operational Efficiency Optimization Enabled by Digital Twin Technologies

The construction industry is estimated to be worth $8 trillion worldwide, or 10% of global GDP, and is one of the largest sectors globally [7]. Additionally, it's a key source of demand for materials and resources, which creates significant environmental strain and reliance [8]. From a sustainability perspective, commercial and residential buildings currently account for about 40% of global energy demand (60% of the world's electricity) and 25% of our global water usage, and are responsible for approximately one-third of global GHG emissions [7]. And these demands are only set to increase. Current estimates suggest that by 2030, there will be 706 cities with at least one million inhabitants—up nearly 30% from 2018 [9].

Despite these challenges, the spatial concentration of people and economic activities has potential upsides, as it facilitates at-scale deployment of solutions. For example, urban buildings offer significant potential for achieving substantial GHG emission reductions globally. Energy consumption in buildings can be reduced by 30% to 80% using proven and commercially available digital twin technologies, often within the broader framework of smart cities (see footnote 1).

In this context, a digital twin of a physical building behaves like its real-world twin, connecting buildings with energy and transport systems. 3D simulation and modeling software, real-time data, and analytics enable the optimization of a building's operational performance and sustainability throughout its lifecycle. Digital twins are also a data resource that can improve the design of new assets, specify as-is asset condition, and run "what if" scenarios. Advanced twins use two-way digital-physical interactions, allowing for remote and even autonomous asset control. While twins can be created for existing and future buildings, this use case models the potential of implementing digital twins in new construction globally between 2020 and 2030.

For this use case, we have prioritized and focused on two value drivers where the impact is most evident (see footnote 2): building operating cost reductions and improved energy management. Analysis revealed this use case could drive USD $288 billion in incremental savings by reducing building operating costs through lower energy consumption as well as lower maintenance, planning, and commissioning costs. It also can cut building CO2e operations emissions by 6.9 Gt as a result of improved energy management (12,032 TWh of savings).
1. Implementing technology solutions at the systems level to improve urban energy efficiency, transport, and public services.
2. The analysis is based on a comparison between a business-as-usual scenario (what is likely to happen anyway) and an accelerated scenario in which technology adoption rates increase from 9% to 30% by 2030, based on geography, building type, and age. The scope of the analysis is global, inclusive of commercial and residential new construction and existing building stock, using current building operative energy intensity averages as a baseline; cumulative output over a 10-year period.
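The relationship between the energy and emissions figures quoted above is simple arithmetic, illustrated below with an assumed average emission factor (roughly the value implied by pairing 12,032 TWh with 6.9 Gt CO2e); the factor is an assumption for illustration, not an official conversion.

```python
# Illustrative arithmetic only: converting avoided building energy use into avoided
# CO2e. The emission factor is an assumption (roughly the value implied by the
# chapter's 12,032 TWh and 6.9 Gt figures), not an official conversion.

ENERGY_SAVED_TWH = 12_032
EMISSION_FACTOR_KG_PER_KWH = 0.57    # assumed average grid/heat mix factor

kwh_saved = ENERGY_SAVED_TWH * 1e9                        # 1 TWh = 1e9 kWh
co2e_gt = kwh_saved * EMISSION_FACTOR_KG_PER_KWH / 1e12   # kg -> Gt

print(f"Avoided emissions: about {co2e_gt:.1f} Gt CO2e")  # ~6.9 Gt
```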
2.4.1 Case Study: Aden, China, Facility

Aden is a leading integrated facility management service provider that has expanded from traditional facility management services to asset management and energy services. It recognized that digital twins and analytics are critical to this transformation. Aden has created a digital twin for one of the commercial centers in Chengdu, China. The twin monitors, aggregates, and interprets data to plan and execute inspection, maintenance, and repair activities. 3D simulations of the behavior of the building systems are used to predict and optimize energy consumption under different operating conditions. Expected benefits from this project include a 20% reduction in annual energy consumption, lower water usage and waste generation, and improved health and safety performance.
2.5 Consumer Packaged Goods: Sustainable Product Development Supported by LCA-Based 3D Modeling and Simulation

The Consumer Packaged Goods (CPG) industry currently accounts for two-thirds of international trade volumes and represents 10% of the national GDP in the United States [10], and it is closely tied to many other industries, such as agriculture, chemicals, oil and gas, and mining and natural resources. Due to its size, the industry also faces significant sustainability challenges. Agriculture (including crop and livestock production), forestry, and land use account for nearly one-quarter of global GHG emissions, and one-third of global food production is wasted across the value chain [11].

Digital twin technologies offer the potential to limit resource use and enable cross-functional collaboration from R&D to Marketing and back, helping establish the base for a new way to approach sustainability by design—which is critical, as design decisions can be linked to 80% of a product's environmental impact [12]. 3D modeling and simulation technologies can also help enable the sustainable design and manufacture of products by incorporating lifecycle footprint data and visibility. Here, we concentrate on product development and the integration of lifecycle assessment (LCA) footprint data (see footnote 3) within 3D modeling and simulation tools. This use case focuses on the analytical value of digital twins and how the technology enables the integration of sustainability objectives at the start of the product lifecycle.
3. Data on the environmental impacts associated with all the stages of the lifecycle of a product, process, or service, and/or the raw materials used to make it.
A significant portion of a product’s environmental impacts is determined by the decisions made in the early stages of design; this is also where 60–85% of the product’s cost is determined [13]. Different eco-design tools exist that use the LCA principle to support product and service decisions; however, they are usually very complex and time-consuming. More importantly, existing tools can only be used after concept development and design have already advanced significantly, which limits the “menu of options” available to decision-makers. Virtual prototyping also allows for faster design iterations and reduces the need for physical tests, driving significant CO2 benefits. For this use case, we have prioritized three value drivers where the impact is most evident: raw material cost reduction; product development cost reduction; and reduced embedded carbon footprint. The result: a reduction of USD $131 billion in raw material usage costs and $6 billion in product development costs, and 281 Mt CO2e of emission reductions from better LCA output visibility and improved decision-making. This analysis is based on the global CPG industry and a comparison between a business-as-usual scenario (what’s likely to happen anyway) and an accelerated scenario where we have increased use case deployment rates to the maximum feasible level by 2030. Publicly available case studies do not yet exist, as this is a unique solution combining sustainability impact analysis with 3D modeling and design tools that is still in the early days of testing.
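As an illustration of how LCA footprint data could be surfaced inside a 3D design tool, the sketch below compares the embedded carbon of two hypothetical design variants from their bills of materials. The material names, masses, and emission factors are assumptions for illustration only, not values from the study.

```python
# Hedged sketch: comparing design variants by embedded carbon using
# LCA-style emission factors. Factors and masses are illustrative only.

EMISSION_FACTORS = {          # kg CO2e per kg of material (assumed values)
    "virgin_plastic": 3.1,
    "recycled_plastic": 1.2,
    "aluminium": 8.2,
    "cardboard": 0.8,
}

def embedded_carbon(bill_of_materials: dict) -> float:
    """Sum kg CO2e for a bill of materials given as {material: mass_kg}."""
    return sum(mass * EMISSION_FACTORS[m] for m, mass in bill_of_materials.items())

variant_a = {"virgin_plastic": 0.20, "aluminium": 0.05, "cardboard": 0.10}
variant_b = {"recycled_plastic": 0.20, "aluminium": 0.03, "cardboard": 0.10}

for name, bom in [("A", variant_a), ("B", variant_b)]:
    print(f"Variant {name}: {embedded_carbon(bom):.2f} kg CO2e per unit")
```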
2.6 Transportation and Mobility: Product Design, Prototyping, and Testing with Digital Twin Technologies
In many developed countries, transportation accounts for 6–12% of GDP, and logistics alone can account for 6–25% of GDP [14]. But emissions from transport, broadly comprising road, rail, air, and marine, accounted for about 25% of global CO2 emissions in 2016 [15], and they are also projected to grow faster than those of any other sector. This poses a key challenge for efforts to decarbonize the global economy. Research suggests that zero-emission and autonomous vehicles both have a critical role to play if we are to achieve global GHG reduction targets [16], and digital twins have a long history in automotive applications [17]. It was estimated that by the end of 2020, 65% of automotive manufacturers would use simulation and virtual twins to operate products and assets [18]. Case studies suggest that digital twin technologies accelerate time-to-market and help bring costs down for new drivetrains, lightweight body designs, and EV batteries [19], and are indispensable in the development of autonomous transportation [20]. Here, we focus on avoiding prototyping and physical testing by using 3D modeling and virtual simulation technologies when new vehicles are designed, prototyped, and tested. With a digital twin, an OEM can test multiple designs and features,
eliminating many aspects of prototype testing at the part and vehicle level to help determine how the design measures up against relevant policies, standards, and regulations. Typically, OEMs use hundreds of test vehicles per model across several models each year (depending on whether the design changes are major or minor). These can be drastically reduced if a digital twin is used during the early product development stage, leading to significant avoidance of waste—both in terms of materials and product development costs. In addition, the use of digital twins in new vehicle development can drive other production costs down and shorten overall time to market considerably. It has even been linked to reductions in costly vehicle recalls [21]. Finally, digital twins are helping to enable faster development of autonomous vehicles with a significantly reduced carbon footprint by replacing a large portion of the total test mileage required with simulation. For this use case, we have prioritized and focused on four value drivers where the impact is most evident4: AV development cost avoidance; product development cost reduction; AV development emissions avoidance; and emission reduction through decreased physical testing. Analysis found that digital twins in this use case can result in $429 billion in cost avoidance in autonomous vehicle development via simulation and USD $261 billion in product development cost reductions. They also can avoid or reduce CO2e emissions by 230 Mt across autonomous vehicle development and physical prototypes and test vehicles.
2.6.1 Case Study: Large European OEM, Virtual Design and Verification
Automotive companies are under constant pressure to produce better cars that meet increasingly stringent legal requirements for safety and environmental sustainability, as well as growing consumer demands, and to bring them to market at speed and scale. As a result, OEMs’ vehicle design and development processes have evolved from ones that previously incorporated key milestones involving physical prototypes to ones that seek to largely eliminate physical prototypes and the associated physical tests. As part of this shift, crash simulation software can now accurately predict detailed behaviors that are known to influence passive safety criteria. A large European OEM has achieved the following improvements based on virtual design and verification using digital twin technologies:
• Reduced product development time by months
4 The analysis is based on quantifying the emissions avoidance contributions from virtualizing conventional passenger vehicle development and testing (within the limits defined by local regulation), and the use of simulated driving for the development of autonomous passenger vehicles, globally and out to 2030. The cost and emissions savings for simulating autonomous driving have been estimated using EVs as a reference point. The business impact is significant, and about 60% of it is cost avoidance attributed to physical AV testing.
• Ability to accurately predict localized effects, such as material and connection failure, leading to improved quality
• For vehicle models with limited design updates, a 70–100% reduction in physical prototypes
• For some models, physical prototypes were eliminated altogether
2.7 Life Sciences: Manufacturing Plant Optimization for Pharmaceutical Products with Process Virtual Twins
Life sciences is an umbrella term for organizations that work to improve life [22], and it broadly encompasses pharmaceuticals, biopharmaceuticals, biotech, and medtech. It is also linked to a considerable share of venture capital flows [23] and is characterized by one of the highest levels of R&D spending as a proportion of revenues in the private sector [24]. There is a growing recognition that the pharmaceutical industry, considered to be a medium-impact sector,5 must do more to improve its sustainability performance [25]. Data suggests that the pharmaceutical industry’s GHG emissions are increasing, despite efforts to decarbonize, due to increasing drug demand globally [26]. Moreover, analysis of emissions per million dollars of revenue finds that the global pharmaceutical industry is approximately 55% more emissions intensive than the automotive industry [27]. Digital twin applications in production plants can drive benefits for the environment. For example, botanical pharmaceuticals manufacturing can achieve significant process time reductions (factors of 5–20), resulting in cost of goods reductions (factors of 2–10) and GHG emissions abatement (factors of 4–20) [26]. In this use case, the digital twin is created for the production process. The technological building blocks that enable such solutions include IoT, advanced analytics, and machine learning. Chemical mixing processes and the use of solvents are among the major drivers of process-related waste and emissions in pharmaceuticals manufacturing [28]. Simulations of these processes can enable scientists and plant operators to run multiple scenarios to find the optimal configuration, improve speed and accuracy, and reduce waste, including the related emissions. In addition, recycling solvents, using less fresh solvent, or burning less solvent waste has been shown to reduce total emissions for a process significantly [28]—something that process digital twins can also help address.
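The following sketch illustrates, in very simplified form, the kind of scenario screening described above: a process twin evaluating candidate mixing and solvent configurations for their solvent-related emissions per kilogram of product before anything is run in the physical plant. The configurations, yields, and emission factor are hypothetical.

```python
# Hedged sketch: screening hypothetical process configurations with a
# process twin to minimize fresh-solvent use and related emissions.

candidates = [
    # (name, solvent_litres_per_batch, batch_yield_kg, solvent_recycled_share)
    ("baseline",     1200.0, 40.0, 0.10),
    ("low_solvent",   900.0, 38.0, 0.10),
    ("high_recycle", 1200.0, 40.0, 0.60),
    ("combined",      900.0, 39.0, 0.60),
]

SOLVENT_CO2E_PER_LITRE = 2.5  # assumed cradle-to-gate plus disposal factor

def emissions_per_kg(solvent_l: float, yield_kg: float, recycled: float) -> float:
    """kg CO2e attributable to fresh solvent per kg of product."""
    fresh_solvent = solvent_l * (1.0 - recycled)
    return fresh_solvent * SOLVENT_CO2E_PER_LITRE / yield_kg

best = min(candidates, key=lambda c: emissions_per_kg(c[1], c[2], c[3]))
for name, solvent, yld, rec in candidates:
    print(f"{name:12s}: {emissions_per_kg(solvent, yld, rec):6.1f} kg CO2e per kg")
print(f"Preferred configuration: {best[0]}")
```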
5 Pharma industry emissions are estimated to be about 52 Mt CO2e globally for Scope 1 and 2 emissions (i.e., not accounting for indirect value chain emissions, which would increase that figure substantially); taking a WRI estimate of global emissions in 2016 of 46.1 Gt, this equates to 0.11% of global CO2e emissions.
For this use case, we have prioritized and focused on two value drivers where the impact is most evident6: reduction in cost of goods sold and reduced embedded carbon footprint. Our findings: digital twins of the production process can drive a USD $106 billion reduction in cost of goods sold due to lower operating expenses (thanks to accelerated time to market, a reduced material and energy cost base, and improved quality in the production process). They also can cut production GHG emissions by 61 Mt due to efficiency improvements and lower solvent and material usage.
2.7.1 Case Study: Sanofi’s Framingham Lighthouse Facility
The Framingham, Massachusetts, production facility of global pharmaceutical manufacturer Sanofi is a digitally enabled, continuous manufacturing facility where the production process is connected with R&D. Digital twin technology is used to optimize remote manufacturing through real-time data capture and analysis. The whole industrial process is digitalized and paperless, making it 80 times more productive than a traditional factory. It can make medicines in less time for twice the number of patients, all within a smaller environmental footprint. Observed performance indicator improvements include an 80% reduction in energy consumption and CO2e emissions per year, a 91% reduction in water footprint, a 94% reduction in the use of chemicals, and 321 tons of waste reduction per year [29].
6 The analysis is based on the global pharma industry and a comparison between a business-as-usual scenario (what is likely to happen anyway) and an accelerated scenario (where the use case adoption levels by 2030 could be 20% higher than the BAU adoption). While plants may have partial adoption of process twins for selected unit operations, the estimation has been calculated for virtual twins of complete processes. Cost structures for generics and branded pharmaceuticals are significantly different, and these have been accounted for while estimating the material and production cost savings.
2.8 Electrical and Electronics: Waste Electric and Electronic Equipment (WEEE) Value Recovery Supported by Digital Continuity
The electrical and electronics industry holds significant importance in today’s economy—the consumer electronics sector alone is estimated to have a market value of $1 trillion worldwide [30]. These technologies have also become an inseparable part of modern life—for instance, half of the world’s people now own a smartphone [31]. But the industry faces sustainability challenges such as the high rate of device replacement. In fact, the manufacturing stage of electronic equipment alone is responsible for more than one-third of the associated CO2e lifecycle emissions [32].
Appropriate disposal and recycling of products present further challenges. With e-waste officially the fastest-growing waste stream in the world [33], it is imperative to improve value recovery, reduce GHG emissions intensity, and limit risks to human health.7 This is an especially grave challenge if we consider that in 2019, only 17.4% of the 53.6 million tons of e-waste generated was properly collected, disposed of, and recycled [34]. Digital twins can help product designers embed and follow circular economy principles throughout each stage of design. Research has also focused on exploring how twin technologies can help address the e-waste problem [35, 36]. This use case focuses on the role of digital twin technology in better managing e-waste, looking at how the technology can help extend product life by better facilitating repairs and reuse, and increase overall e-waste recycling rates by making information on material and chemical content available to downstream value chain participants. In this application, the technology supports the manufacturing and remanufacturing or repair of products from both process optimization and data continuity perspectives. It provides a virtual record reflecting the actual status of a device in terms of the health and performance of its components, which can support repair process planning. Additionally, by the time a device reaches recyclers, a significant amount of data and knowledge has typically been lost—particularly from the product development, manufacturing, and service life stages. Digital twins, through enhanced digital continuity, can enable a constant flow of information between value chain participants, allowing the recycler to initiate appropriate process steps without the need for additional tests or inspection [37]. For this use case, we have prioritized and focused on three value drivers where the impact is most evident8: added value from equipment re-use and refurbishment; emissions reduction through reduced refrigerant release; and emissions avoided through product life extension. Analysis revealed companies could generate USD $73 billion more in revenue from greater refurbishment and re-use of equipment and less recycling for material recovery. They also could reduce CO2e emissions associated with the release of refrigerants by 31 Mt through better handling of relevant WEEE, and avoid an additional 5 Mt in emissions by cutting the total amount of informally processed e-waste and the associated negative environmental impacts.
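A minimal sketch of what such a digital-continuity record might look like is shown below; the fields and the simple refurbish-or-recycle rule are illustrative assumptions, not a description of any particular vendor's schema.

```python
# Hedged sketch: a digital-continuity record for a device reaching end of life,
# letting a recycler decide on reuse vs. material recovery without re-inspection.
from dataclasses import dataclass

@dataclass
class DeviceRecord:
    serial_number: str
    model: str
    materials: dict            # component -> material/chemical content notes
    battery_health_pct: float  # reported by the device's digital twin
    charge_cycles: int
    contains_regulated_parts: bool

    def recommended_route(self) -> str:
        """Very simplified routing rule for illustration only."""
        if self.battery_health_pct > 80 and self.charge_cycles < 500:
            return "refurbish_and_resell"
        if self.contains_regulated_parts:
            return "manual_depollution_then_recycle"
        return "automated_material_recovery"

record = DeviceRecord(
    serial_number="SN-0001",
    model="ExamplePhone 4",
    materials={"housing": "PC/ABS with brominated flame retardant",
               "display": "no mercury backlight"},
    battery_health_pct=86.0,
    charge_cycles=310,
    contains_regulated_parts=False,
)
print(record.recommended_route())  # -> refurbish_and_resell
```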
7 Informal recycling of electric and electronic waste has been linked to worker exposure to toxic fumes of various heavy metals through inhalation and skin contact.
8 The analysis is based on an increase in the formal handling of WEEE globally from the current level of 17% to 43% by 2030 (the latest estimate for formal handling in Europe). A further improvement realized by adopting digital threads is that, for the WEEE that is formally handled, the level of refurbishment and reuse can increase significantly given better information about the service life and material composition of the product. While e-waste categories such as smartphones have high formal recovery and refurbishment rates and value generation from reuse, we have assumed conservative values that are representative of the overall stock of e-waste.
2.8.1 Case Study: Circularise
Circularise is a Dutch start-up focused on commercializing blockchain-based transparency and traceability technologies for the circular economy. The company’s solution enables a wide range of stakeholders across the value chain—such as mining companies, electronics manufacturers, collection services, and recycling companies—to share information on product material content and flows. The final output is a QR code that provides important data for recyclers. For example, some computer monitors have a mercury lamp that needs to be removed; however, because no one knows which monitors are fitted with these lamps, all are often opened by hand for inspection. The same concept can be applied to electronic plastic components, many of which contain plasticizers and stabilizers that become regulated over time and pose a barrier to recycling when they are difficult to identify and measure. The start-up has also developed a proprietary methodology to safely manage IP rights and avoid the sharing of commercially sensitive information within the industry ecosystem [38].
3 Conclusion There’s no doubt that digital twins have significant potential to transform how companies do business—particularly in their operations, which are growing increasingly complex and difficult to manage as companies expand their global footprint and customers become more diverse. As noted in this chapter, three areas where twins can have a major impact are in creating greater operational resilience; optimizing supply chain networks, processes, and inventory; and fostering bigger strides toward sustainability. For digital twins to fulfill this potential, however, companies need to make much greater progress toward digitizing their overall businesses. A high degree of digital operational maturity is necessary for digital twins to function and thrive; without it, the infrastructure and access to the data twins need simply aren’t there. A focus on making substantial investments in the right digital capabilities, combined with attention to the key enablers necessary for successful digitalization, will be especially important for companies that want to truly capitalize on digital twins to propel their business forward in the coming years.
References
1. Source: Dassault Systemes estimates, 2020.
2. Source: Accenture and Dassault Systemes research based on commercial data, 2020.
3. Source: Dassault Systemes, Global Market Insight study, 2019.
4. Lambert, L. (2020). 75% of companies report coronavirus has disrupted their supply chains. Fortune [online]. Available at: https://fortune.com/2020/03/11/75-of-companies-report-coronavirus-has-disrupted-their-supply-chains/
5. Leonard, M. (2020). 44% of supply chain pros have no plan for China supply disruption. Supplychaindive.com. Available at: https://www.supplychaindive.com/news/44-of-supply-chain-pros-have-no-plan-for-china-supply-disruption/573899/
6. Accenture. (2019). A decade to deliver [online]. Available at: https://www.accenture.com/_acnmedia/pdf-109/accenture-ungc-ceo-study.pdf
7. UNEP. (n.d.). Energy efficiency for buildings [online]. Available at: https://www.euenergycentre.org/images/unep%20info%20sheet%20-%20ee%20buildings.pdf
8. OECD. (2018). Raw materials use to double by 2060 with severe environmental consequences – OECD [online]. Available at: https://www.oecd.org/environment/raw-materials-use-to-double-by-2060-with-severe-environmental-consequences.htm
9. UN. (2018). The world’s cities in 2018 [online]. Available at: https://www.un.org/en/events/citiesday/assets/pdf/the_worlds_cities_in_2018_data_booklet.pdf
10. Consumer Brands Association. (2020). Industry impact [online]. Available at: https://consumerbrandsassociation.org/industryimpact/
11. Lacy, P., Long, J., & Spindler, W. (2020). The circular economy handbook: Realizing the circular advantage. Palgrave Macmillan.
12. EU Science Hub – European Commission. (2020). Sustainable product policy [online]. Available at: https://ec.europa.eu/jrc/en/researchtopic/sustainable-product-policy
13. Agudelo, L., Mejía-Gutiérrez, R., Nadeau, J., & Pailhès, J. (2017). Life cycle analysis in preliminary design stages [online]. Available at: https://hal.archives-ouvertes.fr/hal-01066385/document
14. Rodrigue, J., & Notteboom, T. (2020). Transportation and economic development. In J. Rodrigue (Ed.), The geography of transport systems (5th ed.). Routledge.
15. World Resources Institute. (2019). Everything you need to know about the fastest growing source of global emissions: Transport [online]. Available at: https://www.wri.org/blog/2019/10/everythingyou-need-know-about-fastest-growing-source-global-emissionstransport
16. Williams, E., Das, V., & Fisher, A. (2020). Assessing the sustainability implications of autonomous vehicles: Recommendations for research community practice [online]. Available at: https://www.mdpi.com/2071-1050/12/5/1902/pdf
17. DHL Trend Research. (2019). Digital twins in logistics [online]. Available at: https://www.dhl.com/content/dam/dhl/global/core/documents/pdf/glo-core-digital-twins-in-logistics.pdf
18. Altran. (2020). Digital twins: Creating digital operations today to deliver business value tomorrow [online]. Available at: https://www.altran.com/as-content/uploads/sites/5/2019/09/digital-twin-povwhitepaper_v7.pdf
19. General Electric. (2016). This “digital twin” of a car battery could deliver new hybrid vehicle into your garage | GE News [online]. Available at: https://www.ge.com/news/reports/scientists-built-adigital-twin-of-a-car-battery-to-make-it-last-longer
20. Etherington, D. (2019). Techcrunch is now a part of Verizon media [online]. Techcrunch.com.
Available at: https://techcrunch.com/2019/07/10/waymo-has-now-driven-10-Billion-autonomousmiles-in-simulation/?guccounter=1
21. Tata Consultancy Services. (2018). Digital twin in the automotive industry: Driving physical-digital convergence [online]. Available at: https://www.tcs.com/content/dam/tcs/pdf/Industries/manufacturing/abstract/industry-4-0-and-digital-twin.pdf
22. Linchpin. (2020). Linchpin: Trends transforming the life sciences industry outlook in 2021 [online]. Available at: https://linchpinseo.com/trends-in-the-life-sciences-industry/
23. Wall Street Journal. (2020). Tracking venture capital investment by sector [online]. Available at: https://graphics.wsj.com/venturecapital-deals/
24. Cushman and Wakefield. (2020). Cushman and Wakefield’s life sciences 2020: The future is here | United States | Cushman and Wakefield [online]. Available at: https://www.cushmanwakefield.com/en/united-states/insights/life-science-report
25. Neville, S. (2019). Pharma finds its feet in fight against climate change [online]. Financial Times. Available at: https://www.ft.com/content/d672b65a-fe30-11e8-aebf-99e208d3e521
26. Schmidt, A., Uhlenbrock, L., & Strube, J. (2020). Technical potential for energy and GWP reduction in chemical–pharmaceutical industry in Germany and EU—Focused on biologics and botanicals manufacturing. PRO, 8(7), 818.
27. Belkhir, L., & Elmeligi, A. (2019). Carbon footprint of the global pharmaceutical industry and relative impact of its major players. Journal of Cleaner Production, 214, 185–194.
28. Kopach, M. (2012). The green chemistry approach to pharma manufacturing. Innovations in Pharmaceutical Technology [online]. Available at: http://www.iptonline.com/articles/public/ACSGreenChemistry.pdf
29. Sanofi. (2020). Factory of the future [online]. Available at: https://www.sanofi.com/en/about-us/our-stories/sanofi-takes-a-step-into-the-future-of-making-medicine
30. Wadhwani, P., & Saha, P. (2020). Consumer Electronics Market Size By Product (Audio and Video Equipment [Personal, Professional], Major Household Appliance, Small Household Appliance, Digital Photo Equipment [Personal, Professional]), By Application (Personal, Professional), Industry Analysis Report, Regional Outlook, Growth Potential, Competitive Market Share and Forecast, 2020–2026 [online]. Available at: https://www.gminsights.com/industry-analysis/consumer-electronics-market
31. BankMyCell. (2020). How many smartphones are in the world? [online]. Available at: https://www.bankmycell.com/blog/how-manyphones-are-in-the-world
32. Bordage, F. (2019). The environmental footprint of the digital world. GreenIT.fr.
33. The World Economic Forum. (2019). A new circular vision for electronics: Time for a global reboot [online]. Available at: http://www3.weforum.org/docs/WEF_A_New_Circular_Vision_for_Electronics.pdf
34. World Economic Forum. (2019). Global electronic waste up 21% in five years, and recycling isn’t keeping up [online]. Available at: https://www.weforum.org/agenda/2020/07/global-electronic-wasterecycling-management/
35. Rocca, R., Rosa, P., Sassanelli, C., Fumagalli, L., & Terzi, S. (2020). Integrating virtual reality and digital twin in circular economy practices: A laboratory application case. Sustainability, 12(6), 2286.
36. Wang, X., & Wang, L. (2018). Digital twin-based WEEE recycling, recovery and remanufacturing in the background of Industry 4.0. International Journal of Production Research, 57(12), 3892–3902.
37. Ardente, F., & Mathieux, F. (2014). Recycling of electronic displays: Analysis of pre-processing and potential ecodesign improvements. Resources, Conservation and Recycling, 92, 158–171.
38. Wassink, J. (2018). Circularise uses blockchain technology to trace raw materials [online]. TU Delft. Available at: https://www.tudelft.nl/en/delft-outlook/articles/circularise-uses-blockchain-technology-totrace-raw-materials/
Max Blanchet is Senior Managing Director within Accenture Strategy and leads the Global Supply Chain Strategy practice. With 30 years of experience in strategy consulting, Max Blanchet has supported numerous companies in their industrial transformation, as well as public authorities and industry associations, during which he earned recognition as an expert on reindustrialization topics. He advises the top management of leading corporates in various industry sectors, including the Automotive, Aerospace & Defense, Process Industries, and Industrial Equipment sectors. More recently, he has specialized in digital transformation topics covering all functions, from Design & Engineering to Manufacturing and Supply Chain. Max Blanchet wrote two books, «L’Industrie France décomplexée» (2012, Lignes de Repères) and «L’Industrie 4.0: nouvelle donne économique» (2015, Lignes de Repères), and has published numerous thought leadership pieces. He regularly speaks at industry conferences and events.
Digital Twins: Past, Present, and Future Michael W. Grieves
Abstract The Digital Twin (DT) is a concept introduced at the beginning of the twenty-first century that did not gain traction until the middle of the last decade. It is a concept that was first adopted for tangible industrial products and has since expanded to all manner of products and services. This includes not only application to inanimate entities but also to the biological conditions of people. Digital Twins are rapidly moving into the intangible realm of processes and abstract ideas. The Digital Twin consists of different types that together form a framework for the entire lifecycle of these entities. While the Digital Twin today is a concept that is implemented by its users, the prediction is that it will evolve into an intelligent platform. This will allow work to move from the physical world into the virtual world, with major impacts on efficiency and effectiveness.
Keywords Digital Twin · Digital Twin Aggregate · Digital Twin Instance · Digital Twin history · Digital Twin Prototype · Intelligent Digital Twin · Physical Twin · PLM · Product Lifecycle Management
1 Introduction
My interest in what would become Digital Twins (DTs) actually dates to the early 70s. I had been fortunate enough to be selected, in the summer before my senior year at my Catholic high school, Cabrini High, for a National Science Foundation program at Oakland University in Rochester, Michigan. This program drew students from all over the Detroit area and was called a “Math Camp”. Of the three courses at the “camp”, two were about math, but one was on programming. We were taught to program in Fortran on the university’s IBM 1620. Because of that, by the end of my freshman year at the University of Detroit, I was
working as a systems programmer on timesharing operating systems and compilers for a company called Applied Computer Timesharing, or ACTS Computing. ACTS had two different timesharing systems, a GE 465 and later a GE 265, which was located at the Ford Motor Company No. 2 Engineering Center. In the early 70s, one of the ACTS salespeople came to me with a project that he was interested in. What was then the local telephone company, Michigan Bell, had a significant problem with people cutting telephone lines that were buried on their property. Michigan Bell had started a program called Miss Dig. Miss Dig was an attractive employee of the telephone company who was dressed in a miniskirt and white knee-high go-go boots.1 In print and TV ads, Miss Dig encouraged people to call the telephone company before they dug on their property. It was and still is an expensive proposition to send a team out to a property every time somebody calls, so the salesperson was wondering whether this might be solved by computers. I thought about the problem. My thought was that if we could create what would now be considered a Digital Twin of the counties that Michigan Bell was operating in, we could indicate which parts of a property had a telephone line in them. The problem was that the solution I envisioned meant dividing the area up into one-square-foot cells and then indicating within each cell whether there was a telephone line in it or not. A quick calculation, however, showed that the amount of data this required would completely dwarf the capacity of the GE 465 timesharing system, which had 64k 24-bit words, or 192,000 bytes, of internal memory and maybe 100 MB of disk space. However, I continued to think from time to time about the usefulness of representing physical things in digital space. Later in the 70s, when I was involved with the world’s first supercomputer, the Illiac IV, I quickly concluded that even that computer system didn’t have anywhere near the capacity for a project like this. By my 30s I had started my own computer company and had the privilege of interacting with some of the pioneers of the information networking area, such as Bob Metcalfe, one of the inventors of Ethernet. However, these early personal computer systems clearly didn’t have the capacity for virtualization. By the late 90s, I had tired of being a corporate executive, even if it was the company I founded. In fact, I had taken my company public in the mid 90s, so it was not really my company any longer. I was also spending far more time dealing with lawyers and accountants than with the technology. I decided that it was time to do something different. That something different was enrolling in a new multi-disciplinary, executive-oriented doctoral program (EDM) at Case Western Reserve University in Cleveland, Ohio. I was interested in moving beyond the commercial aspects of information processing and delving deeper into the underlying theory and constructs about information itself.
1 I actually knew the woman who was Miss Dig. My wife, Diane, worked for Michigan Bell, so we were at many of the same social events.
I was particularly interested in the idea that the information that was embedded in physical objects could be stripped from those objects and created as an entity in its own right. Because of the exponential increases of Moore’s Law, we were rapidly approaching a point where the information that we could obtain by being in physical possession of a physical object could be replicated digitally within a computer. This idea of the duality of a physical object and its embedded information and a virtual object, the information itself, was a concept that began to crystallize early on in the EDM program [16].2 This duality of objects, both physical and virtual, is today known as the Digital Twin Model. I had the idea of the Digital Twin model at the beginning of the millennium. However, because of the compute-, storage-, and bandwidth-intensive requirements of the Digital Twin model, the exponential increases only began making the Digital Twin a reality by the middle of the 2010s. To understand how the Digital Twin concept took shape after the Case Western doctoral program, I would like to describe the path that it has taken over the past two decades.
2 The First Digital Twin Model
The origins of the Digital Twin concept and its associated model are well established in both industry and academic literature. As shown in Fig. 1, the Digital Twin Model was first presented at a Society of Manufacturing Engineers (SME) conference in Troy, Michigan, in October of 2002 [9]. The presentation was on the support/operational phase of the product lifecycle. The model did not even have a name. The slide was simply entitled, The Conceptual Ideal of PLM. A little later that year, I presented the Digital Twin in a more general way at the organizational meeting of what we were calling the Product Lifecycle Management Development Consortium (PLM DC) [10]. This was a meeting in the Lurie Engineering Center at the University of Michigan. The purpose was to garner commitment for what was then a new product-oriented concept, Product Lifecycle Management (PLM). The idea was to create a research center at the University of Michigan focusing on PLM applied research. I was a Co-Director of the Center. The attendees were of two types: engineering and information technology executives from the auto industry, both OEMs and Tier 1s, and representatives from the nascent PLM software community that included EDS,3 Dassault, PTC, and MatrixOne (subsequently acquired by Dassault).
2 I am often asked, “So why didn’t you do your dissertation on Digital Twins?” The objective of the doctoral candidate is to get through the program to receive one’s degree. That means picking a dissertation research question that can be understood and approved by one’s doctoral committee. Brand new concepts make it extremely difficult to meet those requirements.
3 EDS, which was owned by General Motors at that time, went through some corporate transitions and is now Siemens.
Fig. 1 The viewgraph from the first presentation about the Digital Twin as part of the conceptual idea for Product Lifecycle Management (PLM)
While the Digital Twin model was the same model presented earlier at the SME conference, the model was intended to convey its applicability across the entire lifecycle of the product. Because of the automotive industry attendees, there was a strong engineering and manufacturing focus. The Digital Twin model from the 2002 time frame is substantially unchanged from today’s Digital Twin model. The original model contains the three main components of today. These components are: (a) physical space and its products, (b) virtual or digital space and its products, and (c) the connection between the two spaces. There was a fourth component in this original Digital Twin model. Since the idea of virtual or digital spaces was relatively new at the time, this component was intended to emphasize that unlike physical space, where there is a single instance that we have access to, virtual or digital spaces have an infinite number of instances that we can use. Because it was an automotive group, the example I used to illustrate this was crash testing. I said, “in physical space, the car that is crash tested is destroyed and cannot be used again. In these virtual spaces, we can crash test that same vehicle repeatedly.” When the Digital Twin model was introduced in 2002, I did not even give it a name. As Fig. 1 shows, it was simply the “Conceptual Ideal for PLM.” I did name the concept in a 2005 paper as the Mirrored Spaces Model [11], but changed the
name in my first book on PLM in 2006 [12] to the Information Mirroring Model, where I also called the Digital Twin a virtual doppelganger [27]. It was not until 2010 that the concept acquired its “Digital Twin” name. I was a consultant to NASA, and modeling and simulating spacecraft was instrumental in thinking about the Digital Twin and its entailments.4 The NASA colleague who introduced me to NASA, John Vickers, took the inelegant names I had for the DT concept at the time and coined the actual “Digital Twin” name. He also introduced the Digital Twin within NASA in his 2010 roadmap [21]. In spite of this, the Information Mirroring Model name stayed the same in my second book on PLM [13]. However, I did take the hint that John Vickers was on to something with the name and included a footnote mentioning John and the “Digital Twin” name. The Digital Twin name finally made it into my work in my often-cited manufacturing white paper [14]. From that point on, I have used Digital Twin as the name of the concept and model. “Digital Twin” well conveys the conceptual idea behind the Digital Twin that exists through today. It is the idea that the information about a proposed or actual product can and should be an artifact in its own right. This allows us to move work that has historically been done in the physical world into the virtual world, as I shall discuss later in this chapter.
3 Digital Twin Model Today
It is useful to review just what a Digital Twin is. The Digital Twin Model is a concept that, as shown in Fig. 1, consists of three main elements: an actual or intended physical element on the left side that currently exists or will exist in the physical world (the “Physical Twin”) [15], the virtual or digital counterpart on the right side that exists in the virtual or digital world (the “Digital Twin”), and the communication channel of data and information between these two elements (the “Digital Thread”). Figure 2 is the Digital Twin model today. The graphics are better courtesy of my time with NASA, when we put together presentations for a DoD conference [5]. The model itself is pretty much the same as the original model in Fig. 1.
Fig. 2 The model of the Digital Twin as practiced today showing the correspondence between the Virtual and Physical Space
There are three main characteristics or core components of the Digital Twin model. On the left side, we have physical space and the physical products that we have always had since time immemorial and will continue to have in the real world. We will always require real, physical products to perform work in the physical world. On the right side is the idea of a virtual space. This is our digital representation of the products that are over on the left side, and the information about those products is contained in this virtual space. The third component is the connection between the physical space and the virtual space. What we want to convey here is the idea of moving data from the physical environment into virtual space to create and inform our virtual product. We then want to use that information from our virtual space over in the physical space. The connection between the two spaces is commonly referred to as the “Digital Thread.” I’m not a fan of that term, because I have hung on by a thread too often in my career. The term “thread” doesn’t give me a lot of comfort. I’d rather have a digital cable or digital pipe. However, we are stuck with the “Digital Thread” terminology. The reason we want to do this is the premise that we want to move work from the physical world of the twentieth century and before to the virtual world of the twenty-first century and beyond. As I will discuss below, we want to substitute information for wasted physical resources. This is the key to why the Digital Twin is so important moving forward in product development, manufacturing, and operations/sustainment. This is the current Digital Twin model. It hasn’t changed very much since it was introduced in 2002. However, we are now able to implement it with information technology that wasn’t available in the early 2000s.
4 There is no lack of articles attempting to trace the Digital Twin back to the Apollo program. It is especially connected to Apollo 13, which might be the most amazing malfunction recovery story ever. I have had the privilege of meeting the Apollo 13 Commander, James Lovell, and hearing first-hand the amazing story of two astronauts sitting on what was basically a couch in Apollo 13, lining up the earth’s meridian vertically and horizontally perfectly on a reticule etched on Apollo 13’s window so that they didn’t burn up or bounce off into space at re-entry. However, the “twin” involved in working the problem on earth was a physical twin capsule simulator. It had very little “Digital Twin” about it. NASA KSC had the same GE 400 series mainframe, so I was asked to help occasionally. There was no digital twin capability.
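To make the three components concrete in software terms, here is a minimal, illustrative sketch (not the author's formal model): a physical product, its digital counterpart, and a Digital Thread that moves data in both directions. All class and field names are assumptions introduced here for illustration.

```python
# Hedged sketch: the three components of the Digital Twin model as
# minimal classes -- physical space, virtual space, and the connection.
from dataclasses import dataclass, field

@dataclass
class PhysicalProduct:
    serial_number: str
    sensor_readings: dict = field(default_factory=dict)  # e.g. {"temp_c": 71.3}

@dataclass
class DigitalTwin:
    serial_number: str
    state: dict = field(default_factory=dict)     # mirrored sensor state
    commands: list = field(default_factory=list)  # decisions to push back

class DigitalThread:
    """Two-way data/information channel between the two spaces."""
    def sync_to_virtual(self, physical: PhysicalProduct, twin: DigitalTwin) -> None:
        twin.state.update(physical.sensor_readings)    # physical -> virtual

    def apply_to_physical(self, twin: DigitalTwin, physical: PhysicalProduct) -> None:
        for command in twin.commands:                  # virtual -> physical
            physical.sensor_readings["last_command"] = command

pump = PhysicalProduct("PUMP-42", {"temp_c": 71.3, "flow_lpm": 12.0})
twin = DigitalTwin("PUMP-42")
thread = DigitalThread()
thread.sync_to_virtual(pump, twin)
twin.commands.append("reduce_speed_10pct")
thread.apply_to_physical(twin, pump)
print(twin.state, pump.sensor_readings["last_command"])
```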
4 Digital Twin Scale and Scope
The widespread interest in the Digital Twin dates from about the middle of the last decade. It roughly coincides with the publication of one of the first popular press articles on the topic. The article was originally published in the Economist’s GE Look Ahead section, entitled, The Digital Twin: Could this be the 21st-century approach
to productivity enhancements? The article was re-published by the World Economic Forum, where it is still available [28]. In the 2015 timeframe, there were few mentions of the Digital Twin. Searching Google then, there would have been few hits, with most of them being mine. In 2019, there were about a million hits. Today, in 2023, there are over 9 million hits, with a search count of images even higher. In Google Scholar, the number of academic papers increases from around 1,000 to over 52,000 in 2023. The scope of the Digital Twin is also increasing rapidly. The Digital Twin was originally the purview of aerospace and manufacturing. It has rapidly expanded from that into ships [7, 20], railway infrastructure [1], oil and gas (O&G) [19], and smart buildings and smart cities [2]. One of the rapidly growing areas is healthcare, with Digital Twins of people [23]. There are articles on implants, cardiac care [24], prosthetics [17], and Covid treatments [3]. There is even an article that proposes Digital Twins should help select your spouse [6]. Digital Twins are also of interest to the pharmaceutical industry. There is an article that suggests that it is unethical to use placebos for humans in the standard double-blind tests. The ethical approach is that placebos should be given only to the Digital Twin of the human [22]. I had always thought that Digital Twins could be used for non-tangible things like processes. My premise was that if we could visualize it, we could create its Digital Twin. I intentionally kept away from the intangible early on so as to not add additional elements that might be confusing. However, we are now seeing Digital Twins of manufacturing processes, supply chains [4], financial products, and other intangible things. While there is always a danger of over-hyping a concept, there does seem to be the ability to move even intangible work from the physical world into the virtual world.
5 Digital Twin Types
As shown in Fig. 3, I divide the product lifecycle into four phases: create, build, operate/sustain, and dispose.5 While the lines of demarcation between the phases are not sharp, this is a useful way of looking at the lifecycle of a product. Under the Digital Twin framework, there are three types of Digital Twins, depending on the phase of the product lifecycle. Figure 4 shows the three types of Digital Twins.
5 The dispose phase is part of the product lifecycle, and I have discussed it in my PLM books. However, there just has not been much interest in exploring Digital Twin use cases. I will not mention the disposal phase further in this chapter.
Fig. 3 The four phases of the Product Lifecycle that are useful for thinking about the uses of Digital Twins in Product Support
Fig. 4 The types of Digital Twins that accompany different phases of the Product Lifecycle
5.1 Digital Twin Prototype (DTP)
We first start off with what is called the Digital Twin Prototype (DTP), which is the prototypical Digital Twin. This is the idea that we have a Digital Twin before we have a physical product. This is because we want to move as much work as we possibly can into the virtual realm. We would like to create our product, test our product, manufacture our product, and support our product virtually. Only when we get the product as perfect as we can make it do we want to manufacture the physical product. If we are going to make mistakes, the virtual realm is the place to make them, because the cost of mistakes made virtually approaches zero. The DTP is all the products that can be made. The DTP is the product and its variants. As we can see from the figure, the product takes shape over time. The product goes from an idea to a first manufactured article.
Virtual Reality (VR) technology is extremely useful here. The ability of humans to use their highest-bandwidth input, their eyes, allows them to process much more data than they could from reams of numbers. Humans, as creatures of the physical world, need to see things to fully understand what is occurring.
5.2 Digital Twin Instance (DTI)
When we start to produce production products, we transition to creating Digital Twin Instances (DTIs). These are all the products that are made. I now want to create a Digital Twin Instance of a specific product. I no longer have only the Geometric Dimensioning and Tolerancing (GD&T) for a manufactured instance, such as X ± 0.05 mm; I now have the specific measured value for a specific part. I have serial numbers and not just simply the name of this assembly. Since we are going to want to track this product throughout its life, we need the As-Built of that product instance as it is created. The requirement for a DTI is driven by the business use case of having this information. The need for a DTI corresponds to the complexity and importance of the product. The F-35 pictured in the figure needs a DTI. A paper clip doesn’t. A good deal of the information for a DTI is going to come from the first type of Digital Twin, the DTP. This information will not need to be duplicated. However, we now move from the ideal specifications to the measurements of individual products. Augmented Reality (AR) technology will be increasingly important for DTIs. Instead of the Physical Twin and the Digital Twin being separate things, AR allows the Digital Twin to be overlaid on its Physical Twin. With AR equipment, technicians can not only see the product in front of them, but they can also see the performance of that product, such as temperature gradients, fuel flow speeds, or power outputs.
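The distinction between the DTP's toleranced specifications and the DTI's as-built measurements can be shown with a small sketch; the feature name, nominal value, tolerance, and serial number below are hypothetical.

```python
# Hedged sketch: a DTP carries the nominal dimension and tolerance;
# each DTI records the measured value for one serial-numbered instance.
from dataclasses import dataclass

@dataclass
class DTPFeature:                 # design-side specification (DTP)
    name: str
    nominal_mm: float
    tolerance_mm: float

@dataclass
class DTIMeasurement:             # as-built record for one instance (DTI)
    serial_number: str
    feature: DTPFeature
    measured_mm: float

    def within_spec(self) -> bool:
        return abs(self.measured_mm - self.feature.nominal_mm) <= self.feature.tolerance_mm

bore = DTPFeature(name="bore_diameter", nominal_mm=25.00, tolerance_mm=0.05)
instance = DTIMeasurement(serial_number="SN-1017", feature=bore, measured_mm=25.03)
print(instance.within_spec())  # True: 25.03 is within 25.00 +/- 0.05
```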
5.3 Digital Twin Aggregate (DTA)
The third type of Digital Twin is the Digital Twin Aggregate (DTA). This is the aggregation of all the products that have been made. We can collect and aggregate the data from the population of products to provide value. We would like to predict issues or failures with the product before they occur. We would like to correlate certain sensor readings with resulting issues. When we see these sensor readings in products, we can alert the user that this is an indication that a future problem has a high probability of occurring. We want to move from periodic maintenance to condition-based maintenance. While we would prefer to have causation rather than correlation, we will happily accept correlations if they prevent product failures, even at the expense of replacing some parts too early.
Fig. 5 An illustration of the relationship between Digital Twin Aggregates (DTAs) and Digital Twin Instances (DTIs) and how they are used in interrogation, prediction, and learning
Digital Twin Aggregates (DTAs) are the aggregation or composite of all the DTIs. DTAs are both longitudinal and latitudinal representations of behavior. Their longitudinal value is to correlate previous state changes with subsequent behavioral outcomes. This enables, for example, prediction of component failure when certain sensor data occurs. Latitudinal value can occur via a learning process, when a later group of DTIs learn from the experiences of previous products. That learning can be conveyed to the rest of the DTIs from then on. Figure 5 shows an example of DTI and DTA use in interrogation, prediction, and learning.
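A simplified sketch of this longitudinal use of the DTA is shown below: sensor histories from many DTIs are pooled, and a crude alert threshold is derived that flags instances likely to fail. The data and the thresholding rule are invented for illustration; a real implementation would use proper statistical or machine-learning methods.

```python
# Hedged sketch: aggregating DTI sensor histories into a DTA and using a
# simple learned threshold to predict failures. Data are illustrative.
from statistics import mean

# Each DTI contributes (peak_vibration_mm_s, failed_within_90_days).
dti_history = [
    (2.1, False), (2.4, False), (2.0, False), (5.9, True),
    (6.3, True),  (2.7, False), (5.5, True),  (3.0, False),
]

# "DTA" step: derive a crude alert threshold from the aggregated population,
# halfway between the mean of healthy and failed readings.
healthy = [v for v, failed in dti_history if not failed]
failed = [v for v, failed in dti_history if failed]
threshold = (mean(healthy) + mean(failed)) / 2

def predict_failure(peak_vibration: float) -> bool:
    """Alert when a new DTI reading crosses the population-derived threshold."""
    return peak_vibration >= threshold

print(f"Alert threshold: {threshold:.2f} mm/s")
print(predict_failure(5.7))  # True  -> schedule condition-based maintenance
print(predict_failure(2.3))  # False
```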
6 Digital Twin Types Throughout the Lifecycle
Since the Digital Twin model applies to the entire lifecycle of create, build, and operate/sustain, we need to understand which Digital Twin types apply to the various lifecycle phases. We can see visually in Fig. 6 how this occurs.
Fig. 6 The application of the different Digital Twin Types through the Product Lifecycle
The top line of the figure is the create phase. The figure shows the standard view of the Digital Twin model with both the physical and virtual product. However, in this phase, we create the virtual product first. In keeping with moving our work into the virtual world, we ideally would like to design the product, test the product, manufacture the product, and support the product all virtually. Only when we have all the issues worked out do we want to create a physical product. Obviously, this is the ideal. We currently may do some physical prototypes at this phase, but we are seeing many organizations dramatically reduce the need for those physical prototypes. However, we need to make a physical product and put it into production. It’s not enough to have the designs of the product that we have perfected. We need to physically produce it. This means that we need to create a Bill of Process (BoP). This is reflected in the representation on the far-right side. There is a misconception that manufacturing is a function of engineering. The reality is that we need the desired product plans to be the result of a function of manufacturing. It is at this stage that we have both the design plans and the manufacturing plan that results in those design plans being realized. We are using the DTP here. The next level is the build phase. It is in this phase that we move into production. As shown by the digital threads, we are using the BoP from our Digital Twin of the designated production equipment to provide that information to the DTIs of the actual machines. Those machine DTIs will then provide the required information to the physical machines on the factory floor. Those physical machines then produce the physical products on the left. As we are producing those physical products, we want to create the As-Builts of their Digital Twin counterparts. These are the DTIs. We need to capture the necessary data: the actual measurements of what we have produced, key process measurements of how we have produced them, serial numbers of required parts, and quality control data to assure ourselves that these products have been produced to our specifications. In this build phase, we use the DTP information and create the DTIs. Again, much of the information that the DTIs need will come from the DTP. We will not need
to duplicate that information. However, the DTP contains specifications with tolerances. The DTIs will have the exact measurements of how these instances were built. The bottom level is the operate/sustain phase. In this phase, we want the DTIs to reflect any changes to their corresponding physical products. We want to capture behavior and performance metrics. We also want to have the DTI maintain longitudinal data. Unlike in the physical world, where once the moment passes we have little access to history, in the DTI we can capture that data and therefore have full access to the product’s history of behavior. It is in this phase that we create the Digital Twin Aggregate from the DTIs. We can start to correlate precursor data with ensuing results. This will allow us to predict future behaviors, such as product failures, as we collect more and more data from the DTIs. We can engage in machine learning to determine how products decline from their optimum performance at the beginning of their operation and then pass that learning on to later versions of the product. Finally, by using the DTA, as the digital thread going from the operate/sustain phase back to the create phase shows, we can close the loop between product design and the actual product behavior. Too often, future generations of products exhibit the same flaws. This is a result of the product designers not knowing that their assumptions about the product performance are incorrect. Using the DTA can correct this issue.
7 Digital Twin Underlying Economics
Intelligence can be defined as goal seeking while minimizing resources. We perform what I will call “tasks” to accomplish goals. As shown in Fig. 7, if we take any physical task, we can divide that task into two parts. In the left bar, the lower part is the most efficient use of physical resources to complete the task. The upper part is any use of physical resources above that, which, by definition, is wasted physical resources.
Fig. 7 The use of Digital Twins to achieve efficient use of resources in performing “tasks” to accomplish goals
To assess this properly, there are constraints on this task. The first is that the task’s goal can be successfully completed. The second constraint is that the most efficient use of resources is set prior to the task. As an example, consider building a runway. One way would be to have thousands of people prepare the runway with hand tools, as the Chinese people did in World War II. The second way would be to have earthmoving equipment deployed to build the runway. The cost, time, and deficiencies of the two methods will vary, and may vary greatly. However, in this paradigm, the most effective use of resources is relative to the physical resources that are available to the task. Our method of evaluation is chosen to be cost. In a capitalistic society, we can cost physical resources. We can cost the time of human labor. We can cost the time of the capital equipment that is involved. We can cost the energy that we use for the task. Finally, we can cost the material that we will use in completing the task. We also have overhead costs. These are costs that are incurred simply by virtue of performing the task. These overhead costs start when the task starts and end when the task ends. The right bar shows the impact of information. The bars are similar in that the most efficient use of physical resources is the same in both the right and the left bar. However, in the right bar, we are showing the ideal situation where information is a replacement for all wasted resources. In our imperfect physical world, this will never happen. However, the key point here is that information is a replacement not for the physical resources we require to perform the task in the most efficient manner, but for the wasted resources over and above that. This is not simply about efficiency. It is also about effectiveness. If we have information that our approach to the task will not result in the goal being reached, we would not expend the resources on a futile effort. However, information is not costless. The issue we have with information is that there is not a unit of measure that we can use for the cost of information for our task. We do expend physical resources to produce information. In our current environment, this is the hardware and software necessary, the human resources required to engage with the hardware and software, and the energy required to power our equipment. For a task, we can use these costs as a proxy cost for units of information. This brings us to the condition that allows us to state that information is a substitute or replacement for wasted resources. This is indicated by the formula in the figure. That condition is that the cost of the information is less than the cost of wasting physical resources in performing the task over all the times the task is performed. In most situations, it would make very little sense for us to create information systems for simple tasks. It would make sense to use trial and error to perform such tasks. The wasted resources will be substantially less than the cost of information. For complex tasks that are repeated, the cost of information has been shown to be less than the cost of wasting physical resources. While the impact of information technology has been debated, it is pretty apparent to the casual observer that the exponential increase in computing capability over the last 50–60 years has had a substantial effect on both efficiency and effectiveness.
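One way to write that condition explicitly is given below. This is a reading of the description above rather than a reproduction of the formula in the figure; the symbols C_I, C_{W,i}, and N are introduced here for illustration.

```latex
% Information substitutes for wasted resources when its cost is lower than
% the wasted-resource cost summed over all executions of the task.
% C_I   : cost of producing the information (hardware, software, labor, energy)
% C_W,i : cost of physical resources wasted in the i-th execution of the task
% N     : number of times the task is performed
\[
  C_I \;<\; \sum_{i=1}^{N} C_{W,i}
\]
```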
This is the economics that is driving Digital Twins. In fact, it is economics that is behind digital transformation in general and is driving the movement of work from the physical world to the virtual world.
8 Digital Twin Fallacy
There is a widespread fallacy that the Digital Twin does not exist until and unless there is a physical product. In fact, the majority of academic papers in a survey took this position [26]. Some authors go so far as to equate Digital Twins with human twins. This is overspecifying the “twin” metaphor. Metaphors are extremely powerful in invoking complex mental constructs and even entire mental spaces in humans. Metaphors are not simply comparisons, but generative devices that allow rich understandings, new perspectives, and generative ideas that open up areas of opportunity that had not previously been thought of [8]. The twin metaphor has only two key attributes: duality and strong similarity. With respect to this fallacy discussion, there is no metaphorical requirement for timeline simultaneity or for the precedence of one type of twin before another type of twin. That is, there is no requirement that a twin only exists if its counterpart exists simultaneously. Nor is there a requirement that one type of twin, the Physical Twin, must exist before the other type of twin, the Digital Twin, can exist. The only requirement is that a twin’s counterpart exist at some point in the twin’s lifecycle. This means the Digital Twin can exist prior to the creation of a physical counterpart and can also exist after the physical counterpart ceases existence or is retired. The requirement that there must be an actual physical thing before there can be a Digital Twin is simply a wrong perspective. The key differentiator of whether a digital model and associated information is a Digital Twin is that it is intended that this model become a physical product and that its physical counterpart is realized. It is that intention and the work that goes into the realization of that intention that differentiates a digital model from a Digital Twin. If the physical counterpart is never realized, then the digital model was never a Digital Twin. A digital model of a flying carpet will never become a Digital Twin because there is no intention, let alone ability, to make it a physical product. From its inception, the Digital Twin has always been intended to exist in all four phases of the product lifecycle: create, build, operate/sustain, and dispose [12]. It is embodied in the saying that “no one goes into a factory, pounds on some metal, and hopes an airplane will come out.” A tremendous value of the Digital Twin is that it does exist before there is a physical product. Dispelling these misperceptions of the “twin” metaphor, there are five major reasons why the Digital Twin does not require a physical product before the Digital Twin exists. These reasons are:
• The DT framework should cover the entire product lifecycle
• The DT is especially valuable during the create phase
• The DT does exist prior to the physical product – it just has a different name
• The DT regresses to being functionally siloed if there is no DT prior to the physical product
• The DT existing only after there is a physical product is conceptually inelegant and piecemeal

The practical reality, even for those claiming that the Digital Twin does not exist prior to a physical product, is that there actually is a Digital Twin within their organization before there is a physical product. This product information continues to exist throughout the entire product lifecycle. In all those situations, it simply has a different name. It may be called the digital model, the digital design, the digital systems model, or some such variation. However, it has most if not all the characteristics of the Digital Twin Prototype (DTP).

While the Digital Twin has value across the product lifecycle, the DTP is especially valuable in the create phase. It is in this phase of the lifecycle that work can be moved from the physical world into the virtual world. If virtual products can be modelled and tested in a virtual environment, replacing physical prototypes and testing, the potential for a reduction of wasted physical resources is substantial. As noted above, these wasted resources include material, energy, and labor time, but also elapsed development time. Even though the create phase may be short in comparison to the entire lifecycle of a product that may span decades, decisions in this phase have a major impact in determining future product costs. Estimates of product cost determination during the create phase are as high as 80% [18]. There is an increasing ability to perform virtual testing at a fraction of the cost and in less time than physical testing. This has the potential to reduce costs, improve quality, and reduce time to market.

A major problem with renaming the DTP as something different and not having that something in the Digital Twin framework is that it encourages and maintains functional siloing. If this differently named thing exists in engineering, prior to manufacturing having a Digital Twin of the instance of a physical product, then this information will tend not to be shared between engineering and manufacturing. The powerful aspect of the Digital Twin is that it is product centric throughout the entire product lifecycle. Information is populated and consumed irrespective of the functional area. For the Digital Twin to exist only after moving to manufacturing greatly diminishes its effect. A substantial amount of the information for a specific Digital Twin Instance (DTI) is contained in the Digital Twin Prototype.

Finally, it is inelegant and piecemeal not to have the Digital Twin encompass the entire product lifecycle. The intent of the Digital Twin is to have a framework that persists throughout the entire lifecycle. That has been the intent since the origination of the Digital Twin concept. Requiring the Digital Twin to exist only once there is a physical product is inconsistent with that approach. Having different types of Digital Twins, DTP, DTI, and DTA, allows us to have a consistent framework, yet
differentiate how the Digital Twin manifests itself at different phases of the product lifecycle.
9 Digital Twin Evolution

The Digital Twin is evolving at a fast rate. Figure 8 shows this evolution. The move from physical to virtual maturity is along the x-axis and shows the progression over time. The evolution of information in both scale and scope is shown on the y-axis. Obviously, as we move more and more work into the virtual space, the amount and complexity of information increases.
9.1 Traditional – Phase 0

This is labelled as Phase 0 because it is the phase that humans have primarily been in from the beginning of time until recently. At the far-left side is what is called the traditional representation. As soon as an idea for a product began to take shape, it immediately took a physical form. It had to be translated into atoms almost immediately to be shared with other people. Initially this was in the form of sketches and physical models, scale or otherwise. In the mid-1800s it started to take the form of
Fig. 8 Different phases in the evolution of the Digital Twin: from Traditional to Intelligent DTs
blueprints that had measurement details about the product. This continued throughout the twentieth century. We started processing information in computers in the late 60s and early 70s. In the 80s, the ability to put geometric information in computers began.6 However, this information was effectively just an electronic version of blueprints. In fact, CAD, which stands for Computer-Aided Design, was mostly a means to capture this 2D geometric information in a computer and to be able to print out multiple versions of it. Up until this point in time, duplicating blueprints was done by hand.
9.2 Transitional – Phase 1

The Transitional Phase, Phase 1, is the beginning of the Digital Twin era. The 2000s marked a seismic shift in moving this information into the virtual arena. The development of 3D models in a computer was a quantum leap in terms of having geometric information fully contained within a computer. Not only did 3D models give a visual representation of the product from any angle, but they also allowed for the integration of multiple parts. We started to get comfortable with being able to manipulate virtual objects within the computer space without having to have them take physical shape in the real world first. This reduced the need for physical prototypes, as these physical prototypes began to be replaced by Digital Mockup Units (DMUs).

The ability to simulate the behavior of these geometric models was the next big step. Not only could we do form and fit for our geometric models, but we could also simulate and analyze their behavior. This was not something new. What was new was that the computing resources capable of doing this moved from requiring a supercomputer to an ordinary computer. This was courtesy of the advances predicted by Moore's Law.
9.3 Conceptual – Phase 2

The Conceptual Phase is when we take a concept or model and start to ask, "what if". We begin to create processes and technologies to experiment, test, and even begin to implement the "what ifs". The purpose is to determine if the concept creates value on an ad hoc basis. For powerful concepts, such as the Digital Twin and the underlying premise of moving work from physical space into virtual space, this can be a period of explosive growth in scale and scope. There is ample evidence that this is occurring.

6 In the early 70s, I worked on a General Motors feasibility project to determine if computers could handle geometric figures. We were successful in programming some rudimentary figures that may have even been 3D. However, it took an entire IBM 360/65 computer to do so. The 360/65 cost millions of dollars, so this wasn't very feasible, except as a proof of future capabilities.
In this phase, the Digital Twin is an entity that we conceptually create from disparate and even fragmented data sources. We use different existing systems to pull data from. We start building correlations and even causations from data source inputs to results. We build different simulation views and determine how well they map to reality. We start to put manual processes in place to pull the data from different sources, even if on an ad hoc basis, to create a Digital Twin view. We attempt to determine if our concept and ensuing models can do two things: replicate past and current reality and predict future states. At this stage, the aphorism that all models are wrong, but some of them are useful, is accurate. We want to determine the useful aspects and refine the models so that they both replicate reality and allow us to predict outcomes, even if probabilistically. At this phase, while there may not be a discrete, tangible Digital Twin, there is enough substance that users can arrive at a shared view that the Digital Twin exists.
9.4 Replicative – Phase 3

Phase 3 is what I will call the Replication Phase. It is in this phase that we have Digital Twins that are not in do-it-yourself form but exist as entities. This will require technological platforms that do what we did ourselves in the conceptual phase and pull together the necessary information to present to us actual Digital Twins. While in some cases the platform system may have its own repositories for information, this is not a requirement. Because it is information and not physical artifacts, the information can exist logically in different places and applications. What the platform will do, on some sort of basis depending on the immediacy of the data, is pull together the requisite information to present it digitally. The platform will need access to this information on a secured basis. The platform will need the mechanism to obtain this information either in the form of data transfers or APIs. In some cases, the platform may simply act as a pass-through for existing systems that contain that information. For example, the PLM system may contain all the geometric information to project a complete visual Digital Twin.

We would expect the platforms to support all three Digital Twin types, DTP, DTI, and DTA. In the DTP or create phase, the platform would support the development of the product so that at any point in time the Digital Twin of the to-be product is taking shape. The platform could also consist of behavior simulations that have been certified as validated replicas of the physical world. This would mean that testing could take place in the virtual world and to a great extent replace physical testing. The platform would need to capture or have access to the actual as-built products and their DTIs. The platform would then be able, at any point in the future, to show a replica of what the physical product of that DTI was doing. The driving economic value that will make these platforms economically viable and attract
investment is being able to collect the data of the product as it is in use and replace wasted physical resources. The ability to create the Digital Twin Aggregate and, through either correlation or causation, predict future performance will be a significant opportunity for revenue production. In this phase the Digital Twin platform would support both interrogation of the product status at any point in time and degrees of prediction. The opportunity is that the more these platforms scale, the more data is turned into information, so that real value will be produced.
9.5 Front Running – Phase 4

Phase 4 is the Front Running Phase, utilizing the Intelligent Digital Twin (IDT), which is characterized by what I refer to as Front Running Simulations (FRS) that are constantly occurring with a product's Digital Twin. The Digital Twin is "intelligent" because AI is employed to constantly assess the data and make predictions. This phase is marked by moving from a platform that is reactive to inquiries from its users to a platform that is proactive in presenting information to its users on a constant basis. This means that this platform is online all the time.

In the create phase the IDT is in cued availability mode. This means the IDT is a constant agent looking at what the user is doing. The Digital Twin is constantly looking at the vast amount of data it has access to from the different sources that it is connected to and, with its cues from the user, providing information that it perceives the user needs to know about. For example, if the user is designing a new part, the IDT will look at the requirements for both geometry and behavior and then propose parts that have the same key geometry characteristics and key behaviors. It will also be constantly running simulations for both fit and behavior of those parts as the part is developing, to prevent the user from wasting time on things that will not meet the requirements. This means we can move from periodic reviews to continuous reviews. It will also do this at higher levels of the full system so that, in essence, there is a constant review of the complete system.

For DTIs, the proposal is that the IDT will constantly be running a simulation (FRS) in front of the performance of the product. At every new periodic t0, the IDT will run a simulation of the future, predicting potential system states, especially ones that are predicted to cause problems. For example, the IDT will constantly be projecting into the future and warn the user of impending failures or malfunctions. In this phase, the IDT acts as a crystal ball projecting outcomes with probabilities, utilizing not only the DTI itself but the DTA of all the products that it has information for. With enough data from its ever-growing population of products, the Digital Twin's front running capability will be able to put probabilities on its predictions. For example, FRS will predict that a specific part with the current sensor
readings will fail in the next month with a 60% probability but fail within 2 months with a 95% probability.

Clearly this will be the compute-intensive phase. However, projections of computing capability over the next decade or two predict that there will be a tremendous amount of computing capacity and associated information technology capability, such as storage and communication bandwidth, available. Current rough predictions of computing capability are that from a current capability of 80 billion transistors, by 2030 that will rise to 6 trillion and by 2040 to 885 trillion. By preventing product failures, warning of errors in human judgment, and preventing avoidable failures, the IDT will be in a position to substitute information for the waste of physical resources. Especially in preventing catastrophic failures and the resulting loss of life, the IDT takes the Digital Twin to its logical conclusion.
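As a purely illustrative sketch of how such probabilistic front-running predictions could be derived from the DTA, the following Python fragment estimates failure probabilities for one instance from the observed histories of similar units; the data structure and the simple similarity rule are assumptions made here, not the method proposed in the chapter.

```python
# Illustrative sketch only: estimate failure probabilities for one Digital Twin
# Instance (DTI) from the history of similar units in a Digital Twin Aggregate (DTA).
# All names and the nearest-neighbour style approach are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class FleetRecord:
    sensor_reading: float   # condition indicator captured before failure
    days_to_failure: float  # observed remaining life of that unit

def failure_probability(current_reading: float,
                        fleet: List[FleetRecord],
                        horizon_days: float,
                        tolerance: float = 5.0) -> float:
    """P(failure within horizon) estimated from similar units in the DTA."""
    similar = [r for r in fleet if abs(r.sensor_reading - current_reading) <= tolerance]
    if not similar:
        return 0.0
    failed_in_horizon = sum(1 for r in similar if r.days_to_failure <= horizon_days)
    return failed_in_horizon / len(similar)

# Example: the front-running simulation reports probabilities for two horizons.
fleet = [FleetRecord(82.0, 20), FleetRecord(80.5, 45), FleetRecord(79.0, 70),
         FleetRecord(81.2, 25), FleetRecord(83.1, 15)]
print(failure_probability(81.0, fleet, horizon_days=30))  # "within 1 month"  -> 0.6
print(failure_probability(81.0, fleet, horizon_days=60))  # "within 2 months" -> 0.8
```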
10 Digital Twin Progress Through Testing

With apologies to Alan Turing [25], the critical requirement is to have the computer simulate everything in the universe, except human intelligence. If we can fully simulate the inanimate universe but only obtain assistance from AI, we will obtain tremendous value for products throughout the product lifecycle.

Modeling and simulation (M&S) are about representing physical products and their behaviors in a digital or virtual environment. In this context, a model is a static representation of the physical product. Current technology allows this model to be a three-dimensional replication that has complete fidelity in terms of dimensioning. The behavior is modeled in mathematical form, describing the forces acting on the physical product and the forces the physical product generates and exerts on its environment. Simulation is dynamic. Simulation adds the component of time and describes how the product changes as forces act on it and how the forces it generates act on the environment. Simulation shows the changes in geometry as the mathematical behavior model of forces transforms material. A simulation of a vehicle crash test shows, at a user-defined time scale, the deformation of all components of the automobile as it crashes into a barrier.

Simulation relies on two things: an increasing knowledge of the physics that determines the physical environment and the computing power to calculate the physics at the required scale and fidelity. This has meant that there have been limitations on the products that could be modelled and simulated. A few decades ago, only simple product systems could be simulated. With the exponential increase predicted by Moore's Law, today even complex product systems can be simulated.

The question is: how do we decide how well we are doing with M&S? Over a decade and a half ago, I proposed some Tests of Virtuality to answer that question. The Tests of Virtuality were modeled on the Turing Test. The original formulation of the Grieves Test of Virtuality had three distinct tests: a visual test, a performance test, and a reflection test. The format of the tests was
similar. An observer was exposed to the physical and the virtual versions. If he or she could not tell the difference between the two versions, then the test was passed.

In the Visual Test, the observer looked at video screens showing a product placed in a physical room and the Digital Twin version. The observer could ask for any spatial manipulation to be done. The observer could ask to see the product from any angle. The observer could ask that the product be disassembled and look at any individual component. In the example of a car, the observer could ask that the doors be opened to look inside at the interior or that the hood be opened to look in the engine compartment. If the observer cannot tell the difference between the physical version and the Digital Twin version, then the Grieves Visual Test is said to be passed.

The behavioral test was a little more difficult. The observer had the same two views of the product, the physical and the Digital Twin one. The observer could then ask that forces be generated and/or applied to both and observe the results. For an airplane, this could mean that its jet engine would be turned on and the plane sent down the runway to take off. This would be an internal force. The observer could also ask that, once the plane was flying, it be put into a steep dive to see the forces that acted upon it. That would be an external force test. If the observer could not tell the difference between the physical and the virtual performance, then it passed the Grieves Test of Performance.

The third test is a test of reflectivity. Reflectivity was defined as the requirement that any change to the physical product be reflected in its Digital Twin. Again, we have the observer and the Physical Twin and Digital Twin versions. In this case, the test is that the observer can see no differences between the two versions. Using the example of an oil rig, if the observer were to compare every valve setting, every gauge, every pump serial number, there would be no difference between the two versions. If that was the case, then the Grieves Test of Reflectivity would be passed.

These tests were meant to be ideal tests. The tests were always meant to be tied to use cases that would provide value to the user. Only those things that provided value were intended to be tested. If there was no value in disassembling a product and determining if the physical and Digital Twin versions were identical, then that would not be part of the test. If a component of the oil rig above was serialized, but there was no interest in tracking serial numbers, then the Physical Twin and Digital Twin versions not being identical would be irrelevant.

So where are we a decade and a half later? The short answer is that these tests of virtuality are easily passed daily. A decade and a half ago, we were close to passing the visual test. Today there is no question that the visualization of physical products and their Digital Twins has the fidelity and granularity that we need for most use cases. The behavior and reflectivity tests were proposed before IoT became so prevalent. Back then, the issue for these tests was going to be getting the appropriate sensor and instrumentation data to maintain the Digital Twin version. Today, with our smart products, we routinely get the sensor information that we need to pass both those tests. Again, we need to remember that this is driven by use cases.
The value to the user needs to be there to expend the resources necessary to maintain the Digital Twin for all these tests.

At this time, I would like to propose a new Grieves Test of Virtuality: the Grieves Test of Prediction. This is a slightly different test and much harder. In this test version, the observer asks that the Digital Twin version be moved a certain amount of time into the future. The observer then waits that amount of time. When that time has elapsed, the observer compares the two versions. If the states of the Physical Twin and Digital Twin are effectively identical, then the Grieves Test of Prediction is passed.
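A minimal sketch of how such a prediction test could be operationalized is shown below; the state representation, the relative tolerance, and the comparison rule are assumptions introduced here for illustration only.

```python
# Minimal sketch of the prediction test described above; the state keys,
# the tolerance, and the comparison rule are illustrative assumptions.
def prediction_test_passed(predicted_state: dict,
                           observed_state: dict,
                           rel_tolerance: float = 0.02) -> bool:
    """Compare the Digital Twin state projected ahead of time with the
    Physical Twin state observed once that time has actually elapsed."""
    if predicted_state.keys() != observed_state.keys():
        return False
    for key, predicted in predicted_state.items():
        observed = observed_state[key]
        scale = max(abs(observed), 1e-9)
        if abs(predicted - observed) / scale > rel_tolerance:
            return False
    return True

# The Digital Twin was asked to project itself 30 days into the future;
# after 30 days, the physical readings are compared against that projection.
projected = {"bearing_temp_C": 74.0, "vibration_mm_s": 3.1}
measured  = {"bearing_temp_C": 74.8, "vibration_mm_s": 3.08}
print(prediction_test_passed(projected, measured))  # -> True
```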
11 Conclusion

In approximately two decades, the Digital Twin concept and model has gone from simply being the "Underlying Premise of PLM" to being considered for use cases in all aspects of human endeavor. This extends from the tangible complex products that the Digital Twin was initially created for to intangible processes, such as supply chains, logistic systems, and monetary systems. Digital Twins are being proposed for such complex systems as cities, the earth itself, and humans, especially in healthcare.

This is being driven by the exponential increase in information capabilities predicted by Moore's law. This increase in capability is allowing us to move work from the physical world into the virtual world. We want to do this because information is a substitute for wasted physical resources.

The Digital Twin needs to cover the entire lifecycle of its physical targets. It is a fallacy that a Digital Twin only exists once there is a physical artifact. There is tremendous value in using the Digital Twin before a physical product exists. Our ideal is to create the product virtually, test the product virtually, manufacture the product virtually, and support the product virtually. Only when we get it all right do we move physical atoms to make a physical product. To cover the entire product lifecycle, the Digital Twin has three types: Digital Twin Prototype, Digital Twin Instance, and Digital Twin Aggregate.

I expect the Digital Twin to rapidly evolve from the conceptual stage that it is currently in, to the Replicative Platform Stage, and then into an Intelligent Digital Twin with Front Running Simulations. These new phases will be compute intensive, but if Moore's law continues to hold true, we will have the computing capability to enable these future Digital Twins.
References 1. Ariyachandra, M. F., & Brilakis, I. (2021). Generating railway geometric Digital Twins from airborne LiDAR data. https://www.researchgate.net/profile/Mahendrini-Ariyachandra/ publication/353452738_Generating_Railway_Geometric_Digital_Twins_from_Airborne_ LiDAR_Data/links/60fe87d72bf3553b29108896/Generating-Railway-Geometric-Digital- Twins-from-Airborne-LiDAR-Data.pdf 2. ARUP. (2019). Evolution of the Digital Twin for the built environment. https://www.arup.com/ perspectives/publications/research/section/digital-twin-towards-a-meaningful-framework 3. Barat, S., Parchure, R., Darak, S., Kulkarni, V., Paranjape, A., Gajrani, M., & Yadav, A. (2021). An agent-based Digital Twin for exploring localized non-pharmaceutical interventions to control COVID-19 pandemic. Transactions of the Indian National Academy of Engineering, 6(2), 323–353. https://link.springer.com/article/10.1007/s41403-020-00197-5 4. Barykin, S. Y., Bochkarev, A. A., Dobronravin, E., & Sergeev, S. M. (2021). The place and role of Digital Twin in supply chain management. Academy of Strategic Management Journal, 20, 1–19. http://genobium.com/32062764.pdf 5. Caruso, P., Dumbacher, D., & Grieves, M. (2010). Product lifecycle management and the quest for sustainable space explorations. AIAA SPACE 2010 Conference & Exposition. 6. de Kerckhove, D. (2021). The personal Digital Twin, ethical considerations. Philosophical Transactions of the Royal Society A, 379(2207), 20200367. 7. Fonseca, Í. A., & Gaspar, H. M. (2021). Challenges when creating a cohesive Digital Twin ship: A data modelling perspective. Ship Technology Research, 68(2), 70–83. https://www. tandfonline.com/doi/pdf/10.1080/09377255.2020.1815140 8. Grieves, M. (2000). Business is war: An investigation into metaphor use in Internet and non- Internet IPOs (EDM dissertation, Weatherhead School of Management, Case Western Reserve University). 9. Grieves, M. (2002a, October 2). Completing the cycle: Using PLM information in the sales and service functions. Bringing product from design to delivery ahead of time and under budget: Best Practices Across Manufacturing Industries. 10. Grieves, M. (2002b, December 3). PLM Initiatives [Powerpoint Slides]. Product Lifecycle Management Special Meeting, University of Michigan Lurie Engineering Center. 11. Grieves, M. (2005). Product Lifecycle Management: The new paradigm for enterprises. International Journal of Product Development, 2(Nos. 1/2), 71–84. 12. Grieves, M. (2006). Product Lifecycle Management: Driving the next generation of lean thinking. McGraw-Hill. 13. Grieves, M. (2011). Virtually perfect: Driving innovative and lean products through Product Lifecycle Management. Space Coast Press. 14. Grieves, M. (2014). Digital Twin: Manufacturing excellence through virtual factory replication (White Paper). Florida Institute of Technology. http://innovate.fit.edu/plm/documents/ doc_mgr/912/1411.0_Digital_Twin_White_Paper_Dr_Grieves.pdf 15. Grieves, M. (2019). Virtually intelligent product systems: Digital and physical twins. In S. Flumerfelt, K. Schwarttz, D. Mavris, & S. Briceno (Eds.), Complex systems engineering: Theory and practice (Progress in astronautics and aeronautics) (pp. 175–200). American Institute of Aeronautics and Astronautics. 16. Grieves, M. (2020). Digital twin: Developing a 21st century product model. In P. Cola, K. Lyytinen, & S. Nartker (Eds.), Voices of practitioner scholars in management (pp. 198–207). CWRU. 17. Hernigou, P., & Scarlat, M. M. (2021). 
Ankle and foot surgery: From arthrodesis to arthroplasty, three dimensional printing, sensors, artificial intelligence, machine learning technology, Digital Twins, and cell therapy. Springer. 18. Iansiti, M. (1998). Technology integration: Making critical choices in a dynamic world. The management of innovation and change series. Harvard Business School Press.
19. Mayani, M. G., Svendsen, M., & Oedegaard, S. I. (2018). Drilling Digital Twin success stories the last 10 years. SPE Norway One Day Seminar. 20. Perabo, F., Park, D., Zadeh, M. K., Smogeli, Ø., & Jamt, L. (2020). Digital twin modelling of ship power and propulsion systems: Application of the open simulation platform (osp). In 2020 IEEE 29th International Symposium on Industrial Electronics (ISIE). 21. Piascik, R., Vickers, J., Lowry, D., Scotti, S., Stewart, J., & Calomino, A. (2010). Technology area 12: Materials, structures, mechanical systems, and manufacturing road map. NASA Office of Chief Technologist. 22. Rozenberg, O., & Greenbaum, D. (2020). Making it count: Extracting real world data from compassionate use and expanded access programs. The American Journal of Bioethics, 20(7), 89–92. https://doi.org/10.1080/15265161.2020.1779857 23. Shengli, W. (2021). Is human digital twin possible? Computer Methods and Programs in Biomedicine Update, 1, 100014. 24. Torkamani, A., Andersen, K. G., Steinhubl, S. R., & Topol, E. J. (2017). High-definition medicine. Cell, 170(5), 828–843. https://doi.org/10.1016/j.cell.2017.08.007; https://www.ncbi.nlm. nih.gov/pubmed/28841416 25. Turing, A. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460. 26. van der Valk, H., Haße, H., Möller, F., Arbter, M., Henning, J.-L., & Otto, B. (2020). A taxonomy of Digital Twins. Americas Conference on Information Systems. 27. Waffenschmidt, S. (2018, March 14–15). I called it double ganger. Best Practice. 28. World Economic Forum. (2015). Can the Digital Twin transform manufacturing. https://www. weforum.org/agenda/2015/10/can-the-digital-twin-transform-manufacturing/ Dr. Michael W. Grieves splits his time between the business and academic worlds. He is the author of the seminal books on Product Lifecycle Management (PLM): “Product Lifecycle Management: Driving the Next Generation of Lean Thinking” (McGraw-Hill, 2006) and Virtually Perfect: Driving Innovative and Lean Products through Product Lifecycle Management” (SCP, 2010). Dr. Grieves originated the Digital Twin concept that is driving twenty-first century product realization. Dr. Grieves was named as one of the Top 2% of Scientists Worldwide by Stanford University in 2021. Dr. Grieves is an acknowledged leading international expert on the Digital Twin and PLM. He lectures world-wide on engineering, manufacturing and operations in both industry and academic conferences. His work is referenced in many publications such as Forbes and Scientific American. In addition to his books, Dr. Grieves has numerous publications and articles. Dr. Grieves has been a consultant and advisor to many leading international manufacturers and governmental organizations such as NASA, Boeing, GM, Unilever, and NNS. Dr. Grieves has over forty years’ experience in the computer, data communications, and manufacturing industries. He has been a senior executive at both Fortune 1000 companies and entrepreneurial organizations during his career. He has served in executive, technical, finance and accounting, marketing, product development and production roles. He founded and took public a systems integration company that he grew as its CEO from zero to $100 million in revenue. Upon stepping down from active management, Dr. Grieves subsequently served as its audit and compensation committee chair with approval from the PCAOB under the then new Sarbanes Oakley requirements. Dr. Grieves has sub
stantial board experience, including serving on the board of public companies in both China and Japan. Dr. Grieves was President of the Michigan Technology Council non-profit that focused on small manufacturers. Dr. Grieves has been a Co-Director of the Purdue PLM Center of Excellence and served as a Visiting Professor at the Purdue University College of Technology. Dr. Grieves has also been affiliated with the Eller School of Business MIS Department at the University of Arizona. Dr. Grieves is Chairman Emeritus of Oakland University’s School of Business Board of Visitors, where he served as Chairman (1988-2008). He has taught in the United States, China, and Europe at the university senior undergraduate and graduate school levels and has authored and taught executive education courses for NASA, University of Michigan, and Purdue. Dr. Grieves was a Professor in the International MBA program at CIMBA University, Asolo, Italy with an appointment at the University of Iowa. Dr. Grieves was Chief Scientist of Advanced Manufacturing, Executive VP, and CFO at Florida Tech. Dr. Grieves has a BSCE from Michigan State University Engineering College and an MBA from Oakland University. He received his doctorate in Executive Management from the Case Western Reserve University Weatherhead School of Management.
Part II
Technologies
Digital Twin Architecture – An Introduction Ernö Kovacs and Koya Mori
Abstract This chapter gives an overview of Digital Twin architectures. It emphasizes the description of a general architecture of a Digital Twin system based on different usage scenarios. These usage scenarios follow the evolution path of Digital Twins, starting from Digital Twins for product lifecycle management (PLM). From this starting point, Digital Twins are used with 3D visualization (AR/VR) and product usage simulations. Then Internet-of-Things (IoT) technology enabled capturing the dynamic state of the real asset. In today's closely connected world, cloud-based Digital Twins are utilizing broadband networks to enhance the real asset with cloud-based functionalities. The emergence of data spaces (such as those defined in GAIA-X or IDSA) enables secure and trusted data sharing. This will be the basis for distributed Digital Twin Worlds that simulate large parts of the world and solve important problems like the decarbonisation of the global society. Each step in this journey has an abstract architecture and a concrete example of how the architecture is used.

Keywords Digital Twin architecture · Product lifecycle management · Digital Twin prototype · Digital Twin continuity · Multi-actor distributed Digital Twins · Society 5.0 · Context management

E. Kovacs (*) NEC Laboratories Europe GmbH, Heidelberg, Germany e-mail: [email protected]
K. Mori NTT Digital Twin Computing Research Center, Tokyo, Japan
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_5

Imagine a world like in fairy tales: "Oh, shake me, shake me, we apples are all of us ripe!" (Grimms Fairy Tales, Mother Hulda). The reader is surprised, as the typically lifeless objects (the ripe apples on the tree) are telling the passing girl what needs to be done. Now imagine today's world of mobile communication, Clouds, and the Internet-of-Things. It is obvious that in this world, we have the technology pieces to create the mentioned intelligent apples and apple trees. We can attach small sensors
to the apple to measure how ripe the apple is, we have the 5G communication technology to cost-efficiently communicate the measured data, and we have the mobile devices as intermediaries to receive the message "Shake me". Still, such services do not exist or exist only for highly specialized devices, typically using proprietary technology. First successful uses of the technology exist, but mass application still seems far off. Several factors are needed to make the technology vision a reality: component technologies need to mature, understanding of the architecture and the needs of the envisioned technology needs to improve, and standardization needs to provide the specifications for the reference points which define how multiple systems can interact.

We believe this evolution is needed for Digital Twin technologies, especially when we consider Digital Twins not only as an in-house simulation technology (the first field where Digital Twins have been successfully used), but rather as being on a path towards a new element in worldwide communication networks, probably the computing foundation for the Metaverse. Those Digital Twins are facilitated by 5G, 6G and future networks such as those specified by IOWN. The Digital Twin, as the concept for merging the physical world and the digital world, is also a cornerstone of "Cyber-Physical Systems", the European "Industry 4.0" and the Japanese "Society 5.0" concepts. Lastly, Europe has adopted the "Green Deal", the big political vision to become carbon neutral by 2050. One instrument to achieve this is called the "Green Twin Transition", indicating the high expectation for Digital Twin technologies to support the Green Deal.

In this chapter, we are focusing on the core architecture of Digital Twins, Digital Twin Ecosystems and Digital Twin Computing. Readers will benefit from the chapter through a tutorial-style introduction to Digital Twin technologies. They will become familiar with a set of terms typically used together with Digital Twins, as well as with architectures, protocols and APIs.
1 About Architectures and Why They Are Important

Large-scale ICT systems – such as the one we are envisioning here – are systems in which many independently designed components work together. This needs general principles (an architecture) describing how the interworking works in general, which interfaces and protocols are needed, and what non-functional properties (e.g. security, management, performance …) need to be fulfilled. The purpose of a system architecture is to define these aspects and build a common understanding. For the envisioned Digital Twin system, the architecture needs to answer the following questions:

• System Architecture: How is the total system structured and which sub-systems exist?
• Functional Architecture: Which functional building blocks exist and how are they working together?
• Component Architecture: How are Digital Twins designed and what are the essential parts of a Digital Twin?
• Interaction Architecture: How are Digital Twins communicating with their counterpart, the real asset? How are Digital Twins communicating with each other? How are they building a Digital Twin ecosystem? In case of distributed deployments, the architecture also explains the distribution of the logical components to the respective execution environments, e.g. in multiple data centers or into an edge cloud.
• Deployment Architecture: What infrastructure needs to be available to execute Digital Twins?
• Lifecycle Architecture: What is the lifecycle of a Digital Twin?

In this chapter, we will explain the basic principles of this architecture, give examples, and develop a reference architecture for a Digital Twin system.
2 Brief History of Digital Twin

The digital twin concept was first introduced in 2003 for product lifecycle management [1]. Since then, digital twins have been widely studied, from NASA's re-definition of digital twins for an aerospace use case in 2012 [2] to Grieves' white paper on virtual factory replication in 2014 [1]. Grieves provided insight into using a digital twin in a manufacturing process and proposed the Digital Twin Prototype (DTP) for product design and the Digital Twin Instance (DTI) for reflecting the actual product state in the corresponding digital twin [3]. A third element is the Digital Twin Aggregate (DTA) that combines multiple DTIs and allows Digital Twin Computing. Saddik expanded the scope of digital twins to cover multimedia and focused on data transmission between the physical and virtual worlds [4]. Recently, digital twins have been studied in manufacturing, aviation, and healthcare [5–7].

Industries have implemented many types of digital twins, especially in the manufacturing, construction, and healthcare sectors. The following gives some examples. Siemens applies the concept of digital twins to various industries including logistics, healthcare, and construction. They transferred the idea of digital twins, grown in product development, to their logistics network, using an intelligent simulation and consulting model that runs through possible scenarios for the supply chain from start to finish, illustrating all resulting consequences on the basis of real corporate data [8]. They also created a digital twin of a hospital facility to identify ways to enhance and streamline processes, improve patient experience, lower operating costs, and increase the value of care [9]. In other cases, they utilize Building Information Models to create digital twins of infrastructures that ensure people's safety, provide real-time information, guidance and productivity, and not only manage greenhouse gas emissions but also generate sustainable energy [10].

Microsoft joined this market with a unique approach. They also started their entry into the digital twin market from manufacturing and factory automation, but
combined their digital twin offering with their existing line of services, including Microsoft Azure and Microsoft Hololens [11]. These two services are strong differentiators, as the former is one of the leading cloud services commonly used worldwide, and the latter is a cutting-edge augmented reality (AR) device to visualize digital twins overlaid onto the real world. Gradually, their application focus has expanded to smart building and space management by using the newly introduced Azure Digital Twins [12]. To grow their technology in an open community, they also made public the specifications of the Digital Twin Definition Language (DTDL) so that it can become a common language to define various digital twins and their relationships [12].

In 2015, Dassault Systèmes started working to develop Virtual Singapore, a realistic and integrated three-dimensional (3D) model with semantics and attributes in the virtual space [13]. They constructed Virtual Singapore in three phases: Virtualize (3D modelling of land, buildings, and others), Visualize (integration of additional data and making them VR/AR), and Venturise (open architecture for modelling and simulation) [14]. Their approach is the first to create a static city digital twin as a foundation of a virtual world and then add other necessary information on top of the foundation.

Since 2011, the European Future Internet Public-Private Partnership FIWARE has been working on a platform to exchange information about real world entities and their context. The core element of the platform is the Context Broker, a component to find and exchange the information about real world entities. Since 2018, the old Open Mobile Alliance (OMA) standard NGSI has been enhanced with modern concepts such as semantic information (ontologies), relationships (to enable Open Linked Data), a modern information model (using JSON-LD), and revised APIs (e.g. extended to spatial-temporal queries, for efficient data delivery using paging, for federation, and many more). The new standard NGSI-LD is defined by the ETSI Industry Specification Group on "Context Information Management" (ETSI ISG CIM). Since 2020, the group has been extending the specification to accommodate the needs of Digital Twins. The target is to build a worldwide interoperable system for building and executing Digital Twins as well as to enable large-scale Digital Twin Computing. NGSI-LD has been adopted by the European Commission (as part of the Connecting Europe Facilities, CEF), in India (as cornerstone of the Indian Urban Data Exchange, IUDX), in Korea (as part of the Strategic National project for Smart Cities called CityHub) and in Japan (as part of the SIP project for a Smart City Reference Architecture following the guidelines of the Society 5.0 standards and as a foundation of the Supercity project).

To summarize, we have explained the evolution from Digital Twins used in building products (Digital Twin Prototypes), to using Digital Twins for system simulation (Digital Twin Simulations), to Digital Twins as part of live systems (Digital Twin Instances). With this evolution come changes in the underlying system architecture. Furthermore, the application areas of Digital Twins have expanded from product information systems to building information systems to large-scale systems such as factories and smart cities. The future is to enable a worldwide Digital Twin
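To give a flavour of the NGSI-LD information model just mentioned (entities carrying Properties and Relationships, serialized as JSON-LD), the following Python snippet builds an illustrative entity for a machine's Digital Twin; the entity type, attribute names, and identifiers are invented examples, and the ETSI ISG CIM specifications remain the normative reference.

```python
# Rough illustration of an NGSI-LD style entity for a machine's Digital Twin.
# Entity type, attribute names, identifiers, and the context URL are illustrative
# assumptions; consult the ETSI ISG CIM specifications for the normative model.
import json

machine_twin = {
    "id": "urn:ngsi-ld:Machine:press-001",
    "type": "Machine",
    "temperature": {                      # a Property: value observed on the real asset
        "type": "Property",
        "value": 72.5,
        "unitCode": "CEL",
        "observedAt": "2023-01-15T10:30:00Z",
    },
    "locatedIn": {                        # a Relationship to another entity (linked data)
        "type": "Relationship",
        "object": "urn:ngsi-ld:Building:plant-7",
    },
    "@context": [
        "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"
    ],
}

# A context broker would typically accept such an entity via its HTTP API.
print(json.dumps(machine_twin, indent=2))
```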
framework as a basis for the European Digital Common Market, or the Japanese Society 5.0.
3 Requirements for a Digital Twin Architecture

The Digital Twin architecture developed in this chapter will consist of so-called Black Box models – used for giving an overview of the Digital Twin architecture – and White Box models – focusing on specific aspects of the architecture and giving technical details. The Black Box models contain the System Architecture, the Functional Architecture, and the Deployment Architecture. The White Box models contain the Component Architecture, the Deployment Architecture, the Distribution Architecture, and the Lifecycle Models.

The Digital Twin architecture will cover different types of Digital Twins. It will be incrementally developed from standalone Digital Twin prototypes used for designing products to a worldwide ecosystem of communicating Digital Twins. As a further requirement for the Digital Twin architecture, we will identify potential APIs and respective standards to build the Digital Twins.
4 Evolution of Digital Twin Architectures with Examples

This chapter discusses the evolution of Digital Twin architectures and introduces the respective Black Box models. We will discuss examples and their relationship to the Black Box architecture. The first sub-chapter will define some core aspects of Digital Twins; the following sub-chapters will look at specific aspects (white box).
4.1 Principle Architecture of Digital Twins: Data, Model and Services

This section explains the basic principles of Digital Twins. It will introduce the core components that make up a Digital Twin. The main purpose of the section is to lay the foundation for the architecture evolution described in the next sections.

Digital Twin/Real Asset Digital Twins are digital representations of a real asset, typically a machine, a building, or another real-world object. Increasingly, Digital Twins of a virtual real asset – e.g. an industrial process for which no real hardware is available – are being used as well.
Fig. 1 Real Asset and Digital Twin
Functional Architecture – Reference Points As Fig. 1 shows, the Real Asset needs to talk to the Digital Twin and vice versa. This exchange is realized with different APIs and protocols. In this abstract description of Digital Twins, we call this the reference point RP1 [Cyber-Physical System Interface]. Some architectures separate between the data sent from the real asset and the potential control/actuation commands sent to the real asset. In our architecture, both services are part of RP1.

Important Facets of Digital Twins Following the definition of the "Industrial Internet Consortium (IIC)" [15], a Digital Twin has three major aspects:

• Data – the data that represent the state of the real asset, as well as intermediate state of the Digital Twin itself. Among the data stored in the Digital Twin are: historical data, real-time data, design documents, usage data, transactional data, context data, relations to other Digital Twins, situation data, and more.
• Services – the services offered by the Digital Twin, either to the real asset, to other Digital Twins, or to applications using the Digital Twin. A service interface (RP2) makes the services of the Digital Twin accessible. A Digital Twin can support multiple service interfaces.
• Models – models associated with the Digital Twin. Those models can be 3D models, visualisation models, AI/ML models, or physical models used in services or simulations. Typically, service implementations make use of one or more models to execute a specific business logic or Digital Twin function.

Using the combination of these three elements, Digital Twins can capture the current state of the real asset, diagnose its condition, and then simulate and optimize its behaviour based on the stored models.

Remark There are close relationships and sometimes overlaps between the three elements. For example, a model has internal data elements that it needs to execute the model functions. In a simulation, the steps and intermediate data of each step of a simulation might be kept in the model. Also, a service might have some business logic and might access the data and the model parts of the Digital Twin.

Reference Points In terms of interfaces and APIs, Fig. 2 shows RP2, the interface that makes Digital Twin services available to the outside world. It also shows that RP1 is split into RP1.1 for data exchange and RP1.2 for services exchange. Whether a real implementation uses two different protocols for RP1.1 and RP1.2 or a unified protocol is a matter of the concrete implementation. The Reference Architecture explains that there might be two kinds of service interfaces, one towards the Real Asset and one towards other Digital Twins and applications.
Fig. 2 Basic Digital Twin Model (following IIC Definitions)
Note on Internal Reference Points This description focuses on the externally visible properties of the Digital Twin. For an implementation architecture, there will be reference points between the services, data and model parts of the Digital Twin. Furthermore, most existing Digital Twin systems distinguish between discrete (sometimes called basic or atomic) Digital Twins and composite Digital Twins. Composite Digital Twins are built using RP2. The composition relationship can have different properties reflecting the real-world composition. The IIC defines hierarchical, associational, and peer-to-peer relationships.

Note on 3D Models 3D descriptions of a real asset are an important part of a Digital Twin. Simple 3D descriptions are pure data models – therefore belonging to the Data section of the Digital Twin. Advanced 3D models come with computation models that can use the 3D descriptions for spatial processing – either to simply visualize the object or to do spatial computations like moving objects in a 3D simulation and checking object collisions. Such 3D models will have a specific part in the Data, Service, and Model sections of a Digital Twin.
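As an informal illustration of the data/services/models split and an RP2-style service interface, the following minimal Python sketch may help; all class, method, and service names are invented for this illustration and do not correspond to any standardized Digital Twin API.

```python
# Minimal sketch of the data / services / models split with an RP2-style service
# interface. Names are illustrative assumptions, not a standard API.
from typing import Any, Callable, Dict

class DigitalTwin:
    def __init__(self, twin_id: str):
        self.twin_id = twin_id
        self.data: Dict[str, Any] = {}                     # historical, real-time, design data, ...
        self.models: Dict[str, Callable[..., Any]] = {}    # 3D, AI/ML or physical models
        self.services: Dict[str, Callable[..., Any]] = {}  # business logic exposed via RP2

    # RP1.1: the real asset (or an IoT gateway) pushes state updates
    def update_from_asset(self, observation: Dict[str, Any]) -> None:
        self.data.update(observation)

    # RP2: other Digital Twins and applications invoke services by name
    def invoke(self, service_name: str, **kwargs: Any) -> Any:
        return self.services[service_name](self, **kwargs)

# Example: a service that combines a (trivial) model with current data.
def remaining_life(dt: DigitalTwin, usage_hours: float) -> float:
    return dt.models["wear_model"](dt.data.get("load", 0.0), usage_hours)

pump = DigitalTwin("urn:example:pump-42")
pump.models["wear_model"] = lambda load, hours: max(0.0, 10_000 - load * hours)
pump.services["remaining_life"] = remaining_life
pump.update_from_asset({"load": 1.5})
print(pump.invoke("remaining_life", usage_hours=2_000))   # -> 7000.0
```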
4.2 Digital Twin Extensions to Design Tools as Part of Product Lifecycle Management (PLM)

Digital Twins emerge from the evolution of the design tools for products and the complete Product Lifecycle Management (PLM) [16]. Following the terms introduced by Grieves, the first use of Digital Twins in manufacturing is as Digital Twin
Prototypes (DTP) in the design phase of the product. The DTP serves as the collection of all data related to the product design and management. Initially, these are all design and planning documents, e.g. CAD drawings. The definition follows generally accepted terms, though the exact definition is taken from [17].

Definition: Digital Twin Prototype This type of Digital Twin describes the prototypical physical artefact. It contains the informational sets necessary to describe and produce a physical version that duplicates or twins the virtual version.
When a real asset is created from the DTP, it will be paired with a Digital Twin Instance (DTI). The DTI captures all digital information about the real asset, might have a link to the DTP (or a copy of the respective documents), might receive status information from the real asset (using e.g. an IoT protocol), and can provide additional services for the real asset.

Definition: Digital Twin Instance This type of Digital Twin describes a specific corresponding physical product that an individual Digital Twin remains linked to throughout the life of that physical product.
As already said, Digital Twins can be atomic or can be composed of other Digital Twins. A composed Digital Twin is called a Digital Twin Aggregate (DTA).

Definition: Digital Twin Aggregate This type of Digital Twin is the aggregation of all the contained DTIs. Unlike the DTI, the DTA may not be an independent data structure. It may be a computing construct that has access to all DTIs and queries them either ad hoc or proactively.
These three kinds of Digital Twins run in a Digital Twin Environment (DTE), basically a runtime that provides basic services to the Digital Twins. Different Digital Twin implementations provide different kinds of DTE. For example, the Digital Twin can be executed in a cloud hosting environment, on dedicated hardware close to the real asset, or embedded in the real asset. For the future, we expect an open Digital Twin Environment in which different runtimes can be combined, following common world rules. More on this later in the section on Digital Twin Computing. The following definition is not from [17] but was developed by the authors for this book:
Definition: Digital Twin Environment (DTE) The DTE provides a runtime for Digital Twins. It provides basic services like creating DTs, starting their execution, enabling search and discovery of other Digital Twins, and more. A special service in the DTE is the "Digital World Model", which describes the environment in which the Digital Twins are executing.

After explaining the basic concepts of Digital Twins, as well as some further terms, as part of the explanation of Digital Twins for Product Lifecycle Management (PLM) [18], we are now moving to the evolution of Digital Twins in various scenarios.
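To make the relationship between the three twin types more concrete before moving on, here is a deliberately simplified Python sketch; the class names, fields, and the averaging query are illustrative assumptions, not definitions from this chapter or from [17].

```python
# Sketch of the DTP / DTI / DTA relationship described above; invented names,
# not a reference implementation.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class DigitalTwinPrototype:            # DTP: design-time information sets
    design_documents: Dict[str, str] = field(default_factory=dict)

@dataclass
class DigitalTwinInstance:             # DTI: paired with one physical product
    serial_number: str
    prototype: DigitalTwinPrototype
    asset_address: Optional[str] = None           # e.g. network address of the real asset
    state: Dict[str, float] = field(default_factory=dict)

class DigitalTwinAggregate:            # DTA: a computing construct over all DTIs
    def __init__(self, instances: List[DigitalTwinInstance]):
        self.instances = instances

    def query_average(self, key: str) -> float:
        values = [i.state[key] for i in self.instances if key in i.state]
        return sum(values) / len(values) if values else float("nan")

dtp = DigitalTwinPrototype({"cad": "pump_v3.step"})
fleet = [DigitalTwinInstance(f"SN-{n}", dtp, state={"temp": 60 + n}) for n in range(3)]
print(DigitalTwinAggregate(fleet).query_average("temp"))   # -> 61.0
```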
4.3 Local Applications for Simulations and Visualization

Concept Digital Twins started from the digitalization of the product design process. PLM systems manage the data and digital artefacts needed for building the real asset in one place (➔ DTP). They created a toolchain of tools working together for designing the product. W. Kuehn [19] gives a good introduction into the use of the product data along the "Digital Thread" in production. As part of the digitalization, toolchains enabled the use of the product data for simulations. Typically, the simulation verified product characteristics like elasticity or durability. For example, from the 3D model of a car, simulation models were able to do crash tests (e.g. using the finite element method), simulate the energy consumption, or analyze the heat dissipation in the real asset.

Example Havard et al. [20] describe an environment in which Digital Twin and Virtual Reality work together. The Digital Twin provides the needed simulation, physical constraint and behavior modeling functions, while the Virtual Reality environment enables real-time interaction with the simulation as well as a presentation in real time (Fig. 3).

In the described system, the Digital Twin stores the models, e.g. the 3D model and the behaviour model. It provides simulation services and has internal constraints based on the material used and further physical know-how needed by the simulation ("Digital World Model", DWM). Furthermore, the Digital Twin model connects to the sensor information from the real asset or a respective "Functional Mockup Unit (FMU)" [21] in case the real asset is still in the planning phase. The Digital Twin communicates through a Data Server with the VR environment. In the VR environment, the 3D models are visualized, and the interactions from the end user are captured and transformed into manipulations of the Digital Twin model. The interactions with the data server are done using the RP2 reference point. In real implementations this can be databases, message brokers like MQTT or Kafka, as well as dedicated interfaces of the data server.
Fig. 3 Co-simulation workflow between Digital Twin and Virtual Reality Environment (redrawn from [20])
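The following sketch illustrates one possible realization of the data-server exchange using MQTT, one of the broker options mentioned above; the topic names, payload structure, broker address, and the use of the paho-mqtt client (1.x-style API) are assumptions, not details taken from [20].

```python
# Rough sketch: the Digital Twin reacts to VR interactions received via an MQTT
# broker and publishes new behaviour parameters back. Topics, payloads and the
# broker address are invented; a paho-mqtt 1.x style API is assumed.
import json
import paho.mqtt.client as mqtt

def run_simulation(interaction: dict) -> dict:
    """Placeholder for the Digital Twin's (co-)simulation triggered via RP2."""
    return {"reach_ok": interaction.get("arm_angle", 0) < 120, "cycle_time_s": 4.2}

def on_vr_interaction(client, userdata, message):
    interaction = json.loads(message.payload)
    behaviour = run_simulation(interaction)                 # new simulation run per interaction
    client.publish("dt/behaviour", json.dumps(behaviour))   # results back to the VR side

client = mqtt.Client()
client.on_message = on_vr_interaction
client.connect("localhost", 1883)
client.subscribe("vr/interaction")
client.loop_forever()
```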
Based on this model, every user interaction in the VR system triggers a new simulation run in the Digital Twin (using RP2), resulting in new behaviour parameters for the VR environment. Utilizing this setup, the system helps with "Safety & Robot Behaviour Assessments in VR", with "Ergonomic Assessment in VR", and with other visualisation and simulation tasks.

Principle Architecture The toolchain shows that Digital Twins are built from various models (product data, 3D data, physical behaviour models) using an interworking set of tools. In today's not fully integrated systems, the various tools keep their own data and copy them to the Digital Twin. In future, highly integrated digital twin systems, the tools might utilize the Digital Twin as the common data repository (Fig. 4). Digital Twins consist of different models (incl. data and functions), each created with different tools. Digital Twin Continuity describes the ability of the
Fig. 4 Toolchain for creating Digital Products
Digital Twin Data Management to use data from earlier stages of the toolchain in other tools, e.g. to re-use the PLM data in simulations. The Digital Twin Prototype combines these different models and makes its services available to tools ranging from databases to visualisation and interactive simulation software.

Reference Points RP2 provides the services to store data into the Digital Twin and send messages between the different tools (via the Digital Twin). Because of the many legacy systems involved in the described process, there are actually many different reference points towards the design tools. We summarize them under "RP Tools". RP Tools is typically out of scope for a Digital Twin Reference Architecture.
4.4 Modular Extensions of Digital Twins An important aspect of Digital Twins is extensibility, meaning the ability to dynamically extend the Digital Twin for new usage scenarios. For example, the original Digital Twin of a factory assembly line was created for simulating the physical behaviour of the line. By adding the needed information on the energy consumption of each machine, the same Digital Twin can compute the energy consumption of the assembly line. For this dynamic extension, each of the three elements of a Digital Twin – the data, the services, and the models – shall be dynamically extendable to enable the new functionality. Creating the Digital Twin architecture in such a way that this modular extension (sometimes called augmentation) can be done is important. Typically, the externally visible RP has to provide functions for discovering the available data, services, and models. Furthermore, the management layer of the Digital Twin needs to have operations for adding the new “aspect” to the Digital Twin.
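A small sketch of what such discovery and extension operations could look like at the external reference point follows; the class and method names are illustrative assumptions, not a standardized API.

from dataclasses import dataclass, field

@dataclass
class Aspect:
    """A dynamically added 'aspect': its data model, services, and models."""
    name: str
    data_model: dict
    services: dict = field(default_factory=dict)   # name -> callable
    models: dict = field(default_factory=dict)     # name -> model object

@dataclass
class DigitalTwin:
    twin_id: str
    aspects: dict = field(default_factory=dict)

    # discovery function exposed at the externally visible RP
    def discover(self):
        return {name: list(a.services) + list(a.models) for name, a in self.aspects.items()}

    # management-layer operation for adding a new aspect at runtime
    def add_aspect(self, aspect: Aspect):
        self.aspects[aspect.name] = aspect

# Example: extend an assembly-line twin with an energy-consumption aspect
line_twin = DigitalTwin("assembly-line-7")
line_twin.add_aspect(Aspect(
    name="energy",
    data_model={"machine_id": "str", "power_kw": "float"},
    services={"total_consumption": lambda readings: sum(r["power_kw"] for r in readings)},
))
print(line_twin.discover())   # {'energy': ['total_consumption']}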
Digital Twin Management Digital Twin Management comprises service functions that cover the configuration, extension, and transformation of Digital Twins over the course of their life cycle. Especially the Lifecycle Management of Digital Twins is an important aspect of Digital Twin Management. Conceptually, DT Management is a part of RP2. Digital Twin Lifecycle Management Engineers design Digital Twin Prototypes during the design time of the real asset. In that phase of the Digital Twin, the prototype represents the not yet realized concept of the real asset. Design engineers can add features to the Digital Twin in order to understand and simulate the needed functionalities of the real asset. In the data section, engineers can add new data models in order to represent specific aspects of the Digital Twin. For example, in a Digital Twin representing a building or a factory, the building model might contain a logical structuring of the building into floors, rooms, and exits. The 3D model captures the geometry of the building. Services are used to represent dynamic interactions of the building with other Digital Twins, while models are used to represent dynamic aspects of the asset, e.g. its behaviour, predictions or optimizations. At design time, the designer adds new data elements, new services, as well as new models to the Digital Twin Prototype. Digital Twin Prototypes can already be used to study and verify the properties of the real asset that is to be built. See [22] for further discussion on digital twin continuity and lifecycle. Digital Twin Continuity – Virtual Commissioning A first important transformation happens to Digital Twin Prototypes when the Digital Twin is virtually commissioned into a Digital Twin Ecosystem. This creates a Digital Twin Instance without a Real Asset (Virtual Digital Twin). As part of this commissioning, the parameters of the Digital Twin are defined and Digital Twin Instance information is added (Digital Twin Instance Creation). Respective Lifecycle Management functions take the prototype elements and extend them with the concrete elements of a Digital Twin and its connection to the real assets. Typically, the Digital Twin Prototype elements serve as common design elements for the Digital Twin Instances. Digital Twin Continuity – Asset Realization Once the Digital Twin Prototype is mature enough and has been tested with virtual Digital Twins, the production of the real asset can start. An important process is to bind the Digital Twin Instance to the real assets (Bounded Digital Twin). Respective information is provided to the Digital Twin, e.g. the network address of the real assets. Furthermore, the specific configuration of the real assets is stored in the Digital Twin Instance. Digital Twin Continuity – Transformations In the lifetime of a Digital Twin Instance, the original set of elements (data, services, models) can be dynamically replaced or new elements added. In this way, the Digital Twin Prototype/Instance adapts to the lifecycle of the real asset. The respective Lifecycle Management functions enable CREATION, DELETION, and REPLACEMENT of the Digital Twin elements. Furthermore, the Lifecycle Management enables the reading of
configuration options (GET), as well as dynamic changes (PUT). Examples are the parameters influencing ongoing simulations of the Digital Twin or its AI models. Such changes affect not only the properties of the Digital Twin, but also the relationships of the Digital Twin to other objects. Furthermore, in the case of Digital Twin Aggregates, the Lifecycle Management enables changes to the aggregate. Digital Twin Continuity – Ownership Change Ownership changes happen when a Digital Twin (with its real asset) is sold to a new owner. As part of that process, the digital ownership needs to be transferred to the new owner, potentially together with a move of the Digital Twin (and its digital resources) from one execution environment to another. A special aspect of the ownership change is whether the link to the Digital Twin Prototype (owned by the manufacturing company) is completely cut, or whether specific information, e.g. the design parameters of the real asset, is kept. Another specific aspect of the handover is which parts of the Digital Twin data model are kept, deleted, or encrypted. Furthermore, as services and models might require software licenses, the needed license structure might be re-evaluated and adjusted to the new owner as part of the handover. Finally, the new owner might assign the Digital Twin to a new Digital Twin Ecosystem. Digital Twin Continuity – End-of-Life At the end of the life cycle, the Digital Twin is moved into an abandoned state (Decommissioning). Historical data of the Digital Twin is either kept or destroyed depending on the needs of the owner. Relationships between the Digital Twin Instance and its prototype that have been kept can be used to inform the prototype about the end-of-life of the Digital Twin, e.g. for quality tracking.
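The lifecycle operations and continuity stages described above can be summarized in a small interface sketch; the states, method names and element types below are illustrative assumptions that mirror this section, not a normative API.

from enum import Enum

class TwinState(Enum):
    PROTOTYPE = "prototype"          # design time
    VIRTUAL_INSTANCE = "virtual"     # virtually commissioned, no real asset yet
    BOUNDED = "bounded"              # bound to the real asset
    ABANDONED = "abandoned"          # decommissioned / end-of-life

class LifecycleManager:
    """Illustrative lifecycle management operations on Digital Twin elements."""

    def __init__(self, twin):
        self.twin = twin

    # element-level operations (kind is "data", "services" or "models")
    def create_element(self, kind, name, element):  self.twin[kind][name] = element
    def delete_element(self, kind, name):           del self.twin[kind][name]
    def replace_element(self, kind, name, element): self.twin[kind][name] = element

    # configuration options of running simulations or AI models
    def get_config(self, name):         return self.twin["config"][name]      # GET
    def put_config(self, name, value):  self.twin["config"][name] = value     # PUT

    # continuity transitions
    def commission_virtual(self):  self.twin["state"] = TwinState.VIRTUAL_INSTANCE
    def bind_to_asset(self, addr): self.twin.update(state=TwinState.BOUNDED, asset_address=addr)
    def decommission(self):        self.twin["state"] = TwinState.ABANDONED

twin = {"data": {}, "services": {}, "models": {}, "config": {}, "state": TwinState.PROTOTYPE}
lcm = LifecycleManager(twin)
lcm.commission_virtual()
lcm.bind_to_asset("mqtt://10.0.0.17")   # asset realization: bind to the real asset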
4.5 Cloud-Based Device Digital Twins Concept Digital Twins (DTw) will be realized on top of a server or cloud infrastructure. The DTw is connected to its real asset using an IoT/CPS communication service (RP1) and to applications using a typical application API, e.g. REST (RP2). Example The Eclipse project Ditto provides a software component for creating Device Digital Twins. A major goal of Ditto is to create a device digital twin independently of the IoT protocol used by the devices. Ditto uses various sub-modules for communication with the real devices, e.g. the Eclipse module Hono (MQTT, HTTP) or any available MQTT broker (e.g. Mosquitto). Furthermore, Ditto provides the Device Digital Twin model to different backend systems, e.g. using HTTP, WebSocket, AMQP, Kafka, and several others. The general architecture can be found in Fig. 5.
[Figure 5 shows IoT devices connected via device connectivity protocols (MQTT, HTTP, AMQP 1.0) to the Device Twins, which in turn expose the twin model to web apps, mobile apps and IoT solutions (e.g. event processing) over HTTP, WebSocket, AMQP 0.9.1/1.0 brokers and Apache Kafka 2.x.]
Fig. 5 Device Digital Twins using Eclipse Ditto
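To make the northbound side of Fig. 5 concrete, the following sketch reads and updates a device twin through Ditto's HTTP things API (API version 2). The host, credentials, thing ID and feature name are illustrative assumptions and should be checked against the concrete deployment.

import requests

BASE = "http://ditto.local:8080/api/2/things/org.example:sensor-42"
AUTH = ("ditto", "ditto")    # assumed demo credentials

twin = requests.get(BASE, auth=AUTH).json()                          # RP2: read the device twin
print(twin.get("features", {}).get("environment", {}).get("properties"))

requests.put(f"{BASE}/features/environment/properties/targetTemperature",
             json=22.5, auth=AUTH)                                   # desired state towards the device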
Ditto provides:
• Devices-as-a-Service – an abstract API to access the device twins as well as routing functions which deliver device information from the device to the twin and commands from the twin to the respective devices,
• State Management, including state synchronization, prediction of state, and generation of state-change events, as well as
• very basic support for Digital Twin Environment services, e.g. search on Digital Twin meta-data and state information.
Principle Architecture In the Digital Twin Architecture, the CPS Interface (RP1) connects the DTw Data Management to the IoT devices in the operational environment. The CPS Interface is an abstracted API that allows access to different IoT systems. It is a realization of the RP1 reference point (Fig. 6). On top of the data management, we have a set of models – each supported by a respective tool – that jointly deliver the service of the Digital Twin. Reference Points Multiple tools can work together with the help of the Digital Twin Data Management using RP2. This results in an indirect coupling of the various digital twin tools through the data management. In addition, tools can work directly with other tools using RP3 [“Tool Chain Interface”]. The main reason for a direct interface between tools is the predictability of real-time performance, so that changes in one tool can directly trigger changes in the next tool.
[Figure 6 shows the IT environment with Tool_1/Model_1, Tool_2/Model_2 and Tool_3/Model_2 coupled via RP2 to the Digital Twin Data Management, directly to each other via RP3, and to external design tools via the RP Tools, while RP1 connects the data management through the CPS Interface to the IoT devices in the operational environment.]
Fig. 6 Digital Twin Cyber-Physical Interface using IoT
4.6 Interaction System: Towards Communicating Digital Twins Concept Digital Twin Computing was published by NTT Corporation in 2019 and is a vision that entails the use of high-precision digital representations of humans and objects, called “digital twins”, to create diverse worlds in cyberspace that exceed the limitations of the real world [4, 23]. Digital Twin Computing includes three features: interoperability among various types of digital twins, simulation capability in cyberspace, and digital twin operations to exchange, fuse, and replicate digital twin data [24]. Example Architecture of Digital Twin Computing: Digital Twin Computing consists of a four-layered architecture (Fig. 7). The first layer is the Cyber/Physical Interaction Layer, including a sensing function for real-world things and humans to generate digital twins and a feedback function that returns the results of trials in a virtual society to the real world. In our model, this is RP1. The second layer is the Digital Twin Layer, storing collected data and models. This layer provides functions to create, search for, update, and delete a digital twin on the basis of its characteristics. Digital twins stored in the Digital Twin Layer are up-to-date representations based on the actual status of the corresponding things and humans in the real space. The third layer is the Digital World Presentation Layer. A virtual environment is built in this layer by fusing, exchanging and replicating digital twins stored in the Digital Twin Layer. A virtual environment is used for specific purposes, such as a traffic environment, an urban space, or a group of humans, to simulate interactions of digital twins in a required time frame, location, and condition (RP2).
[Figure 7 lists example applications of the top layer (simulations at Earth and outer-space scales, discovery and resolution of latent urban issues, new economic activity through the creation of sister cities, prediction and suppression of disease spread, multifaceted personal decision-making, …) above the four layers: the Application Layer (applications executed using the Digital World Presentation Layer), the Digital World Presentation Layer (digital twin usage framework for traffic environments, urban spaces, groups), the Digital Twin Layer (generating and maintaining digital twins of humans and things), and the Cyber/Physical Interaction Layer, which connects real space (things and humans) with cyberspace.]
Fig. 7 Four-layered architecture of the Digital Twin Computing
The final layer is the application layer. Various types of applications are implemented and executed in this layer, using the Digital World Presentation Layer. Principle Architecture Digital Twin Computing tackles the problem that, in the future, multiple Digital Twins need to collaborate to jointly provide the needed simulation or optimization service. It can be expected that there will be Digital Twins of machines, everyday objects, as well as humans. Each of them will provide its own CPS interface, need to find other Digital Twins, and start interacting with them (Fig. 8). Note: The described system is distributed, as it accesses many different real-world objects, potentially with different protocols. Furthermore, it provides the Digital Twin services through a service API that can be accessed by multiple client applications. The core element of the system is still a centralized engine in which the various Digital Twins are executing.
4.7 Multi-Actor Digital Twin Concept As explained in the introduction, we envision a world in which Digital Twins will be available for basically all man-made objects as well as for humans. In that world, many different stakeholders will operate their Digital Twins (multi-actor).
[Figure 8 shows two Digital Twin Environments (A and B), each hosting Digital Twins composed of tools (Tool_1–Tool_3) and models (Model_1, Model_2) behind service interfaces (RP2), connected through a common Digital Twin Data Management & Brokering layer and, via RP1 CPS interfaces, to IoT devices, everyday objects and humans in the operational environment.]
Fig. 8 Digital Twin Computing
Today, Digital Twins are typically independent and not connected. As real assets can be sold, the respective Digital Twin also needs to be transferred to the new owner. For the Digital Twin lifecycle, Digital Twins might therefore need to be exchanged between actors. Note: The aspect of having Digital Twins communicate across different actors will be handled in the next section. Example The concept “Industry 4.0” describes how to organize digital manufacturing using connected factories and a respective platform. That platform follows the “RAMI” reference model. As part of RAMI, the Asset Administration Shell (AAS) describes how to represent the assets of a factory (e.g. the machines on the assembly line) with a digital artefact. The AAS defines how the digital part of an industrial asset is realized. A specific use case for the AAS is the exchange of an AAS between two partners.1 The use case is depicted in Fig. 9: it shows that there are Digital Twin Instances (D1, called “Instance” in the AAS specs) and Digital Twin Prototypes (B1, A1, called “Type” in the AAS specs), and illustrates how these kinds of Digital Twins are transferred between partners. Figure 10 illustrates the concept of the AAS. The AAS specs define in detail how IDs are created, define security mechanisms like access control, and specify the meta-model of the AAS.
1 Download “Details of the AAS – Part 1” at https://www.plattform-i40.de/IP/Redaktion/DE/Downloads/Publikation/Details_of_the_Asset_Administration_Shell_Part1_V3.html
[Figure 9 shows a supplier and an integrator each publishing Asset Administration Shells to an I4.0 platform repository: product types (A1, B1) and a composite type machine (A4, B4, C4) on the supplier side are delivered as product instances (D1) and a composite instance machine (D4) to the integrator.]
Fig. 9 Exchange of an Asset Administration Shell
[Figure 10 shows an I4.0 component: an Administration Shell with a unique ID containing properties, complex data and documents (each with IDs) that describe the asset itself, e.g. an electrical axis system with its own unique ID. Pictures in the original figure are used according to a Creative Commons license.]
Fig. 10 Elements of the Asset Administration Shell
The meta-model describes the principal structure of an AAS, e.g. it introduces properties and specifies which data types are allowed for properties. Furthermore, it defines a structuring principle using submodels and introduces references, meta-data, and more. For the exchange of AAS, the specs introduce serialisation formats in XML and JSON and a packaging standard called AASX (AAS eXchange). Also needed for the exchange of Digital Twins are filters for export and import operations. Principle Architecture In our principle architecture, Digital Twins can now be executed in different Digital Twin Environments. Transfer between the Digital Twin Environments can be done by serialising the Digital Twin system state, transferring the state and code to the new environment, and re-starting the Digital Twin in the new environment. Reference Points In our reference model, the exchange of Digital Twin models is done via a new reference point, RP4 “Digital Twin Transfer”.
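A hedged sketch of such an RP4 transfer is shown below; the package layout loosely mimics the idea of an exchange package such as AASX, but the field names and the export filter are illustrative assumptions.

import json, zipfile

def export_twin(twin, path, keep_keys=("id", "data", "models", "config")):
    """Serialize selected twin elements into a transfer package (export filter applied)."""
    payload = {k: twin[k] for k in keep_keys if k in twin}
    with zipfile.ZipFile(path, "w") as pkg:
        pkg.writestr("twin.json", json.dumps(payload, indent=2))

def import_twin(path):
    """Re-create the twin state in the target Digital Twin Environment."""
    with zipfile.ZipFile(path) as pkg:
        return json.loads(pkg.read("twin.json"))

# Transfer from environment A to environment B (RP4 "Digital Twin Transfer")
export_twin({"id": "pump-11", "data": {"pressure": 4.2}, "models": {}, "config": {},
             "credentials": "do-not-transfer"}, "pump-11.dtpkg")
restored = import_twin("pump-11.dtpkg")   # note: 'credentials' was filtered out on export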
4.8 Multi-Actor Distributed Digital Twins Concept Multi-Actor Distributed Digital Twins add two new aspects to the Digital Twin. First, the Digital Twin Environments in which the Digital Twins are executed become distributed execution systems, e.g. utilizing resources from many cloud nodes or an Edge-Cloud computing environment. Consequently, the Digital Twin and its various models can also be distributed among the various execution places. Second, it enables communication between Digital Twins executed in different Digital Twin Environments. Up to now, Digital Twins were able to communicate only within their own Digital Twin Environment. By federating multiple Digital Twin Environments, Digital Twins from different stakeholders can collaborate. In the long run, we envision that – similar to Web servers – everybody can set up their own Digital Twin Environment and host Digital Twins. Different from Web servers, Digital Twins are strongly connected to their real assets, giving them a sense of orientation and location in the real world. Thus, finding the needed Digital Twins in a multitude of execution environments requires sophisticated interfaces for spatial queries and geometric matching. Example The ETSI standard for Context Information Management (CIM) has standardized an API and a data model, called NGSI-LD, for representing real-world entities, their attributes and their relationships [25]. The data model is a property graph, a special variant of a semantic knowledge graph. The API defines a REST-based interface to a so-called Context Broker. The broker dispatches incoming data to interested parties, stores selected information in a history database, and enables applications to query entities for their data. With that functionality, it realizes a standard for the data layer of a Digital Twin (Fig. 11).
Fig. 11 Federated Broker to realize Multi-Actor Digital Twins
Fig. 12 Broker-based Realisation of a Multi-Actor Digital Twin
NGSI-LD defines the so-called federation mode, in which different brokers collaborate, thus enabling the described Multi-Actor Digital Twin model. NGSI-LD has means for forwarding queries and subscriptions, including sophisticated query types such as type-based queries, geo-queries, and history queries. NGSI-LD has already been used to represent the logical structure of, e.g., water pipeline networks, airports, or EV charging infrastructures. While NGSI-LD is focused on the data part of Digital Twins, a recent working paper of the group has shown how the established federation infrastructure can be re-used to realize distributed service invocations on federated Digital Twins [26] (Fig. 12).
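As a hedged illustration of the NGSI-LD data layer, the following sketch creates an entity at a Context Broker and then issues a geo-scoped query. The broker address and the entity type are illustrative; the endpoint paths and payload structure follow the NGSI-LD API specification [25] but should be verified against the deployed broker.

import requests

BROKER = "http://broker.example.org:1026"   # assumed Context Broker endpoint
HEADERS = {"Content-Type": "application/ld+json"}

# 1. Create a Digital Twin entity with a property and a geo-location
entity = {
    "id": "urn:ngsi-ld:WaterPump:pump-11",
    "type": "WaterPump",
    "pressure": {"type": "Property", "value": 4.2, "unitCode": "BAR"},
    "location": {"type": "GeoProperty",
                 "value": {"type": "Point", "coordinates": [13.405, 52.52]}},
    "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld",
}
requests.post(f"{BROKER}/ngsi-ld/v1/entities", json=entity, headers=HEADERS)

# 2. Geo-scoped query: all WaterPump twins within 2 km of a given point
params = {
    "type": "WaterPump",
    "georel": "near;maxDistance==2000",
    "geometry": "Point",
    "coordinates": "[13.405,52.52]",
}
nearby = requests.get(f"{BROKER}/ngsi-ld/v1/entities", params=params).json()
print([e["id"] for e in nearby])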
Principle Architecture Basically, the previously monolithic Digital Twin Environment is now broken into several Digital Twin Execution Servers. The Digital Twin itself can be executed on one or many Digital Twin Execution Servers. This means that the internal components of the Digital Twin, for example its models, can be distributed. Furthermore, the various Digital Twin Environments can communicate with each other utilizing a broker model. Sophisticated lookup and discovery operations are needed to find the right Digital Twin in the set of connected brokers. Especially geo-scoped queries need to be supported efficiently, as Digital Twins typically interact with close-by Digital Twins. Reference Points This introduces two new reference points – RP5 for distributed execution of a Digital Twin, and RP6 for communication between different Digital Twin Environments. Both RPs need to be supported by execution platforms providing essential matchmaking, search, and meta-data management services.
4.9 Overlay Digital Twins Concept As already explained, Digital Twins are built from various data sets which are utilized by different models. This composition flexibility enables Digital Twins to represent various aspects of their real twin. For a building, its respective twin needs to be able to represent building information, the contained energy network, as well as the health situation of the people living and working in the building. Though there are multiple data sets, multiple models and multiple services, the Digital Twin was typically owned and managed by a single “owner”. This changes with “Overlay Digital Twins”. Those twins are constructed from different Digital Twins belonging to different owners. The challenge of “Overlay Digital Twins” is the weak linking between the “Contributing Digital Twins” and the “Overlay Digital Twin”. As they belong to different owners, each owner can decide when to participate in an “Overlay” and when to stop the participation. In turn, the “Overlay Digital Twin” must deal with the dynamics of this weak relationship. Example For a Smart City [27], we can construct an “Overlay Digital Twin” which receives information from the Digital Twins of the buildings of the city. In turn, the “Overlay Digital Twin” can also communicate with the “Component Digital Twins” to enforce specific policies. For example, in the case of an earthquake emergency, the City DTw – an overlay Digital Twin – can inform the component DTws about the upcoming event. The components can react to the event by, e.g., ringing the alarm in the building, stopping elevators, and ensuring doors are open. Principle Architecture The principle architecture of an “Overlay Digital Twin” again consists of multiple DTEEs in which the component and the overlay DTws are executing. The DTEEs are connected with a DTEE connection protocol that ensures that requests like discovery, information dissemination, or commands are exchanged between the participating DTws.
Furthermore, the DTEEs deal with, e.g., mobility of the real asset, for example by transferring a DTw from one DTEE to a better-placed DTEE.
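A minimal sketch of how an overlay twin might disseminate such an emergency event to its contributing component twins is given below; the join/leave mechanism and field names are illustrative assumptions that merely model the weak linking, not a defined DTEE connection protocol.

class OverlayTwin:
    """Illustrative overlay twin with weakly linked component twins that can join or leave."""
    def __init__(self):
        self.components = {}                      # twin_id -> callback

    def join(self, twin_id, callback):  self.components[twin_id] = callback
    def leave(self, twin_id):           self.components.pop(twin_id, None)

    def disseminate(self, event):
        for twin_id, callback in list(self.components.items()):
            callback(event)                       # delivered via the DTEE connection protocol

def building_7_handler(event):
    if event["type"] == "earthquake":
        print("building-7: ring alarm, stop elevators, unlock doors")

city = OverlayTwin()
city.join("building-7", building_7_handler)
city.disseminate({"type": "earthquake", "eta_s": 20})
city.leave("building-7")     # the owner can withdraw from the overlay at any time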
5 Common Building Blocks The previous sections have explained the different kinds of Digital Twins and have sketched the architectural changes as the usage of Digital Twins progresses from pure simulation objects to large-scale, always-on, always co-simulating ecosystems. In this section, we would like to explain common building blocks of Digital Twins that can be found in many of the described systems. This section gives another view on the architecture of Digital Twins.
5.1 State Management An important module of a Digital Twin captures its state. The state ranges from its identification attributes and its geometric information to, especially, its dynamic state information as captured by sensors or their virtual representations in simulations. Typically, the state management consists of modules in the data, service and model sections. For example, utilizing an IoT subsystem, the Digital Twin can update the state of its real asset using sensor information or digital interfaces to the real asset.
5.2 Context Management The internal state of the Digital Twin is extended with the captured context in which the real asset and the Digital Twin are operating. Such context information can be captured from IoT systems, from AI modules, as well as from any kind of IT system providing such data. Context information is represented in the form of context properties, as well as in the form of relationships between the Digital Twin and other objects, e.g. other Digital Twins or other digital artefacts (documents, databases, APIs and more). An advanced form of context capturing is to capture a semantic description of the current situation and the various contexts influencing it. In modern systems, dynamic “Data Enrichment Pipelines (DEPs)” can be used to build the semantic dynamic context of the Digital Twin and to reason about it.
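The following sketch shows one way such state, context properties and relationships could be represented together; the attribute names are illustrative, not taken from any of the systems above.

from dataclasses import dataclass, field

@dataclass
class Relationship:
    rel_type: str        # e.g. "locatedIn", "maintainedBy"
    target: str          # id of another twin or digital artefact

@dataclass
class TwinContext:
    twin_id: str
    state: dict = field(default_factory=dict)           # dynamic sensor state
    properties: dict = field(default_factory=dict)      # context properties
    relationships: list = field(default_factory=list)   # links to other objects

pump = TwinContext("pump-11")
pump.state["pressure_bar"] = 4.2                               # from an IoT subsystem
pump.properties["operating_mode"] = "night-shift"              # enriched context
pump.relationships.append(Relationship("locatedIn", "building-7"))
pump.relationships.append(Relationship("documentedBy", "manual.pdf"))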
5.3 History Management The History Management of a Digital Twin provides long-term storage of the state of the Digital Twin as well as a history of the established and removed relationships, the invoked services, and the changes applied to the elements of the Digital Twin during its Lifecycle Management.
5.4 Cognitive Functions When Digital Twin Instances are connected to Real Assets, it is important to be able to capture the state or the context of the Real Asset using cognitive processing like text, image, video or sound analysis. Further cognitive functions include inferencing about the state changes of the environment, interactive exchange with humans, or AI-based predictions.
5.5 Event Processing A wealth of information about the state of the real asset, its context, and its interactions is captured by the Digital Twin. It is important to understand the low-level event stream and to create high-level situation descriptions understandable for humans and for other Digital Twins. Processing such events and creating the needed semantic understanding is the function of Event Processing. Typical elements of event processing are (a small sketch follows the list):
• complex event processing utilizing time-interval-based event conditions
• behaviour model matching – by comparing event streams against simulated behaviour models, the Digital Twin can try to understand the behaviour of the real asset or its environment
• AI-based analysis
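The sketch below shows a tiny complex-event-processing rule over a sliding time window: if three over-pressure readings arrive within ten seconds, a high-level situation event is raised. Thresholds, window length and event fields are illustrative assumptions.

from collections import deque

class OverPressureRule:
    """Raise a high-level situation if N over-pressure events occur within a time window."""
    def __init__(self, threshold=5.0, count=3, window_s=10.0):
        self.threshold, self.count, self.window_s = threshold, count, window_s
        self.hits = deque()                     # timestamps of matching low-level events

    def on_event(self, timestamp, pressure_bar):
        if pressure_bar <= self.threshold:
            return None
        self.hits.append(timestamp)
        while self.hits and timestamp - self.hits[0] > self.window_s:
            self.hits.popleft()                 # drop events outside the sliding window
        if len(self.hits) >= self.count:
            return {"situation": "sustained-over-pressure", "since": self.hits[0]}
        return None

rule = OverPressureRule()
for t, p in [(0.0, 5.4), (3.0, 5.6), (7.0, 5.9)]:
    situation = rule.on_event(t, p)
print(situation)    # {'situation': 'sustained-over-pressure', 'since': 0.0}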
5.6 What-If Simulations As Digital Twins are used in planning as well as in real-time optimization tasks, a common task is to provide “What-If Simulations”. The typical API of such What-If Simulations includes (see the sketch after this list): • The creation of the simulation parameters, e.g. which scenario should be simulated. The configuration of the simulation includes attributes, relationships, and behaviour models.
• The execution of the simulation based on available resources and application needs. • The utilization of the simulation results for visualisation and decision making.
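A hedged sketch of such a what-if API is shown below; the scenario fields and the trivial “solver” are placeholders for whatever simulation the concrete Digital Twin provides.

import uuid

class WhatIfSimulation:
    """Illustrative three-step what-if API: configure, execute, use the results."""
    def __init__(self):
        self.scenarios = {}

    def create_scenario(self, attributes, relationships, behaviour_model):
        scenario_id = str(uuid.uuid4())
        self.scenarios[scenario_id] = {
            "attributes": attributes, "relationships": relationships,
            "behaviour_model": behaviour_model,
        }
        return scenario_id

    def execute(self, scenario_id):
        s = self.scenarios[scenario_id]
        # placeholder solver: real twins would run FEM, discrete-event or AI models here
        throughput = s["attributes"]["machines"] * s["attributes"]["rate_per_machine"]
        return {"scenario": scenario_id, "predicted_throughput": throughput}

sim = WhatIfSimulation()
sid = sim.create_scenario({"machines": 4, "rate_per_machine": 120},
                          relationships=["line-3"], behaviour_model="queueing-v1")
result = sim.execute(sid)               # results feed visualisation and decision making
print(result["predicted_throughput"])   # 480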
6 Summary and Outlook The purpose of this chapter was to introduce the principle architecture for Digital Twins and their respective Digital Twin Execution Environments. We explained the evolution from standalone Digital Twin systems to multi-actor systems realizing an overlay of Digital Twins. This evolution is mainly driven by the evolution of the technology supporting the Digital Twin. Computing environments evolve from powerful standalone computing nodes to distributed clouds and edge systems. Networks evolved from low-bit-rate mobile networks to the gigabit networks of the coming decade. At the same time, devices evolve from dumb sensor boards to intelligent processing nodes at the edge. How much importance Digital Twins have gained can easily be seen by looking at the next generations of networks. In the upcoming 6G network generation, Digital Twins will play a dual role. On one hand, 6G promises to deliver the hyper-connected Digital Twins of the future. Those 6G twins are connected to their real-world twin with extremely low-latency communication. But they are also connected to all Digital Twins in their surroundings, meaning they can interact, co-simulate and co-predict the behaviour of the real world. As such, Digital Twins play a major role in realizing the envisioned Metaverse, a concept combining AR/VR, Digital Twins, IoT, and social networks. On the other hand, the Digital Twin will be used as a concept for managing the 6G network itself. The 6G infrastructure will be represented by Digital Twins that can gather, analyse, simulate and predict various operational modes and parameters of the system. Such an infrastructure can operate with a higher degree of autonomic management, as the used management policies can be understood through the Digital Twins. Complementing 6G networks, the IOWN initiative (Innovative Optical and Wireless Networks) is working on an all-optical network infrastructure that allows the transport of high data volumes with ultra-low latencies. IOWN is already exploring how the DCI (Data Communication Infrastructure) can support the permanent exchange of data between Digital Twins. The Digital Twin Computing [23] layer formulates the paradigm that, in the future, Digital Twins will be the main concept to realize services enabling “beyond human perception” as well as so-called “Mixed Reality” applications. The reader of this chapter might think that Digital Twins realized by the architectures described here are already available everywhere. While the authors have tried to describe the mentioned variants with a harmonized architecture and well-aligned concepts, it should be clear that the reality of Digital Twin systems is far more complex than described. Many different systems are available on the market. Many standards organisations compete to create the one Digital Twin standard that
will dominate. Furthermore, we think we have only started to understand how closely Digital Twins can replicate the real twin and how to make good use of such replication capabilities.
References 1. Grieves, M. (2014). Digital twin: Manufacturing excellence through virtual factory replication. White Paper [Online]. Available: http://www.apriso.com/library/Whitepaper_Dr_Grieves_DigitalTwin_ManufacturingExcellence.php 2. Glaessgen, E., & Stargel, D. (2012). The digital twin paradigm for future NASA and U.S. Air Force vehicles [Online]. Available: https://arc.aiaa.org/doi/pdf/10.2514/6.2012-1818 3. Grieves, M., & Vickers, J. (2016). Digital twin: Mitigating unpredictable, undesirable emergent behavior in complex systems. In Trans-disciplinary perspectives on system complexity (pp. 85–114). Springer. 4. Saddik, A. E., et al. (2018, April). Digital twins: The convergence of multimedia technologies. IEEE Multimedia, 25(2), 87–92. 5. Tao, F., Zhang, H., Liu, A., & Nee, A. Y. C. (2019, April). Digital twin in industry: State-of-the-art. IEEE Transactions on Industrial Informatics, 15(4), 2405–2415. https://doi.org/10.1109/TII.2018.2873186 6. Barricelli, B. R., Casiraghi, E., & Fogli, D. (2019). A survey on digital twin: Definitions, characteristics, applications, and design implications. IEEE Access, 7, 167653–167671. https://doi.org/10.1109/ACCESS.2019.2953499 7. General Electric. The digital twin: Compressing time to value for digital industrial companies, p. 10. Available at: https://www.ge.com/digital/sites/default/files/download_assets/The-Digital-Twin_Compressing-Time-to-Value-for-Digital-Industrial-Companies.pdf. Accessed on 2023-01-22. 8. Siemens Digital Logistics. (2020). The digital twin: A new dimension in logistics. Available at: https://resources.sw.siemens.com/en-US/white-paper-the-digital-twin. Accessed on 2023-01-22. 9. Siemens Healthcare. (2019). The value of digital twin technology. White Paper. Available at: https://cdn0.scrvt.com/39b415fb07de4d9656c7b516d8e2d907/1800000007115634/143ec96042c1/Siemens-Healthineers_Whitepaper_Digital-Twin-Technology3_1800000007115634.pdf. Accessed on 2023-01-22. 10. Siemens Switzerland. Digital twin – Driving business value throughout the building life cycle. Available at: https://new.siemens.com/global/en/products/buildings/contact/digital-twin.html. Accessed on 2023-01-22. 11. Microsoft Services. (2017). The promise of a digital twin strategy. Digital Transformation. Available at: https://info.microsoft.com/rs/157-GQE-382/images/Microsoft%27s%20Digital%20Twin%20%27How-To%27%20Whitepaper.pdf. Accessed on 2023-01-22. 12. Microsoft. (2019, November). Exploiting disruptive technologies: Digital twins [Online]. Available: https://midwestacc.com/wp-content/uploads/2019/11/PJJohnson-Technical-Microsoft-Azure-Digital-Twins.pdf 13. Microsoft Azure. Digital Twins Definition Language (DTDL) [Online]. Available: https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md 14. Dassault Systemes. (2017, March). Smart cities in the age of experience [Online]. Available: https://www.slideshare.net/LuciaGarcia71/alexandre-parilusyan 15. Industrial Internet Consortium (IIC). (2020, February). Digital twins for industrial applications. Available at: https://www.iiconsortium.org/pdf/IIC_Digital_Twins_Industrial_Apps_White_Paper_2020-02-18.pdf. Accessed on 2023-01-22.
16. Schleich, B. (2017). Shaping the digital twin for design and production engineering. CIRP Annals, 66(1), 141–144. 17. SPHERE. (2019). Digital twin definitions for buildings. White Paper, Q4 [Online]. Available: https://sphere-project.eu/download/sphere-digital-twin-definitions-for-buildings/ 18. General Electric. (2016). Predix – The platform for the Industrial Internet. Available at: https://www.ge.com/digital/sites/default/files/download_assets/Predix-Platform-for-the-Industrial-Internet-Datasheet.pdf. Accessed on 2023-01-22. 19. Kuehn, W. (2019, January). Simulation in digital enterprises. In Proceedings of the 11th international conference on computer modeling and simulation (pp. 55–59). ACM. https://doi.org/10.1145/3307363.3307370 20. Havard, V., Jeanne, B., Lacomblez, M., & Baudry, D. (2019, January). Digital twin and virtual reality: A co-simulation environment for design and assessment of industrial workstations. Production & Manufacturing Research, 7, 472–489. https://doi.org/10.1080/21693277.2019.1660283 21. Modelica Association. Functional mock-up interface for model exchange and co-simulation. Functional Mock-up Interface 2.0.2. Available at: https://fmi-standard.org/. Accessed on 2023-01-22. 22. Macchi, M., Roda, I., Negri, E., & Fumagalli, L. (2018, January). Exploring the role of Digital Twin for Asset Lifecycle Management. IFAC-PapersOnLine, 51(11), 790–795. https://doi.org/10.1016/j.ifacol.2018.08.415 23. NTT Corporation. Digital twin computing white paper, version 2 [Online]. Available: https://www.rd.ntt/e/dtc/DTC_Whitepaper_en_2_0_0.pdf 24. NTT Corporation. Digital twin computing reference model, version 2 [Online]. Available: https://www.rd.ntt/_assets/pdf/iown/reference-model_en_2_0.pdf 25. ETSI. (2020, August). ETSI GS CIM 009 V1.3.1 (2020-08) – Context Information Management (CIM) – NGSI-LD API. ETSI [Online]. Available: https://www.etsi.org/deliver/etsi_gs/CIM/001_099/009/01.03.01_60/gs_cim009v010301p.pdf 26. ETSI ISG CIM. Context Information Management (CIM); Feasibility of NGSI-LD for digital twins. ETSI, Draft ETSI GR CIM 017 V0.0.2 (2021-06). 27. Bauer, M., Cirillo, F., Fürst, J., Solmaz, G., & Kovacs, E. Urban digital twins – A FIWARE-based model. at – Automatisierungstechnik. https://doi.org/10.1515/auto-2021-0083 Dr. Ernö Kovacs received his PhD from the University of Stuttgart. He worked in various ICT research positions (at IBM from 1986 to 1990, at Sony from 1997 to 2004) on topics like multimedia e-mail, multimedia documents, distributed hypermedia systems, mobile networks, and mobile services. In 2005 he joined NEC’s European Laboratories. He is now working as Senior Manager for “Data Ecosystems and Standards”. His group works on context brokering, Cloud-Edge Computing, real-time situation awareness, knowledge extraction and smart cities. They were major contributors to the standard for context brokering in OMA (NGSI) and now in the ETSI ISG Context Information Management (NGSI-LD). He is leading the Smart Industry Mission Support Committee in the FIWARE Foundation. His team has worked on Digital Twins for autonomous driving cars, on Digital Twins for manufacturing, and recently on Digital Twins for energy efficiency and energy flexibility.
Koya Mori is a Senior Research Engineer at the NTT Digital Twin Computing Research Center. Since joining NTT Corporation in 2004, he has been involved in the research and development of IoT applications based on software technologies such as OSGi, OpenStack and Edge Computing. Since 2020, he has been in charge of research on digital twin computing at the NTT DTC Research Center.
Achieving Scale Through Composable and Lean Digital Twins Pieter van Schalkwyk and Dan Isaacs
Abstract The application of Digital Twins (DTs) has gained traction across industrial and manufacturing organizations. The adoption and implementation of Digital Twins invariably requires the business case to pass muster before the allocation of resources to a specific project. The bar is even higher if Digital Twins are to be adopted and encouraged within an enterprise as a way of doing business and as a way of thinking about complex problems. For individual projects within an organization, it is crucial to access technologies and methodologies that repeatedly have a high probability of achieving success. This includes adoption of a general framework where the organization can gain confidence from learnings across projects and develop trusted general processes and tools that lift the level of practice across the organization over time. The Composable Digital Twin is such an approach and has at its core several important features. It offers re-use of effort, accelerated time to results, general applicability, and the dynamic range to address both simple and complex issues within an enterprise at scale. The building of enterprise capabilities and the development of project-specific Digital Twins also require process management techniques that minimize risk and build confidence. The Lean Digital Twin exploits the idea of a minimum viable product and lean and agile development techniques for managing DT development projects. It provides a set of steps for guiding Digital Twin projects while aligning business goals and outcomes with technical capabilities across a project’s lifecycle. It also emphasizes the accomplishment of early results. The chapter explores and illustrates the importance of both ideas, composable and lean DTs, with step-by-step descriptions and fielded examples. Keywords Agile development · Composability · Composable Digital Twin · Composite Digital Twin · Digital Twin · Digital Twin Ecosystem · Digital Twin Functionality · Digital Twin lifecycle · Digital Twin requirements · Lean development · Minimum viable product P. van Schalkwyk (*) XMPRO Inc., Sydney, Australia e-mail: [email protected] D. Isaacs The Digital Twin Consortium, Boston, MA, USA © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_6
1 Introduction This chapter addresses aspects of methodologies and technologies used to build Digital Twins in a business context. It is based on the experience and lessons learned from an operating company, XMPRO Inc. [https://xmpro.com/], that supports the Digital Twin projects its enterprise customers undertake around the world. It also includes the results of some shared experiences from the Digital Twin Consortium [https://www.digitaltwinconsortium.org/] and from analysts who track the progression of Digital Twin capabilities. The value in conducting Digital Twin projects is operational repeatability of successful outcomes and a progression of tangible benefits along the way, starting with positive results as early as possible. The time to results is important not only for financial reasons, but also for the acceptance of Digital Twin methodologies within an enterprise culture. The heart of what we cover are two important concepts – the Composable Digital Twin and the Lean Digital Twin. The first concept, composability, can be thought of in terms of a Rubik’s Cube of abilities. These abilities encompass many types of capabilities, where each capability is an essential element required to meet the requirements of the “job” that the Digital Twin is being hired to accomplish. At the highest level of abstraction, one can envision the arrangement of these capabilities in terms of the layers of a Rubik’s Cube, as shown in Fig. 1. The Rubik’s Cube analogy also promotes the flexibility of re-arranging these layers so they are aligned based on the elements’ attributes. Each layer of the cube is aligned and grouped with a matching color scheme. This effectively becomes a categorization of essential related elements, color coded and arranged based on the key capabilities. The layered, color-coded alignment of the elements can be considered a familiar elemental arrangement, organized by common attributes, as represented by the familiar construct of the Periodic Table. Each color-coded grouping of the capabilities is arranged based on the primary category’s ability. Figure 2 provides an example of this Periodic Table of abilities where the primary elements are arranged by color-coded capability categories (for example – Intelligence: Machine Learning, AI, …; User Interfaces – UX: Dashboard, AR, VR, …). To further illustrate the concept, Fig. 3 shows an example of a Periodic Table for a Digital Twin of a Wind Farm, and Fig. 4 depicts the resulting overlay of these features on the representation of the Windfarm, highlighting the defining fundamentals of a Digital Twin as published in the Digital Twin Consortium definition [https://www.digitaltwinconsortium.org/]. With these concepts of composability and their role in establishing the Digital Twin, the next section provides the background on the origination of the Composable Digital Twin with an in-depth description of the business value proposition. It concludes by moving from concept to realization (designing, building, deploying, and operating) of the Composable Digital Twin in a step-by-step process based on actual customer experiences. The realization of the implementation is illustrated using the advanced features of the XMPro tool suite, provided for further clarity. The following section introduces the concept of a minimum viable digital twin in the context of the lean and agile approach. The Lean Digital Twin focuses on addressing a key business value metric with a defined financial benefit that can be verified in a short period of time.
Fig. 1 Digital Twin scope and scale
Fig. 2 Categorization of Digital Twin Capabilities by commonly encountered features analogous to those in a Periodic Table
To further the reader’s comprehension, a step-by-step approach for the development of the Lean Digital Twin is presented, with a template “canvas” included to assist in identifying digital use cases with the highest likelihood of successfully addressing pressing business issues.
2 The Value of a Composable Digital Twin The recent pandemic will go down in history as one of the most disruptive, uncertain, and transformative times for people and organizations worldwide. Or, as one of my learned friends explained, “overnight it exposed the systemic fragility in the operating capabilities of enterprises and organizations”.
Fig. 3 An example of the Periodic Table adapted to the specific use case of a Windfarm
Fig. 4 The capabilities of a Digital Twin and how they relate to the physical counterpart indicating the “Abilities” in the Periodic Table for the specific example of a Windfarm
Many experts have commented on the unprecedented acceleration of digital technology adoption and the behavioral transformation in business that was activated during the pandemic. Five years of Digital Transformation happened in less than 5 months, according to industry analysts. Some organizations were able to adjust quickly and exploit opportunities from the turmoil. In contrast, others are still paralyzed by rigid business systems and processes that are typical of traditional enterprise architecture approaches. The future reality is that the rate of change will only increase. Professor Peter Fisk, who specializes in “innovative business future”, says that we will see more change in the next 10 years than in the last 250 (Fig. 5).
Traditional enterprise models were designed when computerized business was still in its infancy. IT was a department that supplied and supported servers, desktops, and notebooks to business users. IT improved business efficiency, but it wasn’t the platform for business. Technology was slow-changing, and siloed organizational structures reflected the rigid, hierarchical approach to organizing people and work. But the pace of business changed in correlation with the rate of change, as Peter Fisk’s graph points out. The average lifespan of companies has fallen from 75 years in 1950 to 15 years today; 52% of the Fortune 500 in 2000 were gone by 2020. The traditional enterprise has not kept up with the demands of fast-paced emerging technologies like Digital Twins, the Internet of Things (IoT), Distributed Ledgers, AI, Quantum Computing, and a smorgasbord of other high-impact technologies. These innovative technologies drive changes in new products, services and the new business models needed to support them. The pace of change requires agile and resilient enterprise architectures that enable multi-disciplinary teams to be highly responsive in exploiting business events in an innovative way. It still, however, needs to have guardrails to protect the organization from unnecessary risk.
2.1 What Is a Composable Digital Twin? The keynote at Gartner’s 2020 IT Symposium/Xpo introduced the concept of a Composable Enterprise, which enables teams that combine business and IT to compose new “fit for purpose” applications in a fraction of the time that traditional software development processes take.
Fig. 5 The rapid rate of change in technology driving the accelerating rate of innovation. (https://www.slideshare.net/geniusworks/megatrends-20202030-by-peter-fisk-207194903)
Gartner predicts that “by 2023, organizations that have adopted an intelligent composable approach will outpace the competition by 80% in the speed of new feature implementation” [1]. The concept of composability and the origination of the Composable Digital Twin, including specific examples and value propositions, are based on the extension of the concept developed by Gartner termed the Composable Enterprise. The Composable Enterprise is powered by Composable Business Applications. Digital Twins, in turn, are a class of these Composable Business Applications. It is this composability that delivers specific high-value business capabilities in the Composable Enterprise. Digital Twins can create and package these high-value capabilities in what Gartner refers to as PBCs, or Packaged Business Capabilities. In what follows, these concepts are further showcased utilizing the XMPro Digital Twin Composition technology. The XMPro platform provides an end-to-end capability to create, deploy, operate, and maintain Composable Digital Twins at scale. To better understand Composable Digital Twins (CDT) and their value to the organization, it is worth exploring Composable Business Applications in more detail.
2.2 What Is a Composable Business Application? Traditional 3-tiered business applications were hierarchically structured, with a data layer at the base, an application logic layer in the middle, and a user interface layer at the top. The introduction of service-oriented architecture (SOA), APIs and microservices provided some flexibility, but traditional business applications still reflect this rigid design pattern. Changes to business logic are compiled into the larger codebase of the application and rely on specialized developers to make changes at any of the three layers of the business application. The reference model describes the key components necessary to create Composable Business Applications for your Composable Enterprise. It is highly recommended that you read the reference model document by Gartner, but the following figures highlight a few elements related to composing Digital Twins using this approach (Fig. 6). Composable business applications use a modular approach to compose and recompose applications quickly to address a specific problem at that point in time. An Application Composition Platform composes Packaged Business Capabilities (PBC) around specific user Application Experiences that address a particular business need. It uses PBCs as building blocks to create a highly relevant application without superfluous capabilities that never get used. Think of your ERP or CRM solutions and how many of the features actually get used. Composable Business Applications use Application, Data, and Analytics PBCs through a Low-Code Application Composition Platform to integrate, compose, orchestrate, and provide a tailored user experience that focuses on the critical business outcomes for the application. Applications can be composed and recomposed quickly by subject matter experts. This provides a high level of agility and flexibility to address new business challenges.
Fig. 6 Reference Model for Intelligent Composable Business Applications showing the integration of IT and Business Functions. (Source: https://www.gartner.com/document/3991699)
A reliability engineer can, for example, create a predictive maintenance solution around a key asset that is suddenly impacting production capability due to a change in operating conditions or supply chain intelligence. The solution can be composed in a matter of days, in stark contrast to the traditional model that may take months or even years to deploy a solution like this. PBCs enable the engineer to use sophisticated business capabilities without being an expert in data integration, predictive analytics, or Digital Twins. These capabilities are pre-packaged and available in a low-code environment to drag and drop onto an application experience canvas. PBCs wrap integration, business logic, and analytical capabilities into reusable elements. They are like the Lego bricks that kids use to build (compose) creations that are limited only by their imagination. The range of modular bricks enables the kids to, for example, create a motion experience through packaged wheels, electric motors, gears, and a range of other capabilities that take a few seconds to assemble. They have the option to follow the vendor’s assembly plans and build it as shown on the outside of the box. Alternatively, they use only elements of the original design and create their own unique instance that they can play with for a while before reconfiguring it into a new creation. PBCs provide similar building blocks for the Composable Enterprise and bring built-in resilience, scalability, and expertise (Fig. 7). Since a PBC is built around business capabilities, it is easily recognized by a business user. This is different from a JSON file, for example, with comprehensive code and information that is only understood by an experienced developer.
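As a purely illustrative sketch (not XMPro’s or Gartner’s actual API), composing such a predictive maintenance check from pre-packaged capabilities might look as simple as wiring a few reusable functions together; every function name and threshold below is an assumption made for the example.

# Hypothetical pre-packaged capabilities (PBC-style building blocks)
def ingest_time_series(asset_id):
    """Integration capability: pull recent vibration readings for an asset."""
    return [2.1, 2.3, 2.2, 4.8, 5.1]          # stubbed sensor data (mm/s)

def detect_anomaly(readings, limit=4.0):
    """Analytics capability: flag readings above an alarm limit."""
    return [r for r in readings if r > limit]

def create_work_order(asset_id, evidence):
    """Application capability: raise an action in the maintenance system."""
    return {"asset": asset_id, "action": "inspect bearing", "evidence": evidence}

# "Composed" predictive maintenance application for one key asset
def predictive_maintenance(asset_id):
    readings = ingest_time_series(asset_id)
    anomalies = detect_anomaly(readings)
    return create_work_order(asset_id, anomalies) if anomalies else None

print(predictive_maintenance("crusher-12"))
# {'asset': 'crusher-12', 'action': 'inspect bearing', 'evidence': [4.8, 5.1]}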
Fig. 7 Reference Model for Intelligent Composable Business Applications showing the integration of Analytic and Application specific building blocks. (https://www.gartner.com/document/3991699)
Gartner describes PBCs in the reference framework as follows: PBCs are encapsulated software components that represent a well-defined business capability, recognizable as such by a business user. Well-designed PBCs are:
• Modular: Partitioned into a cohesive set of components.
• Autonomous: Self-sufficient and with minimal dependencies to ensure flexibility in composition.
• Orchestrated: Packaged for composition to assemble process flows or complex transactions through APIs, event interfaces or other technical means.
• Discoverable: Designed with semantic clarity and economy to be accessible to business and technical designers, developers, and active applications.
Digital Twins are a specific type of Application PBC in the framework for Composable Business Applications. Composable Digital Twins follow similar development and usage patterns to the applications that they serve.
2.3 The Composable Digital Twin Business Capability Now that we know what a Composable Business Application is, we can turn our attention to Composable Digital Twins. We mentioned above that it is an application-based “Packaged Business Capability”. Digital Twin PBCs are used to create unique application experiences to address target opportunities or challenges.
In “Digital Twins: The Ultimate Guide” describes the 6 types of data and models that we can combine in different configurations to provide the real-time intelligence and decision support that we to reusable compose Digital Twin capabilities. The 6 data types and models are: 1. Physics-based models (FEM, Thermodynamic, Geological) 2. Analytical Models (Predictive Maintenance) 3. Sensor and Time series data (IoT platforms and Historians) 4. Transactional data (ERP, EAM) 5. Visual Models (CAD, AR, VR, BPM, BIM, GIS, and GEO) 6. Master Data (EAM, AF, BPM) The different configurations are “packaged” to create reusable business capabilities based on the underlying technologies of the 6 types. This now makes it accessible to a business user as a citizen developer, without needing to be a technology specialist. Examples of these Packaged Business Capabilities (PBC) for Digital Twins are: • Time Series data ingestion, validation, anomaly detection and forecasting as an integration PBC • Real-time data updates from time-series data sources to a gaming engine or 3D visualization model • Integrating real-time sensor data and context data from a maintenance system to a Python scheduling and optimization algorithm • Integrating real-time IoT sensor data with GIS technology to create track-andtrace with geofencing capabilities. These PBCs can now be used to compose tailored Digital Twins quickly and securely through a low-code composition and orchestration platform.
2.4 The Value of a Composable Digital Twin The business value of a Composable Digital Twin (CDT) is in the speed, resilience, and agility that it enables. Organizations need to respond quickly to business events that bring opportunity and risk. It is the backbone of the Continuous Intelligence capability, which is the business’ ability to respond to business events from: • • • • •
• the actions of people in your business
• the actions of your competitors, customers, legislators, or suppliers
• external events such as pandemics and extreme weather
• equipment breakdown, process failures, environmental events
• the operational intelligence that you gather from your business applications, data sources and web services; and
• more recently, the influx of information from the Internet of Things (IoT) with sensor-based or smart-device machine-born data (IoT Platforms).
The World Economic Forum’s “Value at Stake” framework for Digital Transformation Initiatives (DTI) provides a model for measuring the value of a CDT in terms of the financial impact across multiple dimensions (Fig. 8).
The advantage of the Value at Stake (VaS) framework is that it measures the benefits of CDTs in financial indicators that are understood by executives, boards, and shareholders. These numbers can impact balance sheets and share prices and go beyond the “warm and fuzzy” metrics that are often used to describe technology benefits. The VaS model is based on the practical measurement and quantification of the following elements:
• The value of empowering your subject matter experts with low-code composition tools to create their own CDTs without being experienced developers
–– They can address the business problems firsthand without being technology experts, but leverage state-of-the-art capabilities when composing CDTs
–– They can create new solutions in an iterative way, by composing and recomposing minimum viable Digital Twins until they have a problem/solution fit. See our 3-part series on the Lean Digital Twin
–– Your subject matter experts can compose new Digital Twin capabilities in days, rather than weeks or months, to address opportunities and risks in a quick and timely manner
• A resilient composable architecture reduces the brittleness of traditional system and business application complexity, and the resulting cost of ownership
–– It removes bloatware, or features that you pay for but don’t use, in traditional business and OT applications
–– It enables you to compose Digital Twins of your assets, processes, supply chains, or organization from the bottom up. You can compose Digital Twins based on business problems and aggregate the solution as you gain the benefits of each incremental step
Fig. 8 Typical Value at Stake calculation for Composite Digital Twins
–– A marketplace of preassembled CDTs based on PBCs reduces the complexity of composing Digital Twins –– Custom CDT capabilities of modern composition platforms allow you to develop your own CDTs and PBCs that address the unique requirements of your use case • Create your own library of reusable CDTs with your own organizational IP as a competitive advantage for capabilities that make your organization unique. Use generic CDTs for capabilities that provide no competitive advantage (Fig. 9).
2.5 XMPro Digital Twin Composition Platform
Creating Packaged Business Capabilities (PBCs) like Composable Digital Twins (CDTs) typically requires a low-code application composition framework to deliver quick time to Value at Stake. XMPro's low-code solution provides the end-to-end capabilities for a Digital Twin Composition Platform. These capabilities include integration, composition, orchestration, development, and UX development in a single solution to create Digital Twin PBCs and more comprehensive composed Digital Twin experiences. XMPro manages the design and operation of Digital Twins across the lifecycle and can be compared to a composer who is also the conductor of an orchestra (Fig. 10).
The XMPro Data Stream Designer (DS) lets you visually design the data flow and orchestration for your Digital Twin compositions. Our drag-and-drop connectors provide reusable PBCs that make it easy to bring in real-time data from a variety of sources, add contextual data from systems like EAM, apply native and third-party analytics, and initiate actions based on events in your data (Figs. 11 and 12).
The XMPro App Designer is a low/no-code application and UX composition platform. It enables Subject Matter Experts (SMEs) to create and deploy real-time Digital Twin applications without being programmers. XMPro's visual page designer enables you to compose custom page designs by dragging blocks from the toolbox onto your CDT, configuring their properties, and connecting them to your data sources, all without having to code. It provides the integration, composition, orchestration, and UX development capabilities in the XMPro Digital Twin Composition Platform (Fig. 13).
The XMPro App Designer integrates real-time data from XMPro Data Streams with other business, operational, or third-party data sources. Each block in your page designs can be connected to a different data source, enabling you to build apps that provide your team with comprehensive decision support from multiple systems. After publishing your app, your pages will update with live data.
XMPro Recommendations in the App Designer are advanced event alerts that combine alerts, actions, and monitoring. You compose recommendations based on business rules and AI logic to recommend the best next actions to take when a specific event happens.
Fig. 9 CDTs maximize the opportunities from “Easy Wins” vs. the risk of the “Big Bets”
Fig. 10 XMPro Digital Twin Composition Platform
You can monitor the actions against the outcomes they create to continuously improve your decision-making (Fig. 14); a generic sketch of this rule pattern is shown below. The XMPro Digital Twin Composition Platform provides an end-to-end capability to create, deploy, operate, and maintain Composable Digital Twins at scale. The green process blocks show the data integration, validation, and storage requirements for real-time Digital Twins. The blue process blocks represent the analytics requirements of Composable Digital Twins, whereas the purple elements show the user interaction and actionable capabilities of XMPro's Composable Digital Twins (Fig. 15).
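The recommendation concept described above can be pictured as rules that pair an event condition with a next-best action. The snippet below is a generic, hedged sketch of that pattern; it does not reflect XMPro's actual rule syntax, and all thresholds and names are illustrative.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Recommendation:
    name: str
    condition: Callable[[Dict], bool]   # business rule or AI-derived logic
    action: str                         # next best action surfaced to the operator

rules: List[Recommendation] = [
    Recommendation(
        "high-vibration-inspection",
        condition=lambda e: e.get("vibration_mm_s", 0) > 7.1,   # illustrative threshold
        action="Schedule bearing inspection within 24 h"),
    Recommendation(
        "downtime-risk",
        condition=lambda e: e.get("anomaly_score", 0) > 0.8,
        action="Notify reliability engineer and create work order"),
]

def recommend(event: Dict) -> List[str]:
    """Return the actions whose conditions match the incoming event."""
    return [r.action for r in rules if r.condition(event)]

print(recommend({"vibration_mm_s": 8.3, "anomaly_score": 0.4}))
```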
Fig. 11 XMPro Data Streams Designer with Packaged Business Capabilities
Fig. 12 Examples of drag-and-drop Packaged Business Capabilities in XMPro DS
3 The Lean Digital Twin
The recent rise in interest in Digital Twins for industrial applications is reminiscent of medieval seafarers navigating uncharted waters in search of fame and fortune. Their charts showed “Here be dragons” at the edges, which meant dangerous or unexplored territories, in imitation of the practice of putting illustrations of dragons, sea monsters, and other mythological creatures on uncharted areas of maps where potential dangers were thought to exist [2]. Industry analysts like Gartner, Forrester, and the ARC Advisory Group, who survey a broad spectrum of the industrial market, all agree that Digital Twins will proliferate over the next couple of years.
Fig. 13 From drag-and-drop composition to production Digital Twins
Fig. 14 Composable recommendation rules from real-time event data
Digital Twins have attracted widespread attention and are considered to be the key to smart manufacturing and other industrial applications that are embracing large-scale digital transformation initiatives [3]. This section presents a methodology to create Digital Twins in a simplified, systematic, problem-to-solution approach.
3.1 The Challenge and How We Want to Solve It
Because Digital Twins are digital representations or software design patterns, we look to software development practices and approaches for guidance. There are many different software development approaches, and our aim is not to review all the alternatives to determine the best fit. We will focus on applying the principles used by many software startups that have vague requirements, limited resources and funding, and first need to find “product/market” fit. The approach used by many of these startups is described as “The Lean Startup”.
Fig. 15 End-to-end Digital Twin development process in XMPro Digital Twin Composition Platform
3.2 The Lean Digital Twin: What We Can Learn from the Lean Startup Approach
3.2.1 The Lean Startup Methodology
Silicon Valley entrepreneur Eric Ries first introduced the term Lean Startup in 2008 and published his best-selling book, The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses, in 2011. The initial concept was aimed at helping startups take an agile and customer-centric approach to product development. The Lean Startup Methodology has since been widely adopted by startups, government organizations, and global enterprises as a framework for creating innovative products and business models. For example, GE FastWorks launched 100 projects globally by using the Lean Startup approach. The results GE has achieved include half the program cost, twice the program speed, and products selling at over two times the normal sales rate [4]. The following sections cover the elements of the Lean Startup Methodology most applicable to creating Digital Twins.
3.2.2 Minimum Viable Product
Eric Ries defines a minimum viable product (MVP) as a version of a product that enables you to learn enough to test your current assumptions with the least amount of development effort [5].
A common misconception about building an MVP is that it is simply a pared-down version of the ultimate solution you aim to create. This approach overlooks the philosophy underpinning the MVP concept, which is to help validate your assumptions as fast as possible by getting user feedback. In our past experience, designing high-fidelity mockups and interactive prototypes has been a powerful way to generate the initial MVPs for Digital Twins. Because there is no development involved in this initial phase of experiments, we are able to rapidly turn around revisions of the prototypes and get multiple rounds of feedback.
3.2.3 Validated Learning
The Lean Startup approach draws on the scientific method by recommending that practitioners conduct falsifiable experiments to inform their decision-making. This is in stark contrast to the traditional approach of creating a comprehensive one-off plan before embarking on the development phase of a project. For most industrial companies, creating Digital Twins is still uncharted territory. This is why an approach originally designed for startups can be useful in navigating the process of innovating in the midst of uncertainty. By running small experiments with measurable outcomes, you are better able to make constant adjustments based on real user feedback, rather than on your initial assumptions (which are often incorrect). Using the rigorous validated learning approach prevents the misfortune of perfectly executing a plan that produces an obsolete solution.
3.2.4 Build-Measure-Learn
The Build-Measure-Learn feedback loop is a core element of the Lean Startup methodology. It consists of turning your ideas into an MVP, getting feedback from users, and then making decisions based on what you learned (Fig. 16). Eric Ries emphasizes minimizing the total time to complete a cycle of the Build-Measure-Learn loop. This is particularly relevant to industrial organizations, which operate in hyper-competitive environments where innovating faster than the competition can have large-scale impact. Based on past experience with customers in asset-intensive industries, Digital Twins have provided significant impact. When working at a scale where 1 h of downtime equates to hundreds of thousands of dollars in revenue loss, even small-scale Digital Twin projects can produce enough ROI to fund additional development. A large, multi-national oil and gas customer saw a reduction of $8 M in costs and production losses over a 6-month period by using a Digital Twin of an oilfield to optimize maintenance and regulatory inspections. A mining customer is using a Digital Twin of their underground conveyor system to reduce downtime by detecting one type of failure mode, with results of $3 M+ in additional revenue per annum.
We have extracted and applied the most relevant elements from this well-adopted approach to help industrial organizations create Digital Twins that solve real challenges and don't take months to architect and develop.
3.2.5 A Practical Approach to Lean Startup
In 2011, veteran software entrepreneur Ash Maurya expanded on the work of Eric Ries and released Running Lean: Iterate from Plan A to a Plan That Works [6]. In the book, Ash provides a practical and systematic process for applying the Lean Startup Methodology. In his work since then, Ash has built on these concepts and created the Continuous Innovation Framework [7]. The framework consists of three key steps: Model, Prioritize, and Test.
3.2.6 Model
The first step is to document your initial assumptions in a lightweight format. In the Continuous Innovation Framework, practitioners often use an adapted version of the Business Model Canvas developed by Alex Osterwalder and Yves Pigneur in the book Business Model Generation [8].
3.2.7 Prioritize
Before you start creating your MVP, understanding what to build should be a key step in your Digital Twin project. Finding the right problem to solve can be challenging.
Fig. 16 Build-Measure-Learn diagram. (Adapted from The Lean Startup by Eric Ries)
We have developed a Digital Twin Ranking Matrix that enables cross-functional teams to rapidly brainstorm potential Digital Twin use cases and prioritize them based on factors like business impact and technical feasibility.
3.2.8 Test
Once you know which problems are worth solving, you can move on to defining the MVP and testing your assumptions using rapid experimentation. A good experiment optimizes for speed, learning, and focus. It has a goal, a falsifiable hypothesis to test, a defined timeline, and one key metric to keep track of. Examples of experiments include:
1. Goal: Confirm that we are solving the right problem. Hypothesis: Problem interviews will validate that cyclone pumps are responsible for 30 h of downtime. Timeline: 1 week. KPI: # positive responses.
2. Goal: Validate which features to build into the cyclone pump Digital Twin. Hypothesis: Lightweight mockups will validate that a Digital Twin will reduce cyclone pump downtime. Timeline: 2 weeks. KPI: Feature priority ranking.
3. Goal: Validate that we can get the real-time data required to solve the problem. Hypothesis: Developing an initial solution will validate that we can get access to the required data in real time. Timeline: 2 weeks. KPI: Seeing live data in the application.
4. Goal: Confirm that we can improve the operational metrics. Hypothesis: Deploying the Digital Twin to one site will reduce cyclone pump downtime by 30%. Timeline: 12 weeks. KPI: Hours of downtime.
5. Goal: Scale the improvement in operational metrics to more plants. Hypothesis: Deploying the Digital Twin to 3 new sites will produce $3 million ROI. Timeline: 12 weeks. KPI: $ value of reduced downtime.
The key is to run rapid experiments that build on each other and provide you with more learning and validation as you progress.
3.2.9 Applying a Lean Startup Approach to Digital Twins
We've adapted the principles of the Lean Startup Methodology and the practical tools from Running Lean and the Continuous Innovation Framework to help industrial companies innovate like startups in the uncharted Digital Twin arena.
3.2.10 Benefits of Using a Lean Approach for Digital Twin Development
The Lean Digital Twin focuses on addressing a key business value metric with a defined financial benefit that can be verified in a short period of time. It provides a short-term return on investment that can be used to fund the development of more advanced production twins. This removes the cost and risks associated with projects
where a lot of time is spent on requirements definition, product specifications, architectural designs, and waterfall-style development cycles (Fig. 17). The cost and risks of this approach are significantly reduced, as the initial phases to validate product/market fit are typically measured in days and weeks rather than months and years. The journey to develop Digital Twins is still uncharted, but by building on the Lean Startup approach you can create a clear path to success in the midst of uncertainty.
3.3 The Lean Digital Twin Process: Lean Startup Applied to Digital Twins
The previous section described the fundamentals of the Lean Startup approach and how it can be applied to creating Digital Twins. In this section, we'll look at the practical application of this approach to develop a Lean Digital Twin. The first phase of the approach minimizes development effort: it focuses on identifying key business issues that can be addressed with a Digital Twin and describing the overall solution in an easy-to-understand manner. It is referred to as the problem/solution fit phase of the Lean Startup methodology. The second phase of the approach defines a minimum viable Digital Twin (MVDT) based on the problem/solution statement from the previous phase. The MVDT is used to validate and verify assumptions and hypotheses made during the problem/solution assessment. The MVDT may undergo multiple iterations to demonstrate a Digital Twin/business fit, derived from the product/market fit in the Lean Startup approach. This is best accomplished with agile development tools that allow subject matter experts to quickly change elements of the MVDT. Once an MVDT hypothesis has been validated and verified, the Digital Twin can be scaled for full production applications and the full lifecycle. These first two phases are focused on validated learning based on iterations and potentially pivoting the Digital Twin application as new learning emerges (Fig. 18). It is best to construct the Lean Digital Twin in a series of consecutive steps, outlined below.
Step 1: Find a problem worth solving (Understand the Problem)
The prioritization approach described here is used in two iterations: (1) to rank multiple initiatives in an organization that could benefit from Digital Twins, and then (2) to rank assets or processes that collectively operate as a system, where use cases for the system are ranked. The first prioritization exercise focuses on prioritizing which system to focus on. The latter exercise provides guidance on the prioritization of use cases for an asset grouping that could be serviced by a single Digital Twin. Examples include a packing line in FMCG (fast-moving consumer goods), a well in oil and gas, a robot assembly cell in manufacturing, a main line conveyor in materials handling, or a processing plant in mining (the example we use later). Both exercises follow the same approach, and the initial prioritization matrix can be omitted if the business is clear on the system that could benefit from a Digital Twin. It is,
Fig. 17 The Digital Twin journey over its lifecycle
Fig. 18 Alignment of Solution, Market Fit, and Operational Scale Considerations
however, recommended to do the initial prioritization exercise to ensure that the real business challenges are addressed. It is the authors' experience that organizational biases often influence the selection of Digital Twin candidates, and the Prioritization Matrix approach assists in identifying impactful projects. The objective is to establish a falsifiable hypothesis to test around the business problem that the Digital Twin will address. “Cyclone pumps are responsible for 30 hours of downtime per month” is an example of such a hypothesis that can be tested in problem/solution fit interviews and workshops. The prioritization process assesses both the business impact and the technical readiness of a Digital Twin project. The high-level business outcomes in the prioritization framework are the basis for scoring and agreeing on the business impact of a specific use case. The prioritization process starts with a list of potential Digital Twin use cases and ranks them based on their business impact for each desired business outcome. The business impact metrics are chosen to align with the strategic objectives of the organization. These are often referred to as the business drivers in digital transformation programs (Fig. 19). To avoid analysis paralysis [9], a simple high, medium, and low scoring methodology is used in setting up a ranking matrix. This is best done with the business
Fig. 19 Business Impact ranking matrix
(operations), IT, and OT representatives in a working session. Once the business impact is scored for each scenario, the technical feasibility (or complexity) is assessed for each scenario, again without over-analyzing or getting into too much technical detail. It is a top-down approach, and even though information from reliability engineering practices like FMEA can provide helpful indicators, it is important to guard against this becoming a technical feature or requirements design session. The impact assessment is done based on the strategic drivers of the business, such as safety, downtime, quality, throughput, and cost. In this example, the following technical assessment criteria are used: (1) OT complexity, (2) IT complexity, (3) analytics, (4) system complexity, and (5) project readiness. Technical assessment criteria can be adjusted to fit the requirements of the business, but for this example the criteria are for a typical industrial installation. OT and IT complexity are described in terms of availability, accuracy, latency, and geographical location. Analytics is described in terms of maturity, sophistication (predictive and cognitive analytics), and the application of business rules or physical models. System complexity is based on deployment infrastructure (edge, local, and cloud) and geographical constraints. Project readiness is assessed based on the availability of subject matter experts and technical resources. It is generally useful to define “order of magnitude” financial measures to agree on the high-level impact of each new state or scenario. The objective is not to be accurate in estimating the value of a business case, but to get high-level agreement between the different stakeholders on the potential impact of each scenario. In this example, a scale of (1) greater than $100 k, (2) greater than $1 m, or (3) greater than $10 m is used. This “order of magnitude” is visually represented in a bubble chart with the business impact and technical readiness scores as the two major measures. The weighted average values of each of the measures are placed on the graph, which is divided into four quadrants. The size of the bubble is determined by the value of the economic impact. The four quadrants represent the business readiness for each of the Digital Twin scenarios. The “Do Now” quadrant represents high business impact and a high level of technical readiness. Opportunities on the far right of the quadrant with the biggest bubble size often represent Digital Twin projects with the highest likelihood of success for all stakeholders (Fig. 20); a minimal scoring sketch follows the figure.
Fig. 20 Bubble chart that visually ranks Digital Twin projects based on business impact and readiness
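A minimal sketch of the scoring mechanics described above, assuming an illustrative 1–3 (low/medium/high) scale and hypothetical weights; the weighted business-impact and technical-readiness averages place each scenario in a quadrant, and the order-of-magnitude value drives the bubble size.

```python
def weighted_average(scores, weights):
    """Weighted average of 1-3 (low/medium/high) scores."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def quadrant(impact, readiness, threshold=2.0):
    """Assign a scenario to one of the four quadrants of the bubble chart."""
    if impact >= threshold and readiness >= threshold:
        return "Do Now"
    if impact >= threshold:
        return "High impact, reduce technical risk first"
    if readiness >= threshold:
        return "Quick win, low impact"
    return "Re-evaluate later"

# Hypothetical scenario scores and weights (all values are illustrative)
impact_scores     = [3, 2, 3, 2]       # e.g. safety, downtime, quality, cost
impact_weights    = [2, 3, 1, 1]
readiness_scores  = [2, 3, 2, 1, 3]    # OT, IT, analytics, system complexity, project readiness
readiness_weights = [1, 1, 1, 1, 1]
value_magnitude   = 2                  # 1: >$100k, 2: >$1m, 3: >$10m -> bubble size

impact = weighted_average(impact_scores, impact_weights)
readiness = weighted_average(readiness_scores, readiness_weights)
print(round(impact, 2), round(readiness, 2), quadrant(impact, readiness),
      "bubble size:", value_magnitude)
```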
This approach provides a common understanding of the expected business outcomes and potential technical challenges in achieving this goal. It provides the basis for more detailed analysis of those projects with a high likelihood of success. A downloadable copy of the Prioritization Matrix is provided in the Lean Digital Twin Kit linked at the end of this chapter. In using an iterative approach where Digital Twin scenarios are first assessed at an overall business level, the ranking will provide guidance on the highest-priority process or system, which in turn gets broken down into sub-systems, components, or assets that are ranked using the same process. In an FMCG scenario, the initial ranking may be done for business areas such as raw materials handling, production processes, filling and packing, and shipping. If filling and packing is identified as the best “Do Now” opportunity, then the follow-on exercise could rank Digital Twin scenarios for filling/sealing, labelling, cartoning, packing, and palletizing. This will identify digital use cases with the highest likelihood of successfully addressing pressing business issues. These sessions should be limited to 90 min, as more detail will be required in later analysis; the objective is to be lean and reduce waste. The outcome of Step 1 is a prioritized list of Digital Twin scenarios for problems worth solving.
Step 2: Document the Plan – Lean Digital Twin Canvas (Define the Solution)
Once one or two Digital Twin candidates are identified, a single-page solution description is created for each candidate. This single-page solution description is based on the Lean Canvas described in the section on the Lean Startup approach. The canvas is adapted for the Lean Digital Twin process and is referred to as a Lean Digital Twin Canvas. It describes all the key elements of the problem/solution fit (Fig. 21). The proposed sequence for completing the Lean Digital Twin Canvas is described below. It is recommended to do the initial draft in a 60-min session following the prioritization ranking exercise. Sequence for completing the canvas in a workshop session:
Fig. 21 Typical Documentation of Key technical project parameters and business considerations
1. (Problem) Define the top 3 problems that your twin will address, based on the prioritization matrix.
2. (Customer Segments) Who are the target users that will benefit from the solution (the Digital Twin)?
3. (Digital Twin UVP) What makes this Digital Twin different from what you are already doing?
4. (Solution) What are the top 3 features of the Digital Twin? (AI, real-time, decision support, etc.)
5. (External Challenges) What are the external red flags for the Digital Twin? (security, data access, connectivity)
6. (ROI Business Case) How will this Digital Twin deliver ROI?
7. (Key Metrics) How will the Digital Twin be measured (quantitatively)?
8. (Integration) What are the key integrations required to make it work?
9. (Costing) What is the projected costing?
The canvas follows a logical sequence that starts with the problem and the users rather than jumping to product features. This holistic approach considers all aspects required to deliver a successful Digital Twin project (Fig. 22). The initial workshop session is aimed at completing as much as possible, but the Lean Digital Twin Canvas is updated continuously (versioned) as we test, validate, and learn. One major benefit of the canvas is that it is easy to complete in a workshop session, and it provides a one-page business plan for the Digital Twin that is easy to communicate to both end users and project sponsors. It is presented as a single slide in sponsor meetings. The completion of the Lean Digital Twin Canvas concludes the problem/solution fit phase of the Lean Digital Twin approach; a structured sketch of the canvas is shown below.
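Purely as an illustration (not part of the original methodology's tooling), the canvas fields listed above could be captured as a simple versioned record so the one-page plan can be revised as learning accumulates; all field values below are hypothetical.

```python
# Hypothetical Lean Digital Twin Canvas captured as structured data
lean_digital_twin_canvas = {
    "version": "0.1",  # revised as the team tests, validates, and learns
    "problem": ["Cyclone pump downtime (30 h/month)",
                "Unplanned maintenance cost",
                "Late detection of failure modes"],
    "customer_segments": ["Plant operations", "Reliability engineers"],
    "digital_twin_uvp": "Real-time decision support instead of monthly reports",
    "solution": ["Real-time sensor integration", "Anomaly detection", "Recommendations"],
    "external_challenges": ["OT data access", "Site connectivity", "Cybersecurity review"],
    "roi_business_case": "Reduce downtime by 30% (order of magnitude: > $1 m per year)",
    "key_metrics": ["Hours of downtime", "Mean time to repair"],
    "integration": ["Historian", "EAM system"],
    "costing": "Pilot phase: order of magnitude < $100 k",
}
```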
Fig. 22 A holistic approach to sequencing and balancing Digital Twin requirements and influence factors
The next phase focuses on validating and verifying your hypothesis and assumptions. This is done by developing a minimum viable Digital Twin, or MVDT, analogous to the product/market fit phase of the Lean Startup methodology.
Step 3: Decide what goes into V1 of the Minimum Viable Digital Twin (MVDT)
The same prioritization process that was used for choosing MVDT candidates is used to determine what features to include in the initial release, or MVDTv1. The only difference is that Digital Twin features are now assessed on business impact and technical feasibility rather than business applications of the Digital Twins. Typical features in an industrial application include, but are not limited to:
• real-time equipment data from sensors and devices
• time-series data from historians and automation systems
• machine learning algorithms such as anomaly detection
• predictive algorithms such as classification and regression models
• production data from enterprise systems
• physics-based or engineering models that describe equipment behavior
• simulation models
Digital Twins are typically composed of combinations of the above features. Ranking these features by their ability to address the problem statement and solution identifies the two or three key features in the “Do in V1” quadrant. This approach identifies key capabilities in the initial release that will provide the best guidance on validating and verifying the solution's capability to address the business challenge. It is important to note that the minimum viable Digital Twin features are still implemented as fully featured solution components, but the scope of the overall MVDT is limited to the two or three features identified in this process (Fig. 23).
Fig. 23 A methodology for identifying and prioritizing the features of the minimum viable Digital Twin
Step 4: Create a Lightweight MVDT to Validate the Hypothesis
Mocking up an MVDT that includes the three features chosen in the previous step provides a basis for validated learning (Fig. 24). The mockup for a proposed well Digital Twin uses real-time event information for reservoir data from a subsurface data store, a well location map from a GIS data source, and production data from an operations database. The information is not yet integrated with the real-time data, but the mockup provides a realistic view of the Lean Digital Twin and the decision support that it will provide to users, including operations data, recommendations based on descriptive and prescriptive analytics, and open tasks associated with this entity. Digital Twin application tools that support an agile approach have this capability built in, but these mockups can also be done in simple tools such as Microsoft PowerPoint. The mockup typically goes through multiple iterations during a 2-week period of review by users and stakeholders. Mini presentations and interviews with stakeholders provide immediate feedback that is easy to incorporate in the mockups. The outcome of this process provides a validated basis for creating a live version of the MVDT that will be used to verify that it solves the problem it set out to solve.
Step 5: Create an Operational MVDT
Once the mockup is agreed upon, the actual MVDT is built out, preferably in an agile development toolset, and integrated with operational data sources. This can range from a few days to 3–4 weeks depending on the tools that are used. The benefit of using a Lean Digital Twin approach is that the feature set is limited, which means users can be trained in a relatively short period of time to use the
Fig. 24 The use of a visual mockup to validate the features and requirements for the Digital Twin
Digital Twin application. The objective of this step is to verify the assumptions and hypotheses made along the way and to identify areas where the MVDT must be improved or changed. This can run for between 4 and 12 weeks, based on the complexity and impact of the solution. This phase is also used to validate and clean up data sources, verify algorithms, and check calculations to improve the quality of the decision support the Digital Twin provides. The results from this step conclude the product/market fit review and serve as the basis to deploy and scale the Digital Twin to production.
3.4 Conclusion
The Lean Digital Twin approach is well suited to organizations that are starting a Digital Twin journey and want to use an iterative approach to discover requirements in a systematic way while demonstrating business value at the same time. The approach requires top-down support and bottom-up commitment. It is a collaborative approach that provides guardrails for managing the process without being prescriptive. It does, however, require multiple iterations or pivots to find the best Digital Twin/business fit.
4 Summary
This chapter has presented two primary concepts: the Lean Digital Twin and the Composable Digital Twin. Descriptions of the methodologies and workflows provide the foundation to develop and deploy a Digital Twin, including mechanisms and
considerations that help enable successful outcomes. The examples illustrating the above approaches are based on the experience and lessons learned from XMPro Inc. [https://xmpro.com/] Digital Twin projects, on work product findings shared by the Digital Twin Consortium [https://www.digitaltwinconsortium.org/], and on input from analysts who track the progression of Digital Twin capabilities. Templates covering these concepts and methodologies provide readers with tools to conduct their own workshop and begin their journey, or to use as a cross-check on their existing journey.
References
1. Natis, Y., et al. (2020, October 14). Use Gartner's reference model to deliver intelligent composable business applications. Gartner.
2. Wikipedia Contributors. (2019, August 10). Here be dragons. Wikipedia. [Online]. Available: https://en.wikipedia.org/wiki/Here_be_dragons
3. Qi, Q., & Tao, F. (2018). Digital Twin and big data towards smart manufacturing and Industry 4.0: 360 degree comparison. IEEE Access. [Online]. Available: https://ieeexplore.ieee.org/document/8258937
4. Power, B. (2014). How GE applies lean startup practices. Harvard Business Review. [Online]. Available: https://hbr.org/2014/04/how-ge-applies-lean-startup-practices
5. Ries, E. (2017). The lean startup: How today's entrepreneurs use continuous innovation to create radically successful businesses. Currency.
6. Maurya, A. (2017). Running lean: Iterate from Plan A to a plan that works. O'Reilly.
7. The Lean Startup is NOT enough. (2019). Leanstack.com. [Online]. Available: https://leanstack.com/library/categories/fundamentals/courses/what_is_continuous_innovation/lessons/lean_startup_not_enough
8. Osterwalder, A., & Pigneur, Y. (2010). Business model generation: A handbook for visionaries, game changers, and challengers. Wiley.
9. Wikipedia Contributors. (2019, September 10). Analysis paralysis. Wikipedia. [Online]. Available: https://en.wikipedia.org/wiki/Analysis_paralysis
Pieter van Schalkwyk is an experienced engineer and technologist who helps organizations use real-time, event-based Digital Twins to improve situational awareness, process efficiency, and decision-making without disrupting operations. He is recognized as a thought leader in industrial digital transformation and has written and spoken extensively on topics such as Digital Twins, IIoT, AI/ML, and industrial blockchain applications. Pieter holds a Bachelor's in Mechanical Engineering and a Master's in Information Technology. He currently chairs the Natural Resources Working Group in the Digital Twin Consortium (DTC), where he created the Digital Twin Capabilities Periodic Table that is used in many industries and organizations. Prior to this, Pieter was the chair of the Digital Twin Interoperability Task Group in the Industrial IoT Consortium (IIC). In February 2019, Pieter received the IIC Technical Innovation Award from his peers. He is also the co-author of the book “Building Industrial Digital Twins”, published by Packt in 2021, and was named in Onalytica's 2022 “Who's Who in IoT” as an Industry Key Opinion Leader in Digital Twins.
Dan Isaacs serves as the Chief Technology Officer and General Manager of the Digital Twin Consortium. He is responsible for establishing and driving strategic technical direction and leadership to support the growth and expansion of the consortium through business development and membership initiatives. Additionally, Dan develops strategic partnerships and liaisons with other international consortiums, organizations, and alliances to further advance the consortium's objectives. Dan also holds the position of Chief Strategy Officer for the Object Management Group (OMG). His responsibilities include developing and implementing a comprehensive strategy to unify the OMG community of consortia. Dan is responsible for driving advanced technology awareness and adoption towards accelerating sustainable digital transformation across industries, academia, government, and geographies. Previously, as Director of Strategic Marketing and Business Development at Xilinx, Dan was responsible for emerging technologies, including AI/ML, and for defining and executing the IIoT and automotive ecosystem strategy, including responsibilities for automotive business development focused on ADAS and automated driving systems. Dan represented Xilinx at the Industrial Internet Consortium (IIC), leading the development of two testbeds from concept to production. Dan has over thirty years of experience working in automotive, industrial, aerospace, and consumer-based companies, including Ford, NEC, LSI Logic, and Hughes Aircraft. An accomplished speaker, Dan has delivered keynotes and seminars and served as a panelist and moderator at global forums and conferences, including Embedded World, Embedded Systems, and FPGA conferences. He is a member of multiple international advisory boards and holds degrees in Computer Engineering from California State University and in Geophysics from ASU.
The Role of Digital Twins for Trusted Networks in the “Production as a Service” Paradigm Götz Philip Brasche, Josef Eichinger, and Juergen Grotepass
Abstract Saturated markets and the continued need to lower costs drive the evolution of modular and networked Smart Factories, also in response to the trend towards individualized, customer-demand-driven mass production. The accelerating digitalization of industries supports the migration from Industry 3.0, which focuses on automation, to Industry 4.0, where digital twins of both the product and the production itself enable this paradigm shift towards a networked production. Digital twin representations have evolved from definitions of the German platform “Industry 4.0” since 2017, starting from asset administration shells (AAS) that describe the services offered by assets such as products, machines, sensors, and software components. Together with OPC-UA as a future standard, asset administration shells allow vendor independence and active networking in product design and manufacturing. This enables a new degree of interoperability and thus facilitates the new paradigm of networked production embracing manufacturing across different enterprises. This has the potential to drive outsourcing and just-in-time supply chains to a new level, with disruptive implications for conventional product provisioning. Virtual factories could be designed from production elements acquired in a highly dynamic platform-based economy. In addition to this fundamental shift, the technological progress of wireless connectivity results in new interfaces for 5G networks in vertical industries, offering deterministic and reliable ad-hoc connectivity of people, machines, and factories where needed. The network itself will in the near future be treated as an asset, described and managed by an AAS. The digital twin of 5G user equipment and network gear will help to forecast potential network loads in given industrial applications and to negotiate the quality-of-service parameters needed in the actual application, e.g. the latency and reliability of a 5G connection. Other recent international initiatives related to trustworthiness focus on the development of trusted architectures in these networked production settings, which
G. P. Brasche · J. Eichinger · J. Grotepass (*) HUAWEI Technologies Duesseldorf GmbH, Düsseldorf, Germany e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_7
will further push the evolution of digital twins. Finally, emerging auction-based digital business models enable a new monetization of services and the negotiation of production costs under consideration of new aspects such as environmental constraints, CO2 footprints, and the corresponding additional costs resulting from, e.g., energy, waste, and resources, supporting sustainability in a circular economy. Thus, digital twins break with traditional paradigms and open up remarkable opportunities in the engineering phase of collaborative production schemes. All this will be further supported by artificial intelligence services and powerful cloud-edge infrastructures that will help to match the technological, business, and ecological requirements and to mediate between service requester and supplier in a platform economy, as a valuable first step towards climate neutrality by design.
Keywords Holistic model-based systems engineering · Trusted cloud infrastructures for manufacturing · Digital as enabler for sustainability · Industry 5.0
1 Motivation to Look at Digital Twins for Manufacturing
Data-driven services across company boundaries break up conventional value chains. Such “networked production” enables “manufacturing as a service”, a new manufacturing paradigm in which the customer is an active part of the production workflow and a co-designer of the product they want to purchase. As shown in Fig. 1 below, networked production brings together the design, production, and use of products at regional locations.
Fig. 1 From cost driven mass production to online configured & customer individual production
On the left side of the drawing, we see the traditional automation and production scheme serving mass production at the lowest cost. Once designed, the production is rigid and does not allow adaptivity. In the middle, we see the paradigm shift towards customization, with the customer starting to act as co-designer of the product. On the right side, the digital twins of assets, machines, and services are depicted, which will further enable new types of service-oriented cost models that take the common value in ecosystems into account, as opposed to existing cost models that focus more on the “private” value of (key) suppliers. New formats for auction-based selling of services and diverse goods, based on the work of Wilson and Milgrom – awarded the Nobel Prize in Economics in 2020 [17] – are discussed and adopted for systems engineering in [19]. Thus the digital twin is an enabler for a new AI-based “green” engineering approach for collaborating ecosystems. As regional enterprises become more and more connected to better serve the demands of local customers, relocating mass production from low-wage countries to close-by locations becomes more attractive. High environmental costs related to excessive energy consumption and the logistics effort for shipment are saved. Thus, connected modular Smart Factories enable customer-demand-driven mass customization at reduced environmental, design, and manufacturing costs. However, networked production requires trust across the entire system and value chain: trusted infrastructures, networks, and business models which assure data sovereignty in manufacturing ecosystems. The current European GAIA-X initiative – introduced in Sect. 2.2 – is working towards the development and provision of such trusted cloud-based infrastructures and data spaces for manufacturing industries and thus leverages “digital” as an enabler of sustainability. With 5G, new cloud-based services can be brought to existing production facilities, providing a mobile connection on demand where connectivity is missing or restricted. Deploying 5G as a communication bearer technology makes it possible to run applications where the data are generated. As computing power and storage capacity can be distributed within the cloud-edge continuum, privacy of data can be guaranteed, e.g. by bringing pre-trained machine learning algorithms down to the machine, sensor, or actuator. Digital twins enable service provision and deployment between machines and at the edge or in the cloud by means of asset administration shells (AAS), as depicted in Fig. 2. First movers in R&D and industry are already driving the future paradigm, as will be showcased in the next chapters. Artificial intelligence, integrating existing infrastructures, digital twins, and further data resources, allows computers to independently design advanced products and the corresponding production processes and production lines. In this context, digital twins cover existing assets as well as no longer existing (historical) assets. This approach breaks with traditional paradigms and opens up ground-breaking possibilities, also for the acceptance of new solutions in industry and society.
Fig. 2 Service oriented architecture in networked production supported by AAS
2 Why Trusted Architectures Are Important
Trustworthiness – and confidence in trustworthiness – is an essential aspect of industrial systems. Such systems are complex systems of systems that exhibit emergent properties due to the interconnection and interactions of their subsystems. These subsystems include information technology (IT), focusing on data, and operational technologies (OT) that use data, sensors, and actuators to change the physical environment. The consequences of incorrect action could include loss of human life, disastrous impact on the environment, interruption of critical infrastructure, or other consequences such as disclosure of sensitive data, destruction of equipment, economic loss, or damage to reputation. Other trustworthiness concerns for the business include compliance with regulations, avoiding potential liability and litigation, and consideration of the potential benefits of a trusted reputation [11, 12]. Trusted infrastructures are a necessary precondition for enabling new forms of data-based business in a platform economy in manufacturing.
2.1 International Activities on Building Trusted Infrastructures
Guidance on assuring trustworthiness is provided by the Industrial Internet Consortium (IIC). The vision of the IIC is to deliver a trustworthy Industrial IoT (IIoT) in which the world's systems and devices are securely connected and controlled to deliver transformational outcomes [11, 12]. In addition to the trustworthiness characteristics, the IIC also specified four groups of threats that endanger a trustworthy system, which resulted in the following
Fig. 3 Trustworthiness characteristics and threats [11, 12]
definition: “Trustworthiness is the degree of confidence one has that the system performs as expected. As shown in Fig. 3, characteristics include safety, security, privacy, reliability and resilience in the face of environmental disturbances, human errors, system faults and attacks” [11, 12].
2.2 European Activities on Building Trusted Infrastructures
The Industrial Digital Twin Association (IDTA) – founded in September 2020 in Germany under the umbrella of Germany's industry organisations – is the international focal point for this core technology and manages its international distribution. IDTA serves as the central point of contact for the digital twin – an alliance of active designers who work together to make the digital twin practical for industry using open technologies [10]. The industrial digital twin establishes a link between physical industrial products and the digital world and thus has to be considered a fundamental and integral technology component of Industry 4.0. With this evolution, digital twins are becoming the core element of digital value creation for industrial applications, and with the asset administration shell as an overarching concept, IDTA is creating an innovative, standardized, and open industrial digital twin framework. Manufacturers and users of industrial software and automation technology thus create the necessary prerequisite for more interoperability, more powerful software components, more “artificial intelligence”-based algorithms, and general innovation for smart manufacturing. The sovereign and secure handling of the data of industrial digital twins plays an outstanding role here; with the asset administration shell, IDTA is implementing an important industrial application for GAIA-X.
GAIA-X is a program initiated by Europe for Europe and beyond. Its aim is to develop common requirements for a European data infrastructure. Therefore openness, transparency, and the ability to access and consume data and services in all European countries without any vendor lock-in or violation of data regulation are central to GAIA-X. Representatives from several European countries and further international partners are currently involved in the initiative. In addition to the general framework for common data infrastructures, GAIA-X also covers the specific development and provision of trusted cloud-based infrastructures for manufacturing industries and thus leverages “digital” and “trust” as enablers of the much-desired sustainability in this important industry sector. As laid out in [8, 16], the value proposition of GAIA-X for digital industries and networked production can be summarized as follows:
• Without GAIA-X, shared production would not be possible for companies, or at least not so easy to implement. GAIA-X facilitates an open and modular approach to production, enabling companies to work more closely and transparently with each other across company boundaries in a production process. This makes it possible to manufacture new products in which individual companies can take over individual production steps.
• Furthermore, in addition to production partners, the company's own suppliers or end customers can be seamlessly connected and informed in real time about the production process. The control of the production and the value-added networks is data-driven, whereby the ownership rights to the data remain guaranteed.
• Already in traditional value chains, the seamless collection and federated processing of services and data open up a huge potential for new business models. The difference in a networked production setting is that this potential is enlarged by the formation of ad-hoc value-added networks, to each of which different business models can be applied as appropriate.
Within the first GAIA-X projects, a use case that has already been tested as a German-Dutch-Belgian demonstrator for a 3D production environment for individualised USB sticks is being extended to validate feasibility. Conceptually, the use case is an evolved implementation of Industry 4.0 in a factory or in a company by integrating machines and associated cloud services. The integration is carried out across company boundaries, whereby it is not machines but value-added partners that are constantly being integrated in order to fulfil a particular production order. 5G connectivity is added to this example of a trusted networked production to further enable the integration of remote cloud-based services, as illustrated in Fig. 4. Retrofitting of legacy machines, e.g. by deploying smart AI services from the edge, is also demonstrated as part of the project.
Fig. 4 5G connectivity is added to networked production to enable cloud based services
2.3 Collaborative Manufacturing Based on Data Sharing Along Value Chains
The use of external sources offered “as a service” (aaS) for the core industrial process steps such as design, modelling, production, and maintenance allows the dynamic and flexible creation of multiple industrial ecosystems, which will push the industry to the next level on its way to truly automated systems engineering and production. Transformational and disruptive collaborative manufacturing ecosystems will show the transformation of value creation, value distribution, and value delivery to customers, moving ahead from the “traditional value chain”, based on sequential value aggregation, to “ecosystem-based value creation”, proposing optimal cooperative value aggregation as shown in Fig. 5. In the industrial context, multiple arrangements and models can be considered, but for SMEs we will focus on the ones that can be implemented in the short term using solutions and providers already available in the market:
1. Manufacturing or Production as a Service (MaaS or PaaS): This is the service the supplier needs to implement to be able to offer manufacturing capacity to the market. The motivation is to offer unused spare capacity to increase overall efficiency. Aside from manufacturing, this approach is applicable to other sharable capacities: design, modeling, testing, maintenance, etc.
2. Order Controlled Supply Chain (OCP): for the flexible and accurate supply (purchasing) of the production needs. This is a first step on the way to order-controlled production. Supplies are used as offered, but the supply chain is automatically created and controlled by AI.
Fig. 5 Optimal cooperative value aggregation in ecosystems [19]
3. Cooperative production or manufacturing: where multiple plant capacities are managed jointly to optimize overall productivity and efficiency. The same approach is applicable to other industrial processes: design, modelling, testing, maintenance, etc.
2.4 Mobile Controlled Production – 5G for Digital Factories
The 5G working group at ZVEI, the “5G Alliance for Connected Industries and Automation” (5G-ACIA), published work in progress at the digital Hannover Fair 2021, introducing use cases and testbeds of 5G-enabled smart manufacturing and networked production [1, 2]. The German Industry 4.0 platform has published a whitepaper discussing the enabling potential 5G offers for use cases in production and logistics [14]. Substituting cables will lead to new machine design and engineering solutions in production and logistics, deploying smart edge and cloud services over wireless connectivity. Actors in the value chain may need different types of network features (quality-of-service parameters) and solutions, as illustrated in Fig. 6. The next chapter introduces how network components will become assets with their services described by administration shells. They can be treated as Industry 4.0 components, featuring vendor independence and easy integration with sensors, machines, and also with software components and end-to-end applications.
Fig. 6 Actors in the value chain merging telecom and manufacturing industries
Fig. 7 5G capabilities and services
3 State of the Art: Factories Go Wireless
3.1 Use Cases and Different Levels of Challenges
5G is the first mobile radio technology developed with the goal of enabling new services for the so-called vertical industries. The term vertical is used to define any market that has a specific and focused set of needs that differ from the traditional usage of mobile radio. Examples are manufacturing, construction, and health. New features have been introduced in order to support ultra-reliable low-latency communication (URLLC) of 1 ms and massive machine-type communication (mMTC) with millions of connected devices within a 1 km² area. It is important to understand that it is difficult to get both at the same time. Typically a triangle diagram is used to visualize the three main features of 5G, as shown in Fig. 7.
The 5G system architecture is defined by 3GPP and standardized in 3GPP TS 23.501. 3GPP has collected a comprehensive set of use cases and listed the relevant key performance indicators and requirements to fulfill the needs of the different industries, in particular the manufacturing and process industries. Industrial organizations like 5G-ACIA (5G Alliance for Connected Industries and Automation) have contributed to the specification of use cases defined in [18]. 5G aims to become the wireless information backbone of the factory of the future, providing connectivity for the factory shop floor, seamless integration of wired and wireless components for motion control (e.g. based on TSN, Time-Sensitive Networking), local and remote control-to-control communication, mobile robots and AGVs, as well as closed-loop control and remote monitoring for process automation. All services can be classified as Non-Real-time, Soft-Real-time, or Hard-Real-time, as explained in the 5G-ACIA whitepaper on “Key 5G Use Cases and Requirements” [3] and as illustrated by the sketch below. It has to be noted that all of these use cases and requirements are based on today's technologies and a vision of potential enhancements to be worked on in the next years. However, the technology in certain application areas is evolving very fast. A good example is AGVs (Automated Guided Vehicles). Typically an AGV is a very simple vehicle, with an embedded controller that receives instructions and waypoints over a wireless connection to move on a clearly defined path. New developments allow more and more sensors, like cameras or LIDAR systems, to be added to the AGV platform and connected to an edge computing unit. This retrofitting enables more flexible route planning and offloads the intelligence to the edge. This trend is accelerated by the availability of highly reliable communication systems like 5G, providing very deterministic closed-loop control delays of a few milliseconds. That said, the catalogue of KPIs has to be updated frequently, and the density of AGVs will increase further. One of the key requirements of the smart factory is the high demand for flexibility. AGVs will be upgraded with additional robot arms, generating a new class of devices called mobile robots. Mobile robots enable extreme flexibility for the initial planning and frequent updating of factories and production lines. This saves investment and enables new services like quality control during transport. 5G will also accelerate the usage of smart grippers equipped with additional sensors. Frequent exchange of the tools mounted on the robot arm does not require expensive data connectors or slip rings, thanks to the integrated 5G modems. It is an obvious challenge to integrate all of the new 5G-capable devices, machines, and platforms into the factory IT network and to manage the frequent changes enabled by the wireless connectivity.
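To make the service classification above concrete, the sketch below assigns a communication service to one of the three classes named in the 5G-ACIA whitepaper based on its required control cycle time; the thresholds and example use cases are illustrative assumptions, not values taken from the whitepaper.

```python
def service_class(cycle_time_ms: float) -> str:
    """Classify a communication service by its required control cycle time.
    Thresholds are illustrative assumptions, not normative 5G-ACIA values."""
    if cycle_time_ms <= 10:
        return "Hard-Real-time"     # e.g. motion control loops
    if cycle_time_ms <= 100:
        return "Soft-Real-time"     # e.g. mobile robot / AGV coordination
    return "Non-Real-time"          # e.g. condition monitoring uploads

for name, cycle in [("motion control", 1),
                    ("AGV fleet coordination", 50),
                    ("remote monitoring", 1000)]:
    print(name, "->", service_class(cycle))
```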
3.2 Factories Go Wireless
In the last section, AGVs and mobile robots were discussed as example use cases. There is no doubt that an AGV requires wireless connections. In general, there is an increasing demand for more flexibility in factories: machines should be usable more flexibly and be placeable wherever and whenever needed, without time-consuming planning and installation costs. Production facilities require rapid reconfiguration to produce different products or variants. Bosch expressed this in a press release from April 2021: “Adaptable and efficient: the factory of the future” [5]. Robots and machines shall be portable and easy to use. Each machine obviously requires power cables, so why not use data cables too? The main motivation for wireless links is the simplification of the deployment and maintenance of the communication infrastructure. Furthermore, data cables become more sensitive at higher data rates and less reliable with respect to mechanical stress, e.g. fast-moving and rotating robot arms or flexible, exchangeable grippers and front tools. Adding new sensors in order to upgrade machines without interfering with existing infrastructure is called “brownfield updating” or retrofitting. All these examples motivate factory operators to introduce wireless systems, because they enable new services and improvements in productivity.
In the previous section we focused on factories, but factories can be very different. A good example is chemical production plants, which can be seen as factories without roofs. In these large industrial complexes, with structures usually grown over decades, it is difficult to integrate new, wired systems, whereas it is very easy to connect devices wirelessly. At the same time, wireless communication is a particular challenge in these complex arrangements of metal tanks, connected pipes, and endless conveyor belts. Here again, 5G offers various solutions in which the reliability of the connections is guaranteed even if hundreds of thousands of devices are operated within a square kilometre. Safety is the highest priority here, and wireless video communication up to augmented reality helps to minimize the risk for humans.
In the medical sector, hospitals also show a great similarity to production in manufacturing industries when looking at demands and communication needs. Of course, humans are in focus here, but the technology necessary for the “repair” and “care” of humans is similar in its requirements. Complex machines in the operating rooms, the intensive care units, and the hospital wards need to be networked. Cables make it problematic to guarantee the required hygiene standards and require long conversion times if not every operating room and intensive care unit is to be equipped with all devices. In particular, the flexible use of wirelessly connected medical devices enables a new level of patient care.
It is common to all these industries that they place the highest demands on the availability of the system, and this includes the wireless communication. Availability also means that occurring errors are fixed in the shortest possible time, including troubleshooting, replacement of components, and reconfiguration or cloning of devices. This is only possible if it is precisely known which device, which software, and which service is
compatible with which others, together with exact and reliable knowledge of how a substitute behaves in the system. With regard to the millions of components, this can only be achieved with a uniform description in the form of the digital twin or AAS, supported by AI.
3.3 Flexible Communication Across Factory Hierarchies

5G allows more flexible communication through the embedded support for isolation of services on the basis of slicing. In addition, the distribution of communication end points can be flexibly defined depending on the requirements of the use cases, the factory operator and the communication relationships, as described in the 5G-ACIA white paper [4]. Communication links in 5G are characterized by the User Equipment (UE), the end point called UPF (User Plane Function), and so-called PDU (Protocol Data Unit) sessions, as shown in Fig. 8. The UPF can be seen as a network port that connects the 5G system with the data network of the factory (gateway). Each UE can set up up to 12 independent PDU sessions to the same or different UPFs. In addition, a PDU session can be served by different UPFs at the same time (e.g., for multihoming or redundancy purposes), the
Fig. 8 Flexible communication across factory layers by slicing
same UPF can support different PDU sessions for the same UE (each UE supports up to 12 PDU sessions as of Release 16), and different UEs can be served by different UPFs. 5G can thus be understood as a very flexible switching network with distributed ports. The 5G control functions (core network) guarantee the data transmission from the UE to the UPF and vice versa. All in all, 5G offers very flexible, highly secure and reliable routing of data to any place where a UPF is deployed. The colours in Fig. 8 show the communication links and how a single UE (e.g., in OT domain 2) communicates via redundant PDU sessions with the robot (red, UPF5a and UPF5b) on the shop floor of OT domain 2. An additional link or slice enables the exchange of status information and measurements with the enterprise level of the factory (blue, UPF1). On the one hand, it is obvious that all of this flexibility increases the value of 5G as a communication backbone; on the other hand, it requires powerful mechanisms for management and visualization. A suitable methodology has to be used in order to guarantee an accurate digital-twin representation of the entire factory, including the 5G system.
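The flexible UE-UPF relationships described above can be captured directly in a digital twin of the network. The following minimal Python sketch models the redundant PDU sessions of Fig. 8 as plain data structures; all identifiers, slice names and field names are illustrative and are not taken from the standardized 5G or AAS models.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PduSession:
    """One logical 5G connection between a UE and a UPF (network port)."""
    session_id: str
    ue_id: str
    upf_id: str
    slice_name: str                       # e.g. an OT or enterprise slice
    redundant_with: Optional[str] = None  # peer session used for redundancy

@dataclass
class NetworkTwin:
    """Very small digital-twin view of the 5G 'switching fabric'."""
    sessions: List[PduSession] = field(default_factory=list)

    def sessions_of_ue(self, ue_id: str) -> List[PduSession]:
        return [s for s in self.sessions if s.ue_id == ue_id]

# Redundant links of a mobile robot in OT domain 2 (illustrative IDs)
twin = NetworkTwin(sessions=[
    PduSession("s1", ue_id="UE-robot-7", upf_id="UPF5a",
               slice_name="ot-domain-2", redundant_with="s2"),
    PduSession("s2", ue_id="UE-robot-7", upf_id="UPF5b",
               slice_name="ot-domain-2", redundant_with="s1"),
    PduSession("s3", ue_id="UE-robot-7", upf_id="UPF1",
               slice_name="enterprise-monitoring"),
])

print([s.upf_id for s in twin.sessions_of_ue("UE-robot-7")])
# ['UPF5a', 'UPF5b', 'UPF1']
```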
3.4 The Life Cycle of an Industrial 5G System and Beyond

The section above gave an overview of the capabilities of the 5G system and of how slicing technologies can be used for secure and flexible communication across hierarchies. Factories are not static. Each production unit or production line has a life cycle with different phases, as depicted in Fig. 9 and explained in [7].
Fig. 9 Life cycle of a factory. (Source: 5G-ACIA)
A digital twin of the 5G network has to describe the 5G network in terms of its topology, communication links (PDU sessions), data sheets, documentation, network planning records and so forth. Not all information may be available in all phases of the life cycle, depending on who plans, deploys and operates the network. Factories, production lines or production cells are dynamically updated as required to introduce new products or redesigns or to improve efficiency. The frequency of such updates varies across industries. New approaches for more flexible production and greater product customization may shorten the intervals between production line updates. New machines and sensors will be added, making it necessary to update the communication networks. Different scenarios involving updates of and changes to the 5G system can be considered, for example upgrading the network to a new 3GPP 5G release, integrating new 5G-capable devices or retrofitting existing ones, or changing 5G network settings during operation in order to adapt quality-of-service requirements. This means that the information of a digital twin has to be updated during the factory's life cycle (across development, engineering, operation and maintenance to final scrapping and recycling). Any changes must be documented, and no references to other connected asset administration shells may be deleted. 5G itself is constantly being developed: functions are improved, their applicability is extended, and new capabilities are added. A good example is the positioning of goods in logistics, where precisions of less than 1 m are required. For the control of vehicles, the industry demands accurate positioning and distance determination of 20–30 cm within less than a second at extremely high reliability. This continuous development also requires a continuous further development of the digital twin of the 5G communication system. Every 10 years there is a new generation of mobile radio. For the successor of 5G, the combination of communication and environmental perception (sensing) is under discussion as a logical continuation on top of positioning, a dramatic extension that will allow completely new applications. For example, Huawei already demonstrated at the Hannover Fair 2021 that 6G allows highly accurate detection of objects and sensing of structures on the order of 1 mm in size, even within a paper box, as depicted in Fig. 10. This enables, for example, new methods for the quality control of packing machines and for warehouse management. Despite these very promising possibilities, it must be guaranteed that company investments in the communication infrastructure can be planned with time horizons of more than 10 years. Of course, the new capabilities must also be described in the digital twin. An interesting challenge for the standardization of the AAS arises when the submodel of the communication parameters also has to include the qualities of sensory perception as part of the mobile radio system.
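Because the factory and its 5G system change throughout the life cycle, every update of the network twin has to be documented without deleting references to other administration shells. The snippet below is a minimal, hypothetical sketch of such a change log; the property names, identifiers and phases are our own illustration and not part of any standardized submodel.

```python
from datetime import datetime, timezone

# Minimal twin record; real AAS submodels are far richer (see [6]).
network_twin = {
    "topology": {"upfs": ["UPF1", "UPF5a", "UPF5b"]},
    "documents": ["network-planning-record-v3.pdf"],
    "linked_shells": ["aas://robot-7", "aas://line-2-plc"],  # references are never deleted
    "change_log": [],
}

def record_change(twin: dict, phase: str, description: str) -> None:
    """Append a documented change; existing references stay untouched."""
    twin["change_log"].append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "life_cycle_phase": phase,   # e.g. engineering, operation, maintenance
        "description": description,
    })

record_change(network_twin, "operation",
              "Upgraded core to a new 3GPP release; QoS profiles re-validated")
record_change(network_twin, "engineering",
              "Retrofitted two AGVs with 5G modems; added PDU sessions")
print(len(network_twin["change_log"]))  # 2 documented changes
```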
Fig. 10 Integrated Sensing and Communication (ISAC) demonstrated at the Hannover Fair 2021
3.5 The Digital Twin as Enabler of Sustainability

In a global context, the digital twin concept becomes an enabler of the circular economy, as the tracking of substances contained in materials becomes possible. Life cycle data chains with trusted and unified descriptions based on the principles of the AAS allow targeted support of each individual partner in the supply chain to design parts in such a way that the final product or the final process is as resource-friendly and energy-efficient as possible. Last but not least, further benefits such as the reduction of waste and the minimization of overheads and duplicated effort increase the "overall equipment effectiveness" (OEE) of "ecosystem-based value creation", supporting optimal cooperative value aggregation as depicted in Figs. 1 and 5. In full consequence, this means that not only each element of the supply chain but also the transport systems themselves have to be modelled as digital twins: transport systems that encompass physical systems such as vehicles, trains and the necessary infrastructure, but also data transport networks that connect the different production sites down to the end user. Logistics and production, physical and digital, have started to merge. The life cycle definition may therefore be updated to include the entire spectrum: the extraction of raw materials, operating materials and auxiliary materials, production, operation, different reuse cases, reconfigurations, recycling and disposal. The life cycle and the material chains for production according to IEC 62890, as well as the aspects of life cycle assessment (according to ISO 14040 and ISO 14044), should be compliant and usable, with data in all phases being accessible and kept. This includes the assets' data after delivery and end of life, the so-called "digital
aftermath" (or "afterlife" or "after-effects"). These data will be added to the life cycle so that AI-based tools are enabled to use current and historical data for ongoing learning, to be applied in future sustainable engineering tasks within ecosystem-based value creation.
4 Outlook on Self-Optimising Networked Factories

As explained in the previous sections, 5G becomes the wireless communication infrastructure backbone of the smart factory and will enable more flexible communication and more frequent changes of the production environment and setup. With reference to [9, 15], and as already described in the preceding sections, an asset administration shell is a key component of the Smart Factory and of the Industry 4.0 architecture for ensuring integration across system boundaries and interoperability across value chains. It supports the notion of working with digital twins of all of the factory's assets, including the communication systems.
4.1 A 5G DT Enabled Active Asset Administration Shell Model

There are multiple possibilities to define an AAS for a complex and dynamic 5G system. It can be understood as a powerful, highly secure network switch with distributed network ports: one type of port represents the 5G UE (User Equipment), and a second type is the so-called UPF (User Plane Function). A 5G system should be modelled as an AAS at the level that best suits the functional purpose for which 5G will be deployed in a future factory. In addition, all kinds of physical, virtual (e.g., computing platforms or software in general) or contractual roles have to be considered in the definition of an AAS submodel representing the 5G system. This certainly also includes life cycle aspects. In a recently published white paper, 5G-ACIA proposed two different types of AAS [6]. One covers all information, parameters and properties relevant for the 5G UE and is called the 5G UE AAS. A 5G UE (5G user equipment) is a functional part of a 5G-capable device and is integrated in an industrial wireless modem or device. As shown in Fig. 11, the 5G UE AAS consists of a passive part, an active part, and a message interface that supports Industry 4.0-compliant communication. This structure is aligned with the definition of the AAS principles in [13]. A second submodel, called the "5G Network AAS", contains the most relevant information required to build a digital twin of the 5G system as such, including operational parameters and information about the network topology and the logical links as a list of connections and associated parameters. Furthermore, a section is introduced in the 5G Network AAS that contains all updates of the network performance monitoring of all communication links, in particular whether the quality-of-service requirements are fulfilled. Hence, all of this information and these properties build a
Fig. 11 5G UE AAS as proposed in [6]
Fig. 12 Hierarchical 5G Network AAS submodel and sources for updates during the life cycle [6]
complete digital twin of the production lines and of the entire factory, providing a complete view of the details of the deployed 5G system as well as frequently updated performance monitoring. Different stakeholders along the different phases of the life cycle are in charge of maintaining the 5G Network AAS, as depicted in Fig. 12, and in a similar way also the 5G UE AAS.
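To make the structure of such a submodel more concrete, the sketch below lays out the kind of information a 5G Network AAS carries as nested properties. The element names and values are our own illustration and do not reproduce the normative 5G-ACIA definitions in [6].

```python
# Illustrative layout of a "5G Network AAS" submodel; names are examples only.
five_g_network_aas = {
    "idShort": "5GNetworkAAS-Factory-Line2",
    "operational_parameters": {
        "frequency_band": "n78",
        "spectrum_license": "local licence, 3.7-3.8 GHz",
    },
    "topology": {
        "upfs": ["UPF1", "UPF5a", "UPF5b"],
        "cells": ["gNB-1", "gNB-2"],
    },
    "logical_links": [
        {"ue": "UE-robot-7", "upf": "UPF5a", "qos_profile": "URLLC-5ms"},
        {"ue": "UE-robot-7", "upf": "UPF5b", "qos_profile": "URLLC-5ms"},
    ],
    # Continuously refreshed by the performance-monitoring section
    "performance_monitoring": {
        "UE-robot-7/UPF5a": {"latency_ms": 4.1, "qos_fulfilled": True},
        "UE-robot-7/UPF5b": {"latency_ms": 4.6, "qos_fulfilled": True},
    },
}

# A consumer (e.g. an engineering tool) can check QoS fulfilment per link:
print(all(link["qos_fulfilled"]
          for link in five_g_network_aas["performance_monitoring"].values()))
```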
4.2 Enablers for a Continuous Update of the AAS or DT Models

Digital twins are enablers of holistic and efficient engineering of new production systems. They allow the simulation of all process steps and of how the machines have to be orchestrated to guarantee maximum performance and the highest quality of the produced products. Accurate modelling and a detailed description of all of the used assets and of their static and dynamic behaviour are the foundation of highly accurate simulation. The proposed 5G UE AAS and 5G Network AAS provide a detailed description of the 5G system, of how it is deployed and of how the connections are configured. Dynamic simulations will derive the required quality-of-service settings for each of the wireless logical connections during the engineering phase. This also includes updates of the factory and the exchange of the used 5G components, e.g., 5G-capable devices. All stakeholders are in charge of maintaining the properties defined in the 5G AAS submodels. Interfaces to the planning tools and the construction database are the enablers of highly accurate models. The 5G system, in particular the 5G core network and the 5G management system (5G OSS), has to get access to the servers where the 5G AAS submodels are stored in order to maintain the defined properties. Some information is rather static and does not change very frequently, such as the used frequency band or the granted spectrum license. Other parts have to be updated regularly or triggered by defined events. Reporting the status of the communication links, continuously monitoring the key performance indicators and updating the AAS via suitable interfaces creates a live view of the entire factory system, including the wireless communication network.
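As a rough illustration of such an interface, the snippet below pushes a link-quality report from the 5G management system to the server hosting the AAS submodels. The endpoint URL, payload layout, thresholds and the use of the requests package are assumptions made for this sketch; they are not part of any standardized AAS API.

```python
import requests  # pip install requests (assumed available)
from datetime import datetime, timezone

# Hypothetical endpoint of the server storing the 5G Network AAS submodel
AAS_SERVER = "https://aas.factory.example/submodels/5g-network/performance"

def report_link_kpis(ue: str, upf: str, latency_ms: float, loss_rate: float) -> None:
    """Push one monitoring sample so the twin reflects the live network state."""
    sample = {
        "ue": ue,
        "upf": upf,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "latency_ms": latency_ms,
        "packet_loss_rate": loss_rate,
        # Example QoS check against illustrative URLLC targets
        "qos_fulfilled": latency_ms <= 5.0 and loss_rate <= 1e-5,
    }
    resp = requests.post(AAS_SERVER, json=sample, timeout=5)
    resp.raise_for_status()

# Called periodically (or event-triggered) by the 5G OSS
report_link_kpis("UE-robot-7", "UPF5a", latency_ms=4.2, loss_rate=2e-6)
```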
4.3 Self-Optimizing Factories and Production Units

Asset administration shells were originally meant as basic information pools, but recently this concept has evolved further. Commands and information calls to I4.0 components can now be triggered by manufacturing execution systems (MES) or an enterprise resource planning (ERP) system. The notion of an active (or interactive) AAS is becoming an important element for building dynamic, self-organizing, self-optimizing, and cross-company value-added networks [8]. Assuming a deep integration of the 5G system into the factory management system, an active 5G Network AAS and an active 5G UE AAS could interact with the active AAS submodels of the factory machines to negotiate quality-of-service requirements. Continuous reporting and monitoring of the radio links and of the production entities would trigger new negotiations to ensure error-free and resilient production.
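A minimal sketch of such an interaction is given below: a machine AAS requests a latency bound, and the active 5G Network AAS either grants a matching profile or counter-offers the closest profile it can currently guarantee. The negotiation logic, profile names and load model are purely illustrative assumptions.

```python
# Available QoS profiles of the (illustrative) active 5G Network AAS,
# keyed by profile name with their latency bound in milliseconds.
AVAILABLE_PROFILES = {"URLLC-5ms": 5.0, "URLLC-10ms": 10.0, "eMBB-50ms": 50.0}

def negotiate_qos(requested_latency_ms: float, current_load: float) -> str:
    """Return the profile granted to the requesting machine AAS.

    Under heavy load a more relaxed profile is counter-offered, so the
    machine AAS can re-plan its task or retry the negotiation later.
    """
    headroom = 1.0 if current_load < 0.8 else 2.0  # degrade gracefully under load
    for name, bound in sorted(AVAILABLE_PROFILES.items(), key=lambda p: p[1]):
        if bound >= requested_latency_ms * headroom:
            return name
    return "eMBB-50ms"  # fallback counter-offer

print(negotiate_qos(4.0, current_load=0.5))  # grants 'URLLC-5ms'
print(negotiate_qos(4.0, current_load=0.9))  # counter-offers 'URLLC-10ms'
```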
4.4 Standardization of 5G Network AAS and 5G UE AAS

Asset administration shells are subject to standardization by IEC TC65 WG24. Some of the submodels and properties have to be standardized in order to ensure vendor-independent support of digital twins. It is the task of the standardization bodies in IEC and 3GPP to define the set of mandatory properties and the set of vendor- and operator-specific properties. Some other parts might be kept flexible to enable new innovative services. With 5G, so-called private or non-public networks (NPN) were introduced and defined by standardization in 3GPP. Different models are to be distinguished, as described in the 5G-ACIA white paper [13]. On the one hand there is the completely private network that is under the full control of, e.g., the factory owner; on the other hand there are the variants in which network services are provided by a mobile operator. Depending on the business model and operator model, different levels of control of the 5G network are also provided by the network operator. This has, of course, a strong influence on the information and control possibilities provided through the 5G Network AAS. Here too, IEC standardization is required to take the different variants of the operator models into account. It should be noted that 5G standardization is a continuous process and that new features will be added in upcoming releases. The 5G AAS definitions have to be updated along with the 5G releases.
4.5 Trusted Cloud Infrastructures and Common Data Spaces

In the previous sections, we laid out the importance of 5G as a deterministic wireless communication bearer technology. We explained the role digital twins play in enabling production as a service and mapped the concept of digital twins to 5G as an integral part of future smart factories. We also mentioned trusted architectures and explained how 5G relates to them. Another aspect of trusted architectures and networked production beyond connectivity are common data spaces, in combination with the "cloudification" of data and service assets and the underlying cloud-edge infrastructure. The realization of digital twins as a fundamental enabler requires not only powerful compute and storage capabilities (both at the edge and in the cloud), but also interoperability and portability of data and services. To be able to seamlessly access and process data that is federated across multiple trust domains and company boundaries, the concept of Common Data Spaces was introduced several years ago [10]. These data spaces create a federated trust domain based on common security, identity management and data governance principles. Services can be accessed through a unified service and application catalogue. All participants of a data space can create and consume federated services and data.
This concept is going to be further developed through the already introduced GAIA-X initiative as a Common Project of European Interest (CPEI) and as an integral part of the European Alliance on Industrial Data and Cloud that is being formed. Going beyond the value proposition of GAIA-X introduced in Sect. 2.1, the recently established GAIA-X association, the so-called Gaia-X AISBL, is an international non-profit association (French: association internationale sans but lucratif, in short: AISBL) under Belgian law. As written in (GAIA-X, 2021), "the association has been founded with the goal to develop the technical framework for the sovereign data infrastructure and operate the Gaia-X Federation services. It works hand in hand with the Gaia-X Hubs, which give users a voice at the national level, and the open-source software Community in which everyone is welcome to participate […]". The use case introduced in Sect. 2.1 highlights another important aspect which goes beyond sovereign data infrastructures and common data spaces but is equally important for the adoption of digital twins and networked production: the use of federated artificial intelligence for control and (predictive) maintenance, as well as for match-making of demand and supply, auctioning and contract provision, implies new policies and system assessment criteria related to fairness, explainability, accountability, robustness, reliability, and safety of AI, as listed in Table 1. These criteria have to be mapped to digital twins and applied in an internet of production. Various expert groups have been looking into the definition of an Assessment List for Trustworthy Artificial Intelligence (ALTAI), see e.g. (AI, 2021).

Table 1 Elements of Trustworthy AI

The federation of data and services as active assets within a networked production ecosystem also requires an extension of the asset administration shell. While in the past the focus could be placed more on hardware, firmware and connectivity-related aspects within the shop floor or physical factories and their physical environment, virtual factories and their digital twins require the software dimension and cloudification to be considered more and more. Work related to this is ongoing in the corresponding industry associations, in particular the ZVEI working group "IT in Automation". A white paper on the software aspects of the asset administration shell is targeted for publication in the second half of 2021. Clearly, digital twins can be considered at various levels, such as the Production Twin, Product Twin, Performance Twin, User Twin, 5G Twin, etc., and will lead to operational performance gains and increased resiliency. Considering the cloud as an embedded part of the (eco)system of production as a service requires new
approaches for the integration of enterprise data across the information technology (IT) and operational technology (OT) domains. The cloud will have to integrate into all steps of the manufacturing life cycle and offer holistic, trustworthy data management. The cloud enriches the picture with business-focused data logic and offers the required platform services for the shop floor for actual smart manufacturing. Services such as maintenance or other value-added services can make use of AI and analytics cloud features, while product development with the product twin and automated configuration will benefit from dedicated high-performance compute clusters. The increasing abstraction and scale at the manufacturing level, e.g., production design based on production twins, remote monitoring and asset management, can be supported through an integrated cloud-edge IoT platform with rich and powerful data ingress, data lake/storage, and advanced analytics capabilities. Digital twins for the manufacturing sector at all levels will thus be truly enabled through the cloud-edge continuum as a crucial component of an open, trustworthy and sovereign data infrastructure. The various activities in that space, driven by the EU and national governments in close interaction with industry and academic institutions, put Europe at the forefront of the future of production. With digital twins as a cornerstone, networked production will soon become not only reality, but also an important part of a vibrant data ecosystem for Europe.
References

1. 5G-ACIA. (2021a, March 4). 5G-ACIA. Retrieved from 5G Alliance for Connected Industries and Automation: www.5g-acia.org
2. 5G-ACIA. (2021b, April 27). Live stream Hannover Fair 2021. Retrieved from https://www.hannovermesse.de/apollo/hannover_messe_2021/obs/Binary/A1089033/5G-ACIA_HMI21_Flyer_Exhibitor%20Livestreaming.pdf
3. 5G-ACIA. (2018). 5G for connected industries and automation – 2nd edition. 5G-ACIA. Retrieved from https://www.5g-acia.org/publications
4. 5G-ACIA. (2019). Integration of industrial Ethernet networks with 5G networks. 5G-ACIA. Retrieved from https://www.5g-acia.org/publications
5. Diedrich, C. (2018). Sprache für I4.0-Komponenten – Semantik der Interaktionen von I4.0-Komponenten.
6. 5G-ACIA. (2021, April). Using digital twins to integrate 5G into production networks. 5G-ACIA. Retrieved from https://5g-acia.org/ersources/whitepapers-deliveries
7. Eichinger, G. V. (2019). Mit 5G zu neuen Potentialen in Produktion und Logistik. In Handbuch Industrie 4.0. Springer Vieweg.
8. Federal Ministry for Economic Affairs and Energy (BMWi), Germany. (2021, April 27). Shared production. Retrieved from https://www.bmwi.de/Redaktion/EN/Artikel/Digital-World/GAIA-X-Use-Cases/shared-production.html
9. Plattform Industrie 4.0. (2020, July). The background to Plattform Industrie 4.0. Retrieved from www.plattfrom-i40.de/PI40/Navigation/EN/ThePlatfom/Background/background.html
10. IDTA. (2021, April 28). Industrial Digital Twin Association. Retrieved from https://idtwin.org/
11. IIC. (2021a, April 27). IIC – Trustworthiness. Retrieved from Trustworthiness Framework: https://www.iiconsortium.org/pdf/Trustworthiness_Framework_Foundations.pdf
12. IIC. (2021b, April 27). Industrial Internet Consortium. Retrieved from https://www.iiconsortium.org
13. 5G-ACIA. (2019). 5G non-public networks for industrial scenarios. 5G-ACIA. Retrieved from http://5g-acia.org
14. PI4.0. (2021, April 27). German Plattform Industrie 4.0. Retrieved from https://www.plattformi40.de/PI40/Redaktion/EN/Downloads/Publikation/mobil-gesteuerte-produktion.html
15. Plattform Industrie 4.0. (2020). Details of the asset administration shell – Part 1, version 2.0.1. Federal Ministry for Economic Affairs and Energy (BMWi).
16. SFKL. (2021, April 27). SmartFactory-KL. Retrieved from GAIA-X project SmartMA-X: https://smartfactory.de/en/gaia-x-project-begins-in-kaiserslautern-2/
17. The Royal Swedish Academy of Sciences. (2020, October 12). The Nobel Prize. Retrieved from Press release: The Prize in Economic Sciences 2020: https://www.nobelprize.org/prizes/economic-sciences/2020/press-release/
18. 3GPP. (n.d.). TS 22.104 – Service requirements for cyber-physical control applications in vertical domains.
19. ZVEI. (2021, April 27). German Electrical and Electronic Manufacturers' Association. Retrieved from Artificial Intelligence in Automation: https://www.zvei.org/presse-medien/publikationen/ai-in-industrial-automation-white-paper

Dr. Götz Philip Brasche, CTO Cloud BU R&D Europe / Director Intelligent Cloud Technologies Laboratory, is responsible for joint innovation activities with key partners and customers in Europe and steers the research and development of Huawei's Cloud, IT and data center product portfolio and the underlying software platform technologies. Huawei, with its more than 180,000 employees and nearly 80,000 R&D staff, is a world leader in mobile communication technologies, smartphones and IT products and solutions. The ERI, as part of the corporate R&D organization, is Huawei's central "innovation engine" in Europe, with 20 locations in 8 countries and more than 1800 employees. The customer-centric R&D activities in Europe ensure that Huawei's products and solutions meet the particular needs of the European market. Dr. Brasche holds a master's degree in Computer Science with a minor in Business Administration and a Ph.D. in Electrical Engineering. Prior to joining Huawei, Dr. Brasche held various management and research positions at Microsoft and Ericsson.
Josef Eichinger joined Huawei Technologies in 2013 to strengthen the 5G Research team in Munich. He started his professional career as a technical expert in the field of industrial energy and electronic systems. After his studies he joined Siemens AG in 1994, where he worked on the development of high-frequency radar systems and optical networks and as a researcher on radio technologies such as HSPA and LTE. He moved to Nokia Siemens Networks in 2007 as LTE Product Manager and was head of the LTE-Advanced FastTrack Programs. Currently he is leading research on 5G-enabled industrial communication at the Huawei Munich Research Center.
His focus is on 5G and the next generation of mobile radio for Industry 4.0, vehicle-to-vehicle communication and vertical domains in general. Complementary to the research and standardization work, he is also responsible for proving the new concepts in trials and live experiments. Since April 2018 he has also been a member of the 5G-ACIA steering board and has been leading the Huawei delegation in 5G-ACIA.

Prof. Dr. Juergen Grotepass graduated from RWTH Aachen, holding a PhD in Machine Vision and Machine Learning. He has worked for the automotive and automotive supplier industry in leading positions for more than 25 years, with broad experience in managing innovation, R&D and large industry projects related to automation, robot vision and smart production. The implementation of digitization strategies for adaptive production lines and Industrie 4.0 brownfield consulting has been his field of expertise since 2014. Since 2014 he has been with HUAWEI Technologies Duesseldorf GmbH, European Research Center (ERC) in Munich, responsible for brownfield consulting for manufacturing lines and greenfield consulting to build up a smart process lab in Germany. Since April 2020 he has been acting as Chief Strategy Officer (CSO) Manufacturing, promoting key technologies for manufacturing industries, i.e., AI and 5G for upscaling smart factories. Juergen Grotepass has been an honorary professor at Tongji University (CDHK) in Shanghai since 2005. His annual lectures are on robot vision solutions and smart data approaches in current shop floor environments and future smart factories. Juergen Grotepass is the elected chair of the ZVEI Working Group on AI for Industrial Automation, created in 2021 (AI:IA at www.ZVEI.org). Juergen Grotepass, April 2021
Integration of Digital Twins & Internet of Things

Giancarlo Fortino and Claudio Savaglio
Abstract With the rise of the Internet of Things (IoT), the Digital Twin (DT) concept has come across newfound lifeblood. The rapidly growing volume and breadth of data that can be captured, processed and forwarded by smart devices through IoT-related technology indeed represent a key enabling factor for making DTs finally ready for prime time, beyond the bounded confines of the manufacturing domain. Conversely, the value that DTs add to the management, development and commercialization of IoT systems can have a disruptive impact on the whole ICT landscape of the next few years, further bridging the physical-virtual divide. This promising yet challenging synergy is the topic of this chapter, in which both conceptual and practical solutions for the integration of DT and IoT are presented and the state of the art of the main DT-aided IoT Platforms is reviewed.

Keywords Internet of Things · IoT platforms · Digital twins

G. Fortino
Department of Computer, Modeling, Electronic, and System Engineering, University of Calabria, Rende, Italy

C. Savaglio (*)
Institute for High Performance Computing and Networking (ICAR), Italian National Research Council (CNR), Rende, Italy
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_8
1 The Internet of Things (IoT)

Recent technology advancements are driving the evolution of many physical systems into cyber-physical systems of systems [1], at different scales and levels. The IoT is leading this transformation by providing a plethora of solutions for augmenting everyday objects with sensing, actuation, communication and networking capabilities, aimed at the provision of new-generation cyber-physical services for both human users and computing systems. In particular, starting from the 2000s,
advancements in wireless connectivity, radio frequency technology and micro-electro-mechanical systems, just to name a few, have fostered (i) the development of wireless sensors, RFID systems, micro-controllers and micro-processors so cheap and small that they can fit into consumer electronic devices and (ii) their subsequent integration in broader systems (like enterprise resource planning and supply chains), in synergy with artificial intelligence, signal processing, distributed computing paradigms and big data [2]. Hence, within only two decades, the IoT in its many forms has deeply changed our life like few other technologies, which is why it is considered the leader of the fourth industrial revolution. The latest IoT numbers [3] speak for themselves: a constantly increasing number of billions of IoT devices already deployed around us (e.g., within smart homes, smart cars, smart factories) and upon us (i.e., wearable devices, smart textiles), a market of almost one trillion dollars that seems untouched by any crisis (including the recent COVID-19 pandemic), and worldwide acknowledgment from leading companies and institutions, which invest significant budgets in the development of IoT enabling technologies (5G, Wi-Fi 6, augmented reality, etc.). Such an "explosion" of technology under the IoT umbrella originally motivated researchers to approach this paradigm from three main perspectives, i.e., thing-, internet- and semantic-oriented, to eventually ascertain that the full realization of the IoT lies exactly at the convergence of these different visions [4]. Indeed, object visibility (i.e., the traceability of an object and the awareness of its status, current location, etc.), ubiquitous networking (for achieving the well-known "anytime, anyplace, anything" connectivity) and semantic approaches (for representing, storing, organizing, and automatically processing, searching and exchanging information) equally contribute to the development of an IoT ecosystem where human users and machines, with instilled autonomic and cognitive skills, seamlessly interact within an information continuum [5]. The Digital Twin (DT) is certainly one of the paradigms (like software agents, model-based engineering and grid computing) that have discovered newfound lifeblood and application fields with the emergence of the IoT. Before the present hype, DTs arose in the aerospace industry as highly specialized informational/numerical replicas of physical assets (satellites, space rockets, etc.) that are not directly monitorable and/or physically inspectable. The introduction of the IoT and the remote sensing and actuation capabilities it enables have evolved and widened both (i) the original DT concept, transforming the aforementioned mirror-image of a dedicated hardware component into a live model that mimics the behavior and the functionalities of a real-world object, and (ii) the DT extent, adding to the multi-physics simulation goals also the contextual interaction with the cyberphysical assets, their environment and their users, as well as the full-fledged management of the historical data of such contingent interactions. IoT data, technology and protocols have thus empowered DTs and fostered their evolution with the contribution of other relevant paradigms like AI, big data, etc. Indeed, the most advanced examples of DTs, also called Cognitive Twins or Intelligent DTs, are defined just in the context of IoT
Platforms, leveraging their connectivity, analytics, simulation and visualization tools. Conversely, the value digital twins can bring to the IoT is also relevant, with notable impacts on the design, prototyping and commercialization of current and future IoT devices and services. These aspects are key, since the IoT's inherent heterogeneity, large scale and complexity typically hinder developers and organizations from fully achieving the IoT vision. Curiously, so far this aspect has remained mostly unexplored, since researchers have mainly focused on applying IoT technology to the DT realm and have underestimated the huge benefits coming from a bidirectional relationship in which both paradigms match their respective strengths and weaknesses, as depicted in Fig. 1. The rest of the work is organized as follows. In Sect. 2 we show how the IoT represents a key enabler for concretely supporting important DT functionalities like identification, modeling, connectivity, smartness, cyberphysicality and usability. In Sect. 3 we review ten of the mainstream IoT Platforms, from both industrial and cloud vendors (e.g., Google, Amazon, Microsoft, PTC, Hitachi, SAP), that already provide DT-based solutions, framing the state of the art through a comparative framework based on the building blocks of an abstract IoT platform. In Sect. 4, instead, we focus on the potential of DT-based solutions for advancing the development of the IoT ecosystem, thus outlining the bidirectional links between IoT and DT. Final remarks gathering the main insights, pros and cons, and future research directions about the integration of the IoT and DT paradigms conclude the work.
Fig. 1 IoT & DT
2 IoT for DT

In August 2020 (https://www.iiconsortium.org/press-room/08-20-20.htm), within the context of the Object Management Group, the Digital Twin Consortium started working on a taxonomy and on standards for DT-enabling technologies, including the IoT. Indeed, DTs have rapidly found applications in some specific IoT-related domains like the Industrial IoT (IIoT) and Smart Manufacturing, where the so-called "product-service hybrid" has led to marked improvements in terms of business intelligence, effectiveness and efficiency. The well-established IoT background on information modelling, communication protocols, development methodologies and technologies, however, can lead DTs beyond the aforementioned application areas, with benefits for the whole ICT landscape and society. Indeed, as reported in the following paragraphs of this section and summarized in Table 1, the IoT represents a key enabler for DTs, concretely supporting their identification, modeling, connectivity, smartness, cyberphysicality and usability through a rich set of heterogeneous solutions.

Table 1 Important DT functionalities enabled by technologies related to the IoT domain

IoT-enabled DT functionality | IoT-related technology
DT Identification | IRI, URI, URN, EPC, IPv6, RFID tag
DT Modelling | IPSO, SensorML, OCF, SSN, OWL
DT Serialization | XML, SOAP, JSON, Flatbuffers, OPC UA Binary, CBOR, Protobuf
DT Connectivity | NB-IoT, LoRaWAN, Sigfox, Bluetooth, 802.11.*, ZigBee, MQTT, CoAP, AMQP, XMPP
DT Smartness | Reinforcement Learning, Neural Networks, Data Fusion, Data Analytics, Software agents
DT Cyberphysicality | OGC SensorThings API, Webinos API, XEP-0323, OGC Sensor Web Enablement (SWE)
DT Usability | Microservices, Containers, NODE-RED, Crosser, EdgeCloudSim, NS-3, ThingsBoard, Freeboard.io

DT Identification Identifying entities across the boundaries of highly distributed and densely deployed systems is challenging both in the DT and in the IoT realm. An asset identifier links the DT and its physical counterpart, but the DT itself also requires an identifier in order to be referable and accessible by applications and services; likewise, there are billions of cyberphysical resources, ranging from sensors to complex IoT devices, which need to be identified in the IoT before being actually interconnected. Over the years, the lack of a common identification scheme has led to a variety of different types of identifiers. In the IoT domain, on the basis of the specific requirements of devices and applications, Internationalized Resource Identifiers (IRI), Uniform Resource Identifiers (URI), Uniform Resource Locators (URL) and
Uniform Resource Names (URN) [6, 7] have been used for identifying logical/physical devices as well as service-layer applications. IP addresses can also serve this purpose, if we accept an identifier that might change with the physical location of the object. In detail, IPv6 is expected to be the base of the IoT by providing 10³⁸ possible addresses and, hence, identities. However, to be applied to those tiny, resource-constrained IoT devices that were not designed to implement IP in the first place, IPv6 needs to be extended [8]. For example, to integrate the nodes of personal area networks into IP networks, and ultimately into the whole IoT, the IETF has proposed 6LoWPAN (IPv6 over Low power Wireless Personal Area Networks), a solution that foresees a URI and/or IP address to locate and connect to these small devices around us. Alternatively, radio-frequency identification (RFID) tags of 64–96 bits based on the Electronic Product Code (EPC) can be carried as the payload of an IPv6 packet by means of different techniques [4]. Finally, domain-specific naming systems (DNS) have been developed by many IoT frameworks for creating ad hoc directories and registries: these support more efficient and safer intra-domain node discovery, without exposing the nodes directly to the Internet [9]. The reported approaches to IoT device identification can be used for DTs exactly as they are; beyond identity, however, a variety of additional information about an IoT device/DT needs to be abstracted and modeled, motivating the need for comprehensive information models.
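As a small illustration of how these identifier schemes coexist, the snippet below attaches several identifiers to the same asset/DT pair. The concrete values (EPC, address, URLs and URNs) are invented for the example and are not tied to any real deployment.

```python
# One physical asset and its digital twin, carrying several coexisting
# identifiers (all values are invented for illustration).
asset_identifiers = {
    "epc": "urn:epc:id:sgtin:0614141.107346.2017",    # RFID/EPC tag content
    "ipv6": "2001:db8::ae21:7",                       # network-level identity
    "uri": "https://plant.example/devices/press-42",  # resolvable locator
}

twin_identifiers = {
    # The DT itself needs its own identifier to be referable by services
    "urn": "urn:example:dt:press-42",
    "linked_asset": asset_identifiers["epc"],         # link to the physical twin
}

print(twin_identifiers["urn"], "->", twin_identifiers["linked_asset"])
```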
DT Representation/Information Modeling Information modelling is widely acknowledged as the cornerstone of the subsequent development processes of design, implementation, simulation and deployment; therefore, over the years, different models focused on IoT devices and their data have been proposed in the IoT scenario, often developed in the context of international initiatives. These models allow capturing, at different levels of granularity, both the static and dynamic properties of IoT devices as well as their physical and virtual features. The availability of these models allows querying and exchanging IoT devices' data without losing meaning and context. Such rich expertise can be usefully exploited for DTs, whose development demands high-fidelity virtual models of the physical environment that change in real time as the physical environment changes. Indeed, exactly as a DT clones an IoT device, a DT model overlaps an IoT device model in many functionalities and goals. Location, physical dimensions, current operational status (on, off, busy, idle) and ownership are just some of the features of an IoT device that can be of interest for a DT. Some well-known standards for modeling simple physical objects, such as sensors and actuators, are IPSO (IP for Smart Objects), the Sensor Model Language (SensorML) and OCF (Open Connectivity Foundation) [10, 11]. Mostly related to the IIoT, they describe the same set of real-world physical objects (even through slightly different terminology) and resort to software abstractions inspired by the object-oriented approach. Due to these characteristics, they are particularly suitable for part and product twinning, while system and process twinning demand more comprehensive models. In this direction, ACOSO-Meth [12], IEEE P2413 [13] and AIOTI [14] allow describing more complex IoT devices and whole IoT systems, including human users and the IoT services they consume. In particular, these are described from a wider perspective, often by means of metadata-based models for the sake of extensibility, specifying fine-grained features as well as behavioral specifications and articulated classification/hierarchical relationships. These models, albeit full-fledged and fine-grained, can be further shaped to promote their semantic interoperability and/or to enable automatic reasoning upon them: to this end, machine-understandable and unambiguous ontologies are required, ideally aligned with the semantic web standards. Semantic Sensor Network (SSN), Next Generation Service Interfaces-Linked Data (NGSI-LD) and IoT-Lite, widely used for modeling key IoT concepts, can be properly extended to formally represent a DT in the context of IoT and CPS [10, 15]. In current practice, however, complex ontologies are rarely used in favor of information models which embed just a few machine-readable semantic descriptors [16]: this consideration holds for the majority of computer engineering fields, including the IoT and DT domains.

DT Serialization Alongside the modeling of IoT devices/DTs, their features and relationships, data-interchange formats and serialization mechanisms that are as lightweight and interoperable as possible have pivotal importance [17]. In this direction, XML, SOAP and JSON (JavaScript Object Notation) have been widely used in the IoT domain as concise, human-readable, easily parseable and structured serialization syntaxes for exchanging, at Web scale, general-purpose data in string-oriented payloads. Thanks to these benefits and this versatility, one of the few foundational standards in the DT domain, the Digital Twin Definition Language (DTDL, https://github.com/Azure/opendigitaltwins-dtdl), is based on an implementation of JSON. However, DTDL supports only predefined semantic annotations (i.e., extensibility is not given) and is hard to apply to particularly resource-constrained IoT devices or to IoT applications demanding automatic data processing and binary encodings. In those situations, OPC UA Binary and Flatbuffers and, above all, CBOR and Protobuf represent more efficient data formats for speeding up, parsing and standardizing operations on simple binary data, which are also frequent in DT interactions (sending a switch on/off command, getting the current temperature value, etc.). CBOR (Concise Binary Object Representation, https://www.rfc-editor.org/rfc/rfc8949.html) is an IETF binary data serialization format (RFC 7049) loosely based on JSON, making data interchange possible without casting a schema in concrete. These features provide extensibility, velocity and simplicity to activities related to the exchange and processing of simpler data (encryption keys, graphic data, or sensor values), exactly matching the needs of very simple, inexpensive IoT devices and of time-critical IoT applications. Protobuf (https://developers.google.com/protocol-buffers/docs/reference/proto3-spec) also focuses on binary encodings for resource-constrained devices but, in addition, provides cost-optimized data forwarding mechanisms to the cloud. This is especially beneficial for IIoT devices bound to mobile broadband connections in the absence of Ethernet/WiFi. Thanks to these features, CBOR and Protobuf have recently found success also in the DT realm.
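To give a feel for the difference between the text and binary formats mentioned above, the sketch below serializes the same small DT state update with JSON (standard library) and CBOR, assuming the third-party cbor2 package is available; the payload fields are illustrative.

```python
import json
import cbor2  # pip install cbor2 (assumed available)

# A small, illustrative DT state update
update = {"twin": "urn:example:dt:press-42", "temperature": 73.4, "on": True}

as_json = json.dumps(update).encode("utf-8")  # human-readable text encoding
as_cbor = cbor2.dumps(update)                 # compact binary encoding

print(len(as_json), "bytes as JSON")
print(len(as_cbor), "bytes as CBOR")          # typically noticeably smaller
```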
DT Connectivity Effective connectivity allows the physical and information models to be synchronized and the DTs to faithfully mirror their real-world counterparts. The plethora of network/message protocols paved by the IoT can be exploited to preserve the integrity of the DT models and their data according to the specific requirements in terms of bandwidth and energy consumption, latency, overhead, coverage, data rate, wired/wireless transmission, etc. Within the domain of network protocols, Ethernet Time-Sensitive Networking (TSN) is the absolute leader in legacy Ethernet-based connectivity for the IIoT, while NB-IoT, LoRaWAN, Sigfox, Bluetooth, 802.11ah, 802.11n, ZigBee and Z-Wave are all mainstream IoT-oriented wireless technologies (for a comparative review of IoT connectivity, see [18–20]). In the case of interoperability issues due to the adoption of different technologies, IoT gateways [21, 22] have already proved their effectiveness, also relying on resource-constrained and mobile devices. For the message protocols, instead, MQTT, CoAP, AMQP and XMPP are the most popular IoT-oriented solutions for providing end-to-end communication without an Internet connection (see [23]). As for the IoT gateways, several IoT middlewares have been developed for the sake of interoperability, and they can be successfully integrated in any DT architecture, also with additional functionalities (e.g., management). Besides protocols, proper infrastructures, i.e., cloud, edge and cellular networks (4/5/6G), have to be set up to further support the aforementioned requirements and to provide additional built-in mechanisms for network partitioning, network latency, privacy and security. Security in particular is an aspect asking for attention, since the valuable DT information flow must be protected to prevent theft or interruptions. In the IoT scenario, different security mechanisms have been designed to automatically secure cloud/edge devices, to guarantee message confidentiality, and to regulate mutual authentication and the access to sensitive information, to name a few. Such mechanisms can be straightforwardly applied to protect DT connectivity, according to the needed security level.
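As a minimal connectivity example, the following sketch publishes a twin state update over MQTT with the widely used paho-mqtt client; the broker address, port, topic and TLS settings are placeholders chosen for illustration only.

```python
import json
import ssl
from paho.mqtt import publish  # pip install paho-mqtt (assumed available)

state = {"twin": "urn:example:dt:press-42", "temperature": 73.4, "on": True}

# One-shot publish of a twin state update to a placeholder broker/topic.
publish.single(
    topic="twins/press-42/state",
    payload=json.dumps(state),
    qos=1,                                   # at-least-once delivery
    retain=True,                             # new subscribers get the last state
    hostname="broker.factory.example",
    port=8883,
    tls={"cert_reqs": ssl.CERT_REQUIRED},    # encrypt the link
)
```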
DT Smartness The interplay between AI and the IoT has provided remarkable results in the past few years. Indeed, advanced but lightweight techniques have been taken and adapted from well-established fields like machine learning (ML), data analytics and software agents for processing IoT data while simultaneously meeting the IoT devices' design constraints (limited computational resources, limited energy, limited price) and the application business goals (responsiveness, accuracy, precision, etc.) [24]. In the IoT domain, for example, there is a well-established background on data fusion over multimodal heterogeneous sources, reinforcement/deep learning, neural networks, knowledge graphs, etc. Suitable for both small- and large-scale systems, these techniques enable, for example, human activity recognition in BSNs, optimal duty cycles for Smart City sensors, QoS-aware task offloading strategies for the Internet of Vehicles, anomaly detection for IIoT devices, and energy load and price predictions in smart grids, just to name a few. DTs can leverage such a background for processing, locally or remotely, the wide sea of historical and sensory data and for extracting actionable insights, namely operations of practical use for optimizing production. But, going beyond the "traditional" intelligence tasks addressed towards predictive maintenance and business continuity, the intelligent solutions developed in IoT contexts can additionally provide adaptiveness and autonomy to DTs, enabling unsupervised object and pattern recognition, adaptive user interfaces (sensitive to and continuously updated with the preferences and priorities of the operators) and decision-making support under uncertainty in complex and noisy environments, just to name a few. Moreover, exactly as IoT devices have found great synergy with software agents [25], DTs too can be successfully "agentified" for achieving common complex goals through coordination and cooperation [26]. DTs provided with such cognitive and autonomic skills are sometimes called "Cognitive Twins", "Intelligent DTs" or "Next-generation DTs" [27–29], because they enable the development of innovative products and services and pave the way towards novel research directions in the field.

DT Cyberphysicality Sensors and actuators allow bridging the gap between the virtual and physical worlds and closing the Sense-Plan-Act innovation loop. This holds both for IoT devices and for DTs. Sensor and Actuator Networks (SANs) represent an essential element in any IoT system, constituting the so-called "perception layer" upon which all IoT architectures are built. The advancements in MEMS, wireless technology and materials have paved the way toward the development of tiny, cheap but precise sensors and actuators that can be embedded and/or deployed almost everywhere. Geographical position, atmospheric elements (temperature, humidity, wind) and physical quantities (speed, acceleration, rotation, voltage), just to name a few, can be sensed, and a variety of cyberphysical operations (message displaying, sound emission, LED blinking, mechanical actions, etc.), aimed at humans or devices, can be performed. For easily accessing sensors/actuators and exchanging their data over different IoT networks, APIs like OGC SensorThings and Webinos IoT as well as specifications like XEP-0323, OGC Sensor Web Enablement (SWE) and IETF SenML have been developed and applied in commercial and industrial IoT platforms. Sensors and actuators, therefore, can be considered a cornerstone for making IoT systems actually situated and context-aware. DTs rely on this broad experience in the IoT domain as well as on the availability of sensors/actuators for gathering data from physical assets and manipulating their operating conditions and performance in real time. In particular, if provided with IoT APIs, DTs can access sensors and actuators in a transparent and secure way, implementing the concept of "Device as a Service" [30].
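A rough sketch of such transparent access is given below: a twin refreshes one of its properties by reading the latest observation of its physical counterpart through an OGC SensorThings-style REST request. The server URL, datastream id and the use of the requests package are assumptions made for this example.

```python
import requests  # pip install requests (assumed available)

# Hypothetical SensorThings-style endpoint exposing the asset's datastreams
BASE = "https://iot.plant.example/v1.1"

def latest_observation(datastream_id: int) -> float:
    """Fetch the most recent observation for one datastream."""
    url = f"{BASE}/Datastreams({datastream_id})/Observations"
    resp = requests.get(
        url,
        params={"$top": 1, "$orderby": "phenomenonTime desc"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["value"][0]["result"]

# The DT mirrors its temperature property from the physical sensor
temperature = latest_observation(42)
print("mirrored temperature:", temperature)
```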
DT Usability Successful IoT systems have usability in common as one of their cornerstone principles. Lots of dashboards, simulators, UIs and other tools have been designed to support the different stakeholders (engineers, managers, operators, users) and provide an intuitive overview of the current and future status of an IoT device. Given the abundance of data flowing into DTs as well as the variety of possible operations, usability covers a key role also for DTs and should be considered, by design, as a first-class property. Indeed, history shows that one of the main reasons for technology failures or limited spread is precisely poor usability. Microservices and containers are possible candidates [26] for programmatically supporting the integration and composition of several IoT devices but also of DTs, automating their deployment and simplifying the design of their complex services. To the same end, tools like NODE-RED and Crosser adopt a data-flow-based visual programming approach for easily wiring together hardware devices. In detail, NODE-RED adopts a web-based approach and presents an extensive set of libraries for interfacing mainstream online services, while Crosser is specifically tailored to IIoT scenarios and provides more utilities for large-scale deployments and machine learning support, as well as built-in security and debugging mechanisms. These are just two of the mainstream platforms aimed at simplifying IoT programming, at speeding up expert developers, and at including stakeholders with different expertise in the IoT system design. Given the multidisciplinary approach demanded by DT development, this aspect is worthy of attention, otherwise production/operation and design/modeling teams will work in silos. Simulators like EdgeCloudSim, DPWSim and NS-3, instead, allow testing the behaviour and performance of IoT systems deployed upon different infrastructures and according to different setups [31]. They allow designing different application and network configurations, preliminarily evaluating different metrics, and experiencing unexpected situations: in this way, virtual testbeds, even large-scale ones, can be rapidly set up at no cost, while the actual, expensive and time-consuming IoT system deployment is carried out only after extensive simulations. These simulators can also be successfully exploited for assessing the validity and predictability of DT models in a variety of conditions. Finally, advanced dashboards, monitoring tools and widgets allow intuitively visualizing raw or synthetic data from real/simulated system operation at different granularities, thus enabling simple monitoring activities or complex in-depth analyses. ThingsBoard, Freeboard.io and NODE-RED UI are just some examples of IoT dashboards, and their user communities are very wide and active. The relevance of these instruments, although exclusively devoted to the presentation task, is indeed pivotal; in real-world applications, the lack of a direct and intuitive feedback loop between a consumer and an IoT device/DT drastically reduces the overall usability.
3 IoT Platforms for DT

The IoT-enabled DT functionalities theoretically outlined in Sect. 2 are concretely implemented within IoT Platforms [32]. These fully support stakeholders in the scalable, efficient and secure management (i.e., connection, access, protection, analysis, visualization) of heterogeneous (IoT, but not exclusively) data and devices,
aiming to streamline development processes and provide greater value to business. To this end, an increasing number of IoT Platforms have started offering support for the creation, integration and exploitation of DTs along their entire life (from the initial DT model tuning to the periodic synchronization given by the entanglement with the physical assets), leveraging the rich data, analytics tools and connectivity mechanisms provided by the IoT Platform itself. Indeed, according to recent research and market studies, "75% of Organizations Implementing IoT Already Use Digital Twins or Plan to Within a Year" and "up to 89% of all IoT Platforms will contain some form of Digital Twinning capability by 2025", with "61% of companies that have implemented digital twins have already integrated at least one pair of digital twins with each other". Such a trend is particularly pronounced in the IIoT domain, where "over 92% of vendors recognize the need for IIoT APIs and platform integration with digital twinning functionality", which is not surprising. In the following we briefly review, without any claim to being exhaustive (see [33] for a more comprehensive list), ten of the most important IoT Platforms which already integrate DTs and provide specific instruments for their development and support. Then, from the lessons learned from this analysis of very heterogeneous IoT Platforms and by pooling their commonalities, we provide an application-domain-neutral reference architecture of an abstract DT-oriented IoT Platform, and we use its main technology-agnostic building blocks, also depicted in Fig. 2, as the criteria for outlining the comparative framework of IoT Platforms reported in Table 2.
3.1 DT-Oriented IoT Platforms

The landscape of IoT platforms grows wider every day, so in the following we briefly review the ten most relevant ones, provided both by cloud and by industrial vendors.
Fig. 2 IoT platform for DTs
215
Table 2 IoT platforms and DTs

Amazon Web Services (AWS)
  Connectivity layer: MQTT, WebSocket, HTTPS, TLS; Amazon EventBridge, AWS Gateway
  Data & Resource layer: auto load balancing, auto-scaling, multi-zone backup
  Application layer: Amazon QuickSight, AWS IoT Things Graph Data Model, 3rd-party simulators
  DT support: Basic

Microsoft Azure
  Connectivity layer: MQTT, AMQP, WebSocket, REST, OPC, HTTPS, TLS; +350 3rd-party connectors, Azure IoT Edge
  Data & Resource layer: Azure Stream Analytics, Azure Machine Learning; Azure IoT Hub, Azure Sphere, Advisor, Monitor
  Application layer: Twin Builder, DTDL, Azure IoT Device Simulation, Azure Digital Twins Graph Viewer
  DT support: Advanced

Bosch IoT Suite
  Connectivity layer: HTTP, MQTT, AMQP 1.0, LoRaWAN, TLS; Bosch IoT Gateway
  Data & Resource layer: Bosch IoT Insights, Bosch IoT Things, Eclipse Ditto
  Application layer: Bosch IoT Rollouts, 3rd-party visualization tools
  DT support: Advanced

Siemens MindSphere
  Connectivity layer: MQTT, OPC, REST, HTTPS, TLS; APIs and Gateway for IIoT
  Data & Resource layer: Predictive Learning Analytics, Cloud Foundry, Siemens Policy
  Application layer: Visual Explore tool, MindSphere Closed Loop System, CAD modeling
  DT support: Advanced

GE Predix
  Connectivity layer: MQTT, SCADA
  Data & Resource layer: Predix Edge/Cloud for Analytics, Predix Event Hub, Cloud Foundry
  Application layer: Essentials, Event Console, GE Digital APM System Reliability Analysis, 3rd-party simulators
  DT support: Intermediate

Google
  Connectivity layer: MQTT; Gateway
  Data & Resource layer: Google Cloud ML
  Application layer: Data Studio
  DT support: Basic

Hitachi
  Connectivity layer: Operational Technology Gateway, IT Gateway
  Data & Resource layer: Lumada Analytics, Hitachi Data Hub, Docker, Kubernetes
  Application layer: Hitachi Visualization Suite, NODE-RED
  DT support: Advanced

IBM Watson
  Connectivity layer: MQTT, REST, HTTPS, TLS; APIs and IBM Secure Gateway
  Data & Resource layer: IBM Cloud Bluemix, IBM Watson Analytics, Blockchain, IBM policy
  Application layer: IBM Digital Twin Exchange, Asset Monitoring
  DT support: Advanced

SAP
  Connectivity layer: MQTT, REST, HTTPS, TLS; SAP Cloud open connectors, SAP Gateway
  Data & Resource layer: SAP HANA, SAP Analytics, Cloud Foundry
  Application layer: SAP Fiori, SAP Cloud Platform Cockpit, SAP IoT Simulator, Thing Modeler
  DT support: Intermediate

PTC ThingWorx
  Connectivity layer: MQTT, REST, HTTPS, TLS; 3rd-party API
  Data & Resource layer: ThingWorx Analytics, 3rd-party utilities
  Application layer: Mashups, Vuforia, ANSYS Simulator
  DT support: Intermediate
Featuring different degrees of comprehensiveness, targeted at different stakeholders, and aimed at pursuing different goals (e.g., reliability, security, legacy and ecosystem integration for IIoT platforms; scalability, customization, support and cost for the general-purpose ones), these IoT Platforms are very heterogeneous with respect to one another. Therefore, we first provide a high-level presentation of their main features and then focus in particular on the DT solutions they provide, which range from simple Device Twins/Device Shadows for the management of device status to fully functional Cognitive Twins/Digital Threads for advanced analytics and simulations.

Amazon Web Services (AWS) Is an IoT platform based on the Amazon cloud, providing a broad and deep catalogue of IoT services, from the edge to the cloud, targeted at industry, consumers, and commerce. Such a large audience demands wide support for different connectivity protocols, storage technologies and AI-related tools, as well as DTs to virtualize heterogeneous assets and keep track of their current/past status and performance. To this end, AWS supports a variety of DTs: basic Amazon "Device Shadows" are mainly focused on device management (interconnection to the AWS Cloud, status reporting and identification over MQTT), while more complex Digital Threads provided by third parties (e.g., Vertex) can be easily integrated within the rich Amazon ecosystem according to customer needs.

Azure Is a ubiquitous IoT platform from Microsoft for modeling and analyzing the interactions between people, spaces, and devices. Real-time data processing and integration are the focus of Azure, which provides an advanced stream analytics tool and a very rich catalog of certified connectivity protocols, software connectors for enterprise systems, data formats and data models. The "Replicas" are DTs developed by Microsoft and located at the middle of the whole Azure IoT stack: indeed, DTs are created for each IoT device connected to Azure IoT Hub so as to abstract their status (by using properties, tags and metadata), manage them (synchronization, software updates, multi-tenancy, etc.) and visualize them (through the Azure Digital Twins Graph Viewer). DTs' models, capabilities and interfaces towards the rest of the Azure suite are provided as SaaS and described by the Digital Twins Definition Language (DTDL), which is based on open W3C standards, such as JSON-LD and RDF, allowing for easier adoption across services and tooling.

Bosch IoT Suite Is an IoT platform providing advanced connectivity options and application services for manufacturing companies, supporting both software and custom gateways, different connectors, analytics tools and asset managers. For the sake of interoperability, machines, customer applications and services are virtualized
by means of DTs, whose connection, back-end management and orchestration are handled in the cloud through Eclipse Ditto, a framework developed by Bosch itself. Ditto can be seen as an open-source foundational layer of the Bosch IoT platform that provides APIs to monitor DTs' status (the "twin channel") and to send them commands (the "live channel"), thus turning real-world devices into on-cloud or on-premises services.

Google Cloud IoT Core Leverages Google's service ecosystem for developing and deploying IoT solutions on its cloud platform. It provides a fully managed service for interconnecting devices through mainstream industry standards and performing downstream analytics by operating on Google's serverless infrastructure, which automatically scales in response to real-time events. As for AWS, the native support for DTs is limited and exclusively based on MQTT or REST endpoints; existing IoT and AI components (e.g., the Google Cloud ML engine with TensorFlow) with custom development and configuration should be used to deploy and train DTs so that customers can connect, store, and analyse data in the cloud and at the edge.

Lumada Is a Hitachi IoT platform, available in both on-premises and cloud-based options, that collects, refines, organizes, and integrates massive amounts of heterogeneous data generated at production sites, stores them in a data lake, and then provides them to analysis applications. With such an abundance of data, the entire production line as well as its related events can be re-created, simulated and assessed in a digital space by means of the Hitachi digital twin solution. It integrates the stored data and rich metadata from all the manufacturing processes and machines, and then provides the data to analytics, visualization and simulation tools for achieving continuous industrial improvements, without requiring expert knowledge of production operations.

Predix The IoT platform from General Electric (GE) offers a rich catalog of composable services and analytics tools catering to the needs of the manufacturing industry in big data analysis, anomaly detection, pattern and trend recognition, etc. DTs are fully integrated within Predix, designed as microservices, implemented through REST APIs, and integrated with both Predix Analytics and Predix Asset to access metadata about the assets and build analytics (supporting the adjusting, filtering, merging, storing, and management of data, mainly for remote monitoring and predictive maintenance, in accordance with customer-defined Key Performance Indicators and business objectives). Indeed, through analytics and DT capabilities, usable business insights are extracted for automating workflows, through interfaces to customers, and for integrating with the control system in a customer's facility.

MindSphere Is Siemens' cloud-based, open IoT operating system, providing functionalities comprehensive enough to be considered an IoT platform. It integrates, in real time, industrial data, devices and physical infrastructures in the cloud through a variety of secure protocols and APIs, and it provides big data analysis and mining, industrial applications, and value-added services for embracing the Industry 4.0 concept. DTs are pivotal in this process of business transformation, providing
important support to predictive learning and to simulation (MindSphere Closed Loop System Simulation) for easily assessing the impact of DT models on assets and vice versa.

SAP Cloud Platform Supports the SAP Leonardo digital innovation system in securely ingesting large volumes of IoT data, connecting things to people and processes, and providing insights using machine learning, edge and big data analytics by means of an intuitive user interface. In particular, SAP Leonardo implements DTs to enrich the IoT data within the business context of the asset and operator (all based on a unified semantic model) and to support integration processes at different levels, i.e., "Twin-to-device integration" (e.g., mirroring and entanglement), "Twin-to-twin integration" (composability and augmentation), and "Twin-to-system-of-record integration" and "Twin-to-system-of-intelligence integration" (i.e., memorization and data analysis).

ThingWorx Is an IoT platform by PTC whose mission is to harvest IIoT/IoT data, facilitate the development of customized data analytics models, and deliver valuable insights to users via an intuitive, role-based user interface. Industrial protocol conversion, management of large fleets of devices and integration with ERP systems are ThingWorx's main functionalities, and they are fully supported by Digital Twin models, designed by means of graphical tools and embedded in MATLAB (Simulink), SimulationX and ANSYS tools. These simulation tools promote fast development of DTs, especially in IIoT and AR contexts, and allow turning data into actionable intelligence, thus enabling companies to design smarter products and to perform predictive maintenance.

Watson IoT Platform From IBM, allows managing large-scale systems through the data collected from millions of IoT devices, in real time and according to sophisticated security policies. The platform has several add-on features: cloud-based facilities, visual data analytics, edge capabilities, blockchain services and, of course, Digital Twins. Each asset connected to the platform is managed through IBM Maximo and can be transformed into a DT by attaching both real-time and historic sensor data to it. Once this is done, the platform's wide cognitive and predictive capabilities can be leveraged to provide enterprise intelligence and operational models for the company's assets. In particular, the IBM Digital Twin Exchange allows content providers to easily showcase their DTs and customers to browse, purchase and download them, much as movies or apps are offered on marketplaces. This is a strategic operation which can also speed up the development and adoption of DTs.
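As a concrete illustration of the simplest class of DT support mentioned above, the sketch below shows how an AWS IoT Device Shadow could be updated and read with the boto3 SDK; it assumes AWS credentials are already configured locally, and the region and the thing name "pump-42" are placeholder assumptions rather than values taken from this chapter.

import json

import boto3

# Data-plane client for AWS IoT Device Shadows; the region is an assumption
iot_data = boto3.client("iot-data", region_name="eu-west-1")

# Report the current state of a hypothetical thing named "pump-42"
reported = {"state": {"reported": {"temperature": 71.3, "status": "RUNNING"}}}
iot_data.update_thing_shadow(thingName="pump-42",
                             payload=json.dumps(reported).encode("utf-8"))

# Read the shadow back; a "delta" section, if present, lists gaps between desired and reported state
response = iot_data.get_thing_shadow(thingName="pump-42")
shadow = json.loads(response["payload"].read())
print(shadow["state"])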
3.2 An Abstract IoT Platform for DT In the following we outline a three-layer reference architecture for an abstract IoT Platform, domain- and technology-agnostic, that integrates and provides support to DTs. Such an architecture is in line with those provided in [26, 32, 34, 35] and is particularly aimed at implementing the IoT-enabled functionalities reported in Sect. 2. Its main building blocks, moreover, are used as criteria to compare, in Table 2, the ten IoT Platforms previously surveyed, which would otherwise hardly overlap due to their heterogeneity. User applications, third-party services, local (Edge/Fog) devices as well as remote Cloud servers can interact at both the Connectivity and Application layers, while the internal data flow passes through the Resource & Data Layer, where analytics and management operations take place. In more detail:
– Connectivity Layer: here the IoT Platform provides pre-built connectors and (REST/SOAP) APIs, a suite of open and standard protocols (HTTP, MQTT, etc.) and gateway solutions aiming to connect, locally or remotely but always in a secure way, heterogeneous assets and third-party services. This foundational layer and its IoT-oriented technologies are key for DT cyberphysicality, by enabling the interfaces with sensors and actuators deployed in the real world, and for DT connectivity, by enabling DT synchronization as well as the integrity of its models.
– Resource & Data Layer: here the IoT Platform provides key analytics features (neural networks, pattern tracking, regression analysis, etc.) as well as policy-based management features aimed at both resources (e.g., load-balancing strategies, horizontal and vertical scalability support, multi-zone hosting) and data (backup-restore-recovery strategies, access rights, data formats, etc.). This layer stands in the middle between physical devices and applications, and its IoT-oriented technologies are key for DT smartness, with AI techniques and other data-analysis methods providing insights from data; for DT identification, by means of identity schemas customized to specific needs; and for DT usability, by providing effective infrastructures for automatically handling the complex management of the IoT ecosystem.
– Application Layer: here the IoT Platform provides high-level tools aimed at the modeling, visualization and simulation of its data and resources, including DTs. In particular, this layer and its related tools are key for DT representation, by providing information models ranging from 3D CAD to semantic descriptions, and for DT usability, by providing tools, ranging from advanced human-machine interfaces to basic dashboards or GUIs, to intuitively display all the data and insights generated by real/simulated assets and processes and to interact with them.
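Purely as a hypothetical skeleton, not drawn from any specific platform in Table 2, the snippet below sketches how the three layers of this abstract architecture could be wired together in Python; every class, method and field name is an assumption introduced only for illustration.

from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class ConnectivityLayer:
    """Secured connectors/protocol adapters towards devices and third-party services."""
    def ingest(self, protocol: str, message: Dict[str, Any]) -> Dict[str, Any]:
        # Normalize an MQTT/HTTP payload into a platform-internal event
        return {"protocol": protocol, **message}


@dataclass
class ResourceDataLayer:
    """Storage, policies and analytics sitting between devices and applications."""
    events: List[Dict[str, Any]] = field(default_factory=list)

    def store_and_analyze(self, event: Dict[str, Any]) -> Dict[str, Any]:
        self.events.append(event)
        # Placeholder "analytics"; a real platform would apply ML or rule engines here
        return {"insight": f"{len(self.events)} events received from {event.get('device')}"}


@dataclass
class ApplicationLayer:
    """Modeling, visualization and simulation tools exposed to the user."""
    def render(self, insight: Dict[str, Any]) -> None:
        print("DASHBOARD:", insight)


# Wiring the layers together for a single telemetry message
conn, data, app = ConnectivityLayer(), ResourceDataLayer(), ApplicationLayer()
event = conn.ingest("mqtt", {"device": "sensor-1", "temperature": 22.4})
app.render(data.store_and_analyze(event))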
4 DT for IoT If the IoT enables the key DT functionalities reported in Sect. 2, it is also true that DTs can be used to address different issues faced by IoT developers and organizations, so as to significantly decrease the complexity of IoT ecosystems while increasing efficiency. Indeed, DTs simplify IoT solution development by providing easy access to the data and features of a device and by providing additional services around it, bringing hardware and software engineers as well as data scientists and other IT professionals together around the DT concept.

IoT Device Abstraction Over the years, different abstractions (software agents, actors, services) have been developed to decouple the main features and functionalities of IoT devices from low-level specifications. Along this line, Part-Product-System-Process Twins allow modeling an IoT device (from its single physical components to the services it provides) or a whole IoT system, hiding technical details and adopting different lenses. Such a customizable degree of abstraction, on the one hand, simplifies the understanding, documentation, explanation and sharing (i.e., organizational interoperability) of the behavior of a specific device or system; on the other hand, it allows applications to interact with device(s) in a consistent manner, thus enabling DTs to behave like a "device proxy" and to implement a "device-as-a-service" paradigm (a minimal sketch of this pattern is given at the end of this section).

IoT Data Encapsulation DTs can be designed as hubs for gathering all the relevant data related to an IoT device or IoT service and generated along its entire life (also solving the issue of limited storage). Such a holistic view of IoT data, simultaneously available in a common data lake and homogeneously formatted, tackles the problem of data fragmentation, typical of embedded IoT devices and distributed systems, improves situational awareness, simplifies the exploitation of data analytics as well as the training of AI models, and, finally, fosters integration into broader systems.

IoT Remote Management DTs decouple data and services from their physical providers (simple sensors, actuators or IoT devices), thus acting as a proxy. In this way, DTs can remotely update, control and secure IoT devices through their online interface. Business units can interact directly with the DT instead of the asset, while developers can easily change, replace, or evolve individual parts (e.g., further augmenting an IoT device with new sensors, introducing a novel application) with minimal interventions. This possibility has particular benefits for large-scale IoT scenarios or difficult deployments where physically interacting with the assets is costly, dangerous, or even impossible. For example, at any time and remotely, IoT devices can be shut down due to cybersecurity risks or provided with new firmware.

IoT Device Prototyping The use of DTs thrives on the rapid collection, aggregation and analysis of data from connected technologies, thus streamlining the whole IoT production chain, including the research, design, set-up, running and validation phases. In this way, the prototyping phase can be achieved in an efficient and
effective way, without building any physical version of the asset. This latter aspect is particularly attractive to businesses, since continuous testing on the physical model of a system has proved to be a costly and weak prototyping approach. Conversely, by simulating a DT that benefits from its real-time data, data scientists and other IT professionals can optimize deployments for peak efficiency across multiple processes, reduced downtime and an extended asset lifetime.

IoT Commercialization DTs enhance product traceability processes. Most of the data about an IoT device's operational condition and performance, even when not privacy-sensitive, are accessible only to the end user and not to the manufacturer. DTs and the Digital Thread facilitate the gathering of those data from an IoT device to support its commercialization, opening new business models pivoted on the sale of asset-related operational data or on the definition of pay-for-performance compensation. In this way, a DT becomes added value for an IoT device, enabling service differentiation and important new revenue streams which, in turn, provide competitive advantages for both manufacturers and customers.
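The following minimal sketch, again hypothetical and not taken from any specific platform, illustrates the "device proxy"/"device-as-a-service" pattern referred to above: the twin encapsulates reported and desired state, keeps a lifelong history of telemetry, and mediates remote management commands; all names (DeviceTwin, send_command, etc.) are invented for the example.

import datetime
from typing import Any, Callable, Dict, List, Tuple


class DeviceTwin:
    """Minimal device proxy: encapsulates state and history, and mediates commands."""

    def __init__(self, device_id: str, send_command: Callable[[str, Dict[str, Any]], None]):
        self.device_id = device_id
        self._send_command = send_command          # transport towards the real device (assumed)
        self.reported: Dict[str, Any] = {}          # last state reported by the device
        self.desired: Dict[str, Any] = {}           # state requested by applications
        self.history: List[Tuple[datetime.datetime, Dict[str, Any]]] = []  # lifelong data encapsulation

    def on_telemetry(self, data: Dict[str, Any]) -> None:
        """Called whenever the physical device reports data (entanglement)."""
        self.reported.update(data)
        self.history.append((datetime.datetime.utcnow(), dict(data)))

    def set_desired(self, **properties: Any) -> None:
        """Applications interact with the twin, not with the device itself."""
        self.desired.update(properties)
        self._send_command("configure", properties)  # remote management, e.g. config or firmware


# Usage with a fake transport standing in for MQTT/HTTP
twin = DeviceTwin("valve-7", send_command=lambda cmd, args: print("->", cmd, args))
twin.on_telemetry({"pressure_bar": 4.2})
twin.set_desired(setpoint_bar=3.5)
print(twin.reported, twin.desired)

Applications would interact only with such twin objects, while the callback passed as send_command would be backed by whatever transport (MQTT, REST, a platform API) actually reaches the device.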
5 Conclusion The principle of twinning physical assets is over 50 years old, but the true rise of DTs coincided with the IoT. Indeed, on the one hand, the plethora of IoT solutions has enabled the main DT functionalities, solving the technological and infrastructural issues that had limited the spread of DTs so far; on the other hand, the wide sea of heterogeneous IoT devices, data and services requires adequate abstraction, encapsulation, prototyping, management and commercialization solutions that can be successfully pivoted on the DT concept. In this chapter we have shed light on the relationships between DT and IoT, surveying both the conceptual synergies and the playground on which they find concrete implementation, i.e., the IoT Platforms. The first main finding of this analysis is that the existing overlap between DT and IoT is bound to increase to the same extent as the physical-digital divide blurs, making an IoT device, in some respects, no longer clearly separable from its DT. Secondly, in such an entanglement process, the servitization of IoT devices facilitated by DTs and by semantics will have absolute (both economic and commercial) relevance, motivating big players to support standardization activities for the sake of the highest interoperability and integration. However, also for the IoT & DT duo, the "one size fits all" principle does not apply. The rapid evolvability of IoT devices and the human involvement in their operations make similar products behave differently, thus demanding highly customized DT models. Moreover, choosing the proper IoT solution among the chaotic abundance of available technologies, protocols, standards, etc., as well as choosing the DT integration pattern that best matches the implementation needs of an IoT device/system/service, can be challenging and requires wide domain knowledge to achieve the expected results. Finally, in
the case of the simplest IoT devices and services, DTs might prove to be technological overkill, introducing additional, unnecessary complexity and non-trivial concerns (e.g., cost, security, privacy, data quality), thus complicating instead of simplifying the IoT development process. Exactly as for any integration of paradigms, some preliminary considerations are therefore necessary, but there is no doubt that senior executives, IT directors and technical leaders in general can leverage DTs in their digital business journey across the IoT-driven world, and vice versa.
Acknowledgments This work was supported by the "COGITO—Sistema dinamico e cognitivo per consentire agli edifici di apprendere ed adattarsi" Project, funded by the Italian Government under Grant ARS01 00836, and by the Italian MIUR, PRIN 2017 Project "Fluidware" (CUP H24I17000070001).
References
1. Fortino, G., Savaglio, C., Spezzano, G., & Zhou, M. (2020). Internet of things as system of systems: A review of methodologies, frameworks, platforms, and tools. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 51(1), 223–236.
2. Gubbi, J., Buyya, R., Marusic, S., & Palaniswami, M. (2013). Internet of Things (IoT): A vision, architectural elements, and future directions. Future Generation Computer Systems, 29(7), 1645–1660.
3. Internet of Things (IoT) market – growth, trends, COVID-19 impact, and forecasts (2021–2026). https://www.mordorintelligence.com/industry-reports/internet-of-things-moving-towards-a-smarter-tomorrow-market-industry. Accessed 30 Apr 2021.
4. Atzori, L., Iera, A., & Morabito, G. (2010). The Internet of Things: A survey. Computer Networks, 54(15), 2787–2805.
5. Qihui, W., Ding, G., Yuhua, X., Feng, S., Zhiyong, D., Wang, J., & Long, K. (2014). Cognitive Internet of Things: A new paradigm beyond connection. IEEE Internet of Things Journal, 1(2), 129–143.
6. Aftab, H., Gilani, K., Lee, J. E., Nkenyereye, L., Jeong, S. M., & Song, J. S. (2020). Analysis of identifiers in IoT platforms. Digital Communications and Networks, 6(3), 333–340.
7. Sahlmann, K., Scheffler, T., & Schnor, B. (2018). Ontology-driven device descriptions for IoT network management. In 2018 Global Internet of Things Summit (GIoTS) (pp. 1–6). IEEE.
8. Liu, C. H., Yang, B., & Liu, T. (2014). Efficient naming, addressing and profile services in Internet-of-Things sensory environments. Ad Hoc Networks, 18, 85–101.
9. Hesselman, C., Kaeo, M., Chapin, L., Claffy, K., Seiden, M., McPherson, D., Piscitello, D., McConachie, A., April, T., Latour, J., et al. (2020). The DNS in IoT: Opportunities, risks, and challenges. IEEE Internet Computing, 24(4), 23–32.
10. Franco da Silva, A. C., & Hirmer, P. (2020). Models for Internet of Things environments—A survey. Information, 11(10), 487.
11. Milenkovic, M. (2020b). IoT data standards and industry specifications. In Internet of Things: Concepts and system design (pp. 225–245). Springer.
12. Fortino, G., Russo, W., Savaglio, C., Shen, W., & Zhou, M. (2017). Agent-oriented cooperative smart objects: From IoT system design to implementation. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 48(11), 1939–1956.
13. Oleg, L., Kraemer, B., Adams, C., Heiles, J., Stuebing, G., Nielsen, M. L., & Mancuso, B. (2016, September). Standard for an architectural framework for the Internet of Things (IoT) (Technical Report, p. 2413). IEEE.
14. AIOTI WG03-loT Standardisation. High level architecture (HLA). Technical specification, 2017. 15. Steinmetz, C., Rettberg, A., Fabíola Gonçalves, C. R., Schroeder, G., & Pereira, C. E. (2018). Internet of Things ontology for digital twin in cyber physical systems. In 2018 VIII Brazilian Symposium on Computing Systems Engineering (SBESC) (pp. 154–159). IEEE. 16. Milenkovic, M. (2020a). IoT data models and metadata. In Internet of Things: Concepts and system design (pp. 201–223). Springer. 17. Persson Proos, D., & Carlsson, N. (2020). Performance comparison of messaging protocols and serialization formats for digital twins in IoV. In 2020 IFIP networking conference (Networking), pp. 10–18. 18. Cheruvu, S., Kumar, A., Smith, N., & Wheeler, D. M. (2020). Connectivity technologies for IoT (pp. 347–411). Apress. 19. Ding, J., Nemati, M., Ranaweera, C., & Choi, J. (2020). IoT connectivity technologies and applications: A survey. arXiv preprint arXiv:2002.12646. 20. Vannieuwenborg, F., Verbrugge, S., & Colle, D. (2018). Choosing IoT- connectivity? A guiding methodology based on functional characteristics and economic considerations. Transactions on Emerging Telecommunications Technologies, 29(5), e3308. 21. Aloi, G., Caliciuri, G., Fortino, G., Gravina, R., Pace, P., Russo, W., & Savaglio, C. (2017). Enabling IoT interoperability through opportunistic smartphone-based mobile gateways. Journal of Network and Computer Applications, 81, 74–84. 22. Guoqiang, S., Yanming, C., Chao, Z., & Yanxu, Z. (2013). Design and implementation of a smart IoT gateway. In 2013 IEEE international conference on green computing and communications and IEEE Internet of Things and IEEE cyber, physical and social computing (pp. 720–723). IEEE. 23. Naik, N. (2017). Choice of effective messaging protocols for IoT systems: MQTT, CoAP, AMQP and HTTP. In 2017 IEEE International Systems Engineering Symposium (ISSE) (pp. 1–7). IEEE. 24. Savaglio, C., & Fortino, G. (2021, March). A simulation-driven methodology for IoT data mining based on edge computing. ACM Transactions on Internet Technology, 21(2), 1–22. 25. Savaglio, C., Ganzha, M., Paprzycki, M., Badica, C., Ivanović, M., & Fortino, G. (2020). Agent-based Internet of Things: State-of-the-art and research challenges. Future Generation Computer Systems, 102, 1038–1053. 26. Minerva, R., Lee, G. M., & Crespi, N. (2020). Digital twin in the IoT context: A survey on technical features, scenarios, and architectural models. Proceedings of the IEEE, 108(10), 1785–1824. 27. Abburu, S., Berre, A. J., Jacoby, M., Roman, D., Stojanovic, L., & Stojanovic, N. (2020). Cognitwin–hybrid and cognitive digital twins for the process industry. In 2020 IEEE International Conference on Engineering, Technology and Innovation (ICE/ITMC) (pp. 1–8). IEEE. 28. Boschert, S., Heinrich, C., & Rosen, R. (2018). Next generation digital twin. In Proceedings of TMCE (pp. 209–218). 29. Madni, A. M., Madni, C. C., & Lucero, S. D. (2019). Leveraging digital twin technology in model-based systems engineering. System, 7(1), 7. 30. Borodulin, K., Radchenko, G., Shestakov, A., Sokolinsky, L., Tchernykh, A., & Prodan, R. (2017). Towards digital twins cloud platform: Microservices and computational workflows to rule a smart factory. In Proceedings of the10th international conference on utility and cloud computing, pp. 209–210. 31. Chernyshev, M., Baig, Z., Bello, O., & Zeadally, S. (2017). Internet of Things (IoT): Research, simulators, and testbeds. IEEE Internet of Things Journal, 5(3), 1637–1647. 32. Hoffmann, J. 
B., Heimes, P., & Senel, S. (2018). IoT platforms for the internet of production. IEEE Internet of Things Journal, 6(3), 4098–4105. 33. Rasheed, A., San, O., & Kvamsdal, T. (2020). Digital twin: Values, challenges and enablers from a modeling perspective. IEEE Access, 8, 21980–22012.
34. Lu, Y., Liu, C., Wang, K. I.-K., Huang, H., & Xu, X. (2020). Digital twin-driven smart manufacturing: Connotation, reference model, applications and research issues. Robotics and Computer-Integrated Manufacturing, 61, 101837.
35. Qi, Q., Tao, F., Hu, T., Anwer, N., Liu, A., Wei, Y., ... & Nee, A. Y. C. (2021). Enabling technologies and tools for digital twin. Journal of Manufacturing Systems, 58, 3–21.

Giancarlo Fortino (SM'12) is a Full Professor of Computer Engineering at the Dept. of Informatics, Modeling, Electronics and Systems (DIMES) of the University of Calabria (Unical), Rende (CS), Italy. He holds a Ph.D. degree and a Laurea (MSc+BSc) degree in Computer Engineering from Unical. He is a High-end Foreign Expert of China, Adjunct Professor at the Wuhan University of Technology (China) and Senior Research Fellow at the Italian National Research Council – ICAR Institute. He has also been Visiting Researcher and Professor at the International Computer Science Institute (Berkeley, USA) and at the Queensland University of Technology (Australia), respectively. He is in the list of Top Italian Scientists (TIS) by VIA-academy (http://www.topitalianscientists.org/), with h-index=33 and 3600+ citations according to GS. His main research interests include agent-based computing, body area networks, wireless sensor networks, pervasive and cloud computing, multimedia networks and Internet of Things technology. He has participated in many local, national and international research projects and is currently the deputy coordinator and STPM of the EU-funded H2020 INTER-IoT project. He has authored over 300 publications in journals, conferences and books. He has chaired more than 80 Int'l conferences/workshops as co-chair, organized more than 30 special issues in well-known ISI-impacted Int'l journals, and participated in the TPC of over 400 conferences. He is the founding editor of the Springer Book Series on "Internet of Things: Technology, Communications and Computing", and currently serves (as associate editor) on the editorial boards of IEEE Transactions on Affective Computing, IEEE Transactions on Human-Machine Systems, IEEE Sensors Journal, IEEE Access, Journal of Network and Computer Applications, Engineering Applications of Artificial Intelligence, and Information Fusion. He is the recipient of the 2014 Andrew P. Sage SMC Transactions Paper Award. He is co-founder and CEO of SenSysCal S.r.l., a spin-off of Unical, developing innovative IoT-based systems for e-health and domotics. He is the Chair of the IEEE SMC Italian Chapter, and founding chair of the IEEE SMC Technical Committee on "Interactive and Wearable Computing and Devices".
Dr. Claudio Savaglio is a Researcher at the Dept. of Informatics, Modeling, Electronics and Systems (DIMES) of the University of Calabria (Unical), Rende (CS), Italy and at the ICAR-CNR Institute, Italy. He received the Ph.D. degree in ICT from the University of Calabria (Unical), Italy, in 2018, and he holds the Italian Scientific National Habilitation for Associate Professorship. Dr. Savaglio was a Visiting Researcher at the University of Texas at Dallas (TX, USA) in 2013, at the New Jersey Institute of Technology (NJ, USA) in 2016, and at the Universitat Politècnica de València (Spain) in 2017. He was also a Temporary Research Fellow at the University of Calabria (Unical), where he was a Teaching Assistant and Adjunct Professor (2018–2020). He currently serves as Associate Editor of IEEE Transactions on Systems, Man and Cybernetics: Systems and as Guest Editor for the Journal of Computers & Electrical Engineering (CAEE), Sensors and IET journals. He has also served as a technical program committee member for many premium conferences in the areas of IoT, computer communications and networks, such as GIoTS, ICC, and ICCCN. He has published more than 60 papers in prestigious conferences and journals. His research interests include the Internet of Things (IoT), network simulation, Edge computing, and agent-oriented development methodologies.
Demystifying the Digital Twin: Turning Complexity into a Competitive Advantage Tim Kinman and Dale Tutt
Abstract Companies in all industries are facing a future of tougher competition, exacting product requirements, and growing complexity. Arguably the most disruptive trend, this growing complexity manifests itself in several ways. Products are becoming smarter and more sophisticated, with components and subsystems being sourced from multiple domains. At the same time, manufacturing systems are also growing in complexity as they become more efficient, flexible, and connected to produce advanced products and keep pace with the rapid cycles of today’s industry. As companies attempt to adapt to the challenge of growing complexity, they must also balance the operation of a successful business today. The key to managing this high-wire act is digital transformation. Digital transformation, or the widespread digitalization of processes, data flows and methodologies, provides a holistic data-centric view of the world instead of being limited to application or domain-centric silos of information. Digital transformation enables companies to manage complexity, integrating all parts of the business, to turn data into value at every stage of the product and production lifecycles: design, realize and optimize. At the heart of digital transformation is the Digital Twin, which accelerates digital transformation efforts and enables companies to design, build and optimize next-generation products faster and cheaper than ever. Furthermore, the Digital Twin can provide the structure to transform your company into a digital enterprise, unlocking new levels of process innovation and allowing companies to modernize their core business models. Those organizations that embrace digital transformation and the Digital Twin can turn the growing complexity of modern products and processes into a competitive advantage to outperform industry benchmarks. As complexity continues to
T. Kinman (*) Trending Solutions Consulting & Global Program Lead for Systems Digitalization at Siemens Digital Industries Software, Milford, OH, USA e-mail: [email protected] D. Tutt Industry Strategy at Siemens Digital Industries Software, Novi, MI, USA © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_9
grow, these capabilities will go beyond a competitive advantage to become a competitive necessity. Keywords Digitalization · Digitalization case study · Digital Twin · Digital transformation · Comprehensive Digital Twin · Connectivity · Complexity · Cross-domain · IoT · Industrial IoT · Model-based systems engineering · Smart products · Smart machines
1 Introduction Companies in all industries are facing a future of tougher competition, exacting product requirements, and growing complexity. Competitors are constantly developing new products, features or services, and the cycle of innovation continues to shrink. Meanwhile, new environmental regulations call for the progressive reduction of carbon (and carbon-equivalent) emissions, often within the next decade, to meet global climate goals. Thirdly, the growing complexity—of product development, manufacturing processes, and data capture and analysis—is unsettling nearly every organization in nearly every industry. Arguably the most disruptive persistent trend, this growing complexity manifests itself in several ways. First, today’s smart products and systems—think of today’s electric cars, consumer electronics and sophisticated industrial machinery—are comprised of multiple subsystems and components from the hardware, electrical, electronic, and software (increasingly with some level of artificial intelligence or machine learning) domains. Broadly speaking, the challenge and complexity of product development increases with the sophistication (or complexity) of the product or system under development. Modern customers also expect a high degree of intelligence and customization in products, driving requirements for flexibility or modularization back into product development. To achieve this requires multiple engineering teams collaboratively innovating tomorrow’s smart products, while getting to market faster than their competition. That in turn increases the complexity of manufacturing systems as they become more efficient, flexible, and connected to produce advanced products and keep pace with the rapid cycles of today’s industry. And consumer demand for product personalization demands modular or flexible production systems to create the wide variety of possible product configurations. At the same time, connecting machinery to the industrial internet of things (IIoT) enables manufacturers to gather and analyze a wealth of data from the factory floor to harvest valuable insights on how to optimize processes, monitor machinery, and even conduct predictive maintenance to maximize the uptime of their production lines. Such connectivity certainly offers significant value, but also adds layers of complexity to the manufacturing system. Connectivity is not just limited to the factory. It is a broader trend as well, with products of all kinds connecting to cloud computing systems and the IoT. As with the IIoT, connected products offer the opportunity for companies to capture immense
amounts of data from actual product utilization in the field and harness it to drive optimization of product design, manufacturing processes and more. However, making the most of the immense data that can be gathered from connected machines and products requires an organized and robust dataflow. In other words, it’s one thing to collect terabytes (or even petabytes) of data from the manufacturing lifecycle and the product in the field, and another to analyze the data and intelligently apply it in a recursive optimization loop. The challenges of complexity—in products, production, and analysis—can be overwhelming. Companies are attempting to adapt to these future challenges while continuing to operate a successful business today. How can companies balance immediate operations with the need to grow and evolve for the future? The key is digital transformation. Digital transformation, or the widespread digitalization of processes, data flows and methodologies, provides a holistic data-centric view of the world instead of being limited to application or domain-centric silos of information. Digital transformation enables companies to manage complexity, integrating all parts of the business, to turn data into value at every stage of the product and production lifecycles: design, realize and optimize. It is an ongoing journey of optimization to achieve the digital enterprise, leveraging data and IoT insight to connect all relevant parts of the product lifecycle, from silicon to infrastructure, from requirements to deployment, from design to manufacture, and using the real and digital worlds in sync to drive continuous optimization. Those organizations that embrace digital transformation can turn this complexity into a competitive advantage to outperform industry benchmarks. As complexity continues to grow, digital transformation will go beyond a competitive advantage to become a competitive necessity.
2 The Value of the Digital Twin The comprehensive Digital Twin is the crucial, foundational capability for speeding this digital transformation, acting as a catalyst for achieving a fully digital enterprise. The term ‘Digital Twin’ seems to be everywhere these days but with slightly different meanings. Yet, these various definitions of the Digital Twin tend to share a common characteristic: the Digital Twin as a connected design or simulation model of the product. This is because, at its heart, the Digital Twin is a virtual representation of a physical object that evolves and changes over time along with the product it represents. As such, the Digital Twin merges the virtual and real world, blurring the boundaries between engineering and process domains. Beyond this core definition, things quickly get confusing regarding Digital Twins. Is the Digital Twin a data flow, or process flow? Should it attempt to predict product performance, or represent a physical product in extreme fidelity? And how far should the Digital Twin reach across the product and production lifecycle? (Fig. 1)
Fig. 1 The Digital Twin is a virtual representation of a real product that evolves along with the product it represents. (Courtesy Getty Images/Westend61)
At Siemens, our definition and understanding of the 'Digital Twin' is both broader and more precise. Our perspective is that there is only one Digital Twin of a product that includes and supports the numerous lifecycle phases and respective models of actual product behavior. This comprehensive Digital Twin has the following characteristics:
• Offers a precise virtual representation of the product or process flow that matches the exact physical form, functions and behavior of the product and its configurations.
• Participates across the product and process lifecycle to simulate, predict, and optimize the product and production system used to create the product.
• Connects real-world operational data back into product design and production over the lifetime of the product to continuously improve quality and efficiency, and to quickly respond to customer demands or market conditions (Fig. 2).
The comprehensive Digital Twin of today's complex, increasingly 'smart' products includes all cross-domain models, such as mechanical CAD or CAE, software code, electronics, bills of process and materials, and more. This Digital Twin of the product evolves and is enriched over time as specific product functions are refined, interrelationships are specified, test results are gathered, and engineering changes are enacted. Furthermore, IoT capabilities and data analytics create a closed-loop feedback mechanism with the production system and the product in the field, providing an integrated system to validate, compare, optimize, and even control behavior,
Fig. 2 The comprehensive Digital Twin includes and supports the numerous lifecycle phases and respective models of actual product and process behavior
bi-directionally, between the physical product and its Digital Twin. In other words, the Digital Twin, like the product it represents, has a lifecycle, and the maturity of the Digital Twin evolves as that lifecycle progresses from concept, to release, to actual performance. The comprehensive Digital Twin empowers companies to extract value from the terabytes of data generated from connected product and process lifecycles, helping to break down the silos that isolate information, processes, and people across the enterprise. This data offers ongoing benefits as re-use can help to accelerate design assessments and tradeoff analyses, leading to faster decisions. Overall, companies can shorten overall design and manufacturing lifecycles, while managing increasing product complexity and remaining adaptive to market conditions. The result is a distinct competitive advantage in meeting growing customer demands for greater performance, smarter features, higher quality, and personalized experiences. Moreover, the comprehensive Digital Twin enables your company to design, build and optimize next-generation products faster and cheaper than ever, with fewer prototypes, fewer tests, and less waste during production. But it’s about more than just product innovation. The comprehensive Digital Twin can provide the structure to transform your company into a digital enterprise, unlocking new levels of process innovation and allowing companies to modernize their core business models. Business impact, data and collaboration all become more scalable across the enterprise. As companies digitalize processes or data flows, they will start to weave digital threads; the data-flow architecture that serves as a foundation of the Digital Twin, connecting data sources created in each functional domain and stage of the product lifecycle. As digital threads are created around the organization, the Digital Twin combines these threads to create a powerful, integrated, and continuous exchange of digital information, delivering a holistic view of the physical asset being produced and providing a way to view and analyze dynamic behavior based on information models. The digital thread brings a multiplying effect to the comprehensive Digital Twin by adding value to the collaborative development of the product across the different
teams—mechanical, electrical, software and manufacturing—enabling numerous data processes across multiple systems. Merging the physical and digital worlds with a digital thread enables users to predict performance and optimize their product. With the digital thread, functions are interconnected, integrated, and linked so users can quickly access, share, and manage program details across the entire product lifecycle—at any time, from any location. It helps manage cross-domain integration and can bring virtual validation early into the product or manufacturing development process. The digital thread can also provide traceability from initial product concept to prototypes and actual product utilization, allowing a multitude of questions and considerations to be evaluated. Moreover, the digital thread provides a digital blueprint to correctly design and manufacture the product as measured against the conceived intent. And the more accurate the virtual representation of the physical world, the more confidence teams can have while using it to make decisions.

Given the potential for this technology to revolutionize how companies of all sizes do business in all industries, it raises the question of why organizations have been somewhat slow to get on board. In a recent report, ABI Research estimates that industrial Digital Twins had a global penetration rate of just 11.4% in 2021 [1]. Furthermore, ABI projects this rate will not eclipse 50% for almost another decade [1]. These predictions certainly indicate growth in industrial Digital Twin adoption, but they also indicate that many companies will be entering the next decade without a robust, mature comprehensive Digital Twin to support their future product development efforts, putting them on the back foot. So, what is stopping these companies from beginning the digital transformation that starts with developing a Digital Twin?

Key Points
• Organizations that embrace digital transformation will turn the complexity of tomorrow into a competitive advantage. As complexity continues to grow, digital transformation will go beyond a competitive advantage to become a competitive necessity.
• The comprehensive Digital Twin is the crucial, foundational capability for speeding digital transformation, acting as a catalyst for achieving a fully digital enterprise.
• The Digital Twin has a lifecycle, where the maturity evolves from concept, to release, to performance.
3 Digital Twin Hesitancy Some companies recognize the potential of the Digital Twin but are put off by the investment and commitment its development demands. Comprehensive Digital Twins are highly complex in and of themselves, especially as the fidelity with which they mirror reality increases. Furthermore, such advanced Digital Twins are not off-the-shelf solutions that deliver value immediately. To adopt a Digital Twin approach
is to adopt a new way of thinking for the company. As a result, the capability, usability, and value delivered by a Digital Twin reflect the organization's commitment during its development. Comprehensive Digital Twins require investment, in the time and effort required to adopt new approaches and train personnel, and a measure of patience as they grow to provide the maximum possible benefits to the company. In other words, the development of a Digital Twin requires a sustained and dedicated effort with a payoff that, while significant, will only be truly realized in the long term. Faced with the proposition of instantiating new methodologies and capabilities on a project and then waiting to realize their full value, many companies choose to wait.

Other companies are unsure of where to start with the development of a Digital Twin or may struggle to define how a Digital Twin can best add value to their organization. Of course, without a clear strategy or development path, it does not make much sense to launch a Digital Twin initiative. In either case, it may not be a failure of these companies to understand the Digital Twin that is holding them back, but a failure of industrial software companies to demonstrate how they can help.

Suppliers of industrial software solutions have long understood the potential of Digital Twin technology and have made corresponding investments in the development of connected, smart and powerful tools to support companies embarking on their digital transformations. These same suppliers, however, have also fallen into a rut when communicating how they can support digitalization efforts and the development of a modern, powerful Digital Twin. Industrial software vendors inevitably end up telling companies what they already know: the Digital Twin is the future and will enable companies to thrive in an increasingly competitive world. This does little to catalyze a potential customer into action.

The narrative rut mentioned above is characterized by three themes that permeate the whitepapers, articles, blogs, and other materials discussing the Digital Twin. First, many companies assume a common understanding of the Digital Twin and its value to their customers. As we have already touched on, there are many definitions of the Digital Twin, and each of these definitions tends to capture only facets of the full complexity of modern products. With little consensus on what the Digital Twin is, companies are left to navigate a confusing, even bewildering landscape as they attempt to digitalize. They are unsure of where to start the journey, and often struggle to see how the Digital Twin can add the most value.
4 Beyond Technology—The People Behind the Processes Next, Digital Twin narratives tend to focus very heavily on technology, while doing little to consider the people and processes that do the hard work of adapting to new technologies. Industrial software companies, ourselves included, talk a lot about integrating engineering and process domains, data continuity and more, but often forget to acknowledge the people involved in these processes. Engineering, information technology, manufacturing, and any other domain affected by digital transformation are
made up of people, all working hard and doing their job to the best of their abilities. What often gets lost is how the value delivered by the Digital Twin, in many ways, is enabled by the connection of many individuals and the work they produce. Connecting these various people can be especially difficult, leading to the oft discussed ‘silos’ between engineering disciplines, process domains (design, manufacturing, etc.) and organizations that hamper product development and optimization. As in any family, communication between members of the same organization or team is critical. Innovation is increasingly the product of collaboration among engineers from multiple domains, but these engineers are often speaking different languages and operating in different tool environments. At the end of the day, software must help individuals perform their roles more easily and effectively, but also connect people with digital threads on a global level to promote collaboration, innovation, and the generation of new ideas and new solutions to industry challenges. This is about sharing ideas, understanding other points of view, and finding the best path forward by forging new connections between designers, manufacturing engineers, electrical engineers and more. Creating a common internal language and universal connections between tools is, therefore, another part of the collaboration challenge. Finally, software vendors rarely move beyond discussion of the immediate operational benefits, such as in productivity or time-to-market, of the Digital Twin. While important, the short-term gains of the Digital Twin do not tell the whole story of what a company can accomplish through digitalization. Ultimately, this story presents a limited vision of what the Digital Twin can offer to companies that only dares to imagine the establishment of new “table stakes” for the markets of tomorrow. A broader and longer-term perspective is needed, starting with those supplying industrial software solutions. Investment in the Digital Twin, and digitalization more broadly, should be made with the goal of catalyzing your entire organization towards future innovation that goes beyond the next big product launch. It is less about an immediate bump in production efficiency or a reduction in engineering change orders, and more about establishing a robust foundation on which an organization can grow, adapt, and thrive as it adapts to continual change and unforeseen challenges. Digital transformation can, for example, enable companies to develop new revenue streams, build entirely new business models that are optimized for an increasingly online world, and enhance internal processes to become more adaptable to new product requirements or other market changes. Early success in this world will be measured by the change in behaviors deep within your organization. And, as opportunities to measure product and process performance on new terms emerge, you will find new performance indicators to track the value you are delivering to customers more accurately. So, how do you get started? One thing is certain, digital transformation and the development of a powerful Digital Twin supported by digital threads is a journey that needs to start as soon as possible. Modern companies have already started on this journey, whether they know it or not. They already produce scores of digital assets during normal operation. The question for most companies is how to identify,
secure, and manage these assets at the enterprise level, and thus build a comprehensive Digital Twin. As we have already discussed, the success of a Digital Twin is dependent on long-term investment and a high level of commitment to its development. Therefore, the sooner this journey begins, the sooner a company can expect to derive value from their investment. Key Points • Comprehensive Digital Twins require investment and patience as they grow to provide the maximum possible benefits to the company. • Forging new connections between people and teams is at least as important as the technological advantages of the Digital Twin. • The journey towards digitalization has already started whether you know it or not.
5 Digital Transformation: Small Steps Lead to Giant Leaps The specific steps and pace of a digital transformation initiative can be adapted to each customer, product, process, and business situation. Thus, while the Digital Twin is the backbone, successful digital transformation starts with a plan and a strategy on how to execute that plan (Fig. 3).

When building a digital transformation plan, companies should consider a few factors. The first task is to establish their unique goals, targets, and current pain points to determine an order of priority for digitalization efforts. Next, companies should evaluate and define solutions based on their ability to bring the organization closer to achieving its goals while also remedying pain points in current processes. When evaluating and defining solutions, companies may consider the following:
• How can you ensure compliance with a growing number of regulations and avoid severe, potentially catastrophic consequences?
• Will you try to stretch your departments to develop more internal competencies? Or will you partner with suppliers to augment your capabilities?
• Where can digital technology improve or replace current processes?
• How can technology empower the people behind the processes?
Indeed, the increasing sophistication of modern products, new regulations in many industries, and the growing role of advanced software in product functionality are driving many companies to look to outside partners for support as they begin to build their Digital Twins and digitalize other processes. Once a digitalization plan is in place, companies should next consider industrial software partners, with respect to their ability to provide the technology and expertise to support the established digitalization roadmap, while also adapting to specific needs and challenges. Besides offering a robust portfolio of digital capabilities, a strong software partner should include consulting and engineering services that can reduce risk in your
Fig. 3 Digitalization initiatives must consider the people behind the processes they seek to revolutionize. (Courtesy Getty Images/Westend61)
digital transformation program, while also accelerating progress to key milestones. A strong partner should also be able to deploy technologies on-premises, in the cloud, or through a hybrid cloud approach as needed by your program. Addressing each of these factors will offer a solid foundation on which a business can scale its digital transformation. While many companies already have some of this foundation in place due to years of digitalization in one or more core disciplines, now is the time to expand to digitalize all engineering domains while also scaling beyond core processes to pursue innovation, advanced technology, and more. Progression of the Digital Twin, for instance, may be grouped into ‘maturity levels’ that indicate the progress of your organization towards ‘shift left’ decision making. In the initial stages of a digital transformation, companies can start with a Digital Twin that represents the product ‘as manufactured’, meaning it represents the physical product as it comes off the production line, or out of the manufacturing facility. As the Digital Twin matures over the course of your digital transformation, the organization can progress along a shift left path of decision making and quality measures. The next level of Digital Twin maturity can be called ‘as designed’ and offers a virtual representation of the product during the design stage, including all relevant engineering domains. This level of maturity allows engineers to rapidly evaluate design alternatives through simulation, such as 3D computational fluid dynamics (CFD) or electrical systems simulation, and generative design
capabilities, enabling these different engineering teams to arrive at optimized product designs much earlier in the overall lifecycle. Further progress results in a Digital Twin that can reflect the product 'as defined', indicating that the Digital Twin can describe product requirements, performance targets, goals, and other definitions of what the product must accomplish. In other words, the 'as defined' Digital Twin captures the decisions made during the concept engineering stage. In this early stage, various product options are evaluated against customer needs, use cases or mission scenarios, and relevant regulations. Concept engineering may be the genesis of entirely new products or the expansion of existing products' features and capabilities. Overall, the goal of concept engineering is to define the product or features that best meet customer needs, use cases, regulatory requirements, and the business case (including development cost, schedule, program scope, and expected revenues). The value becomes apparent when, in the future, the definitions and decisions made during concept engineering and captured by the Digital Twin can be reused to accelerate new product cycles. Instead of beginning from scratch, teams can leverage a fully defined Digital Twin as they work to satisfy new market requirements. Model-based systems engineering (MBSE), another hot topic in the industrial and tech world, is the crucial methodology that makes it possible to realize this digital transformation. MBSE starts with a clear understanding and definition of the problem or scope that the product is being asked to address. The initial focus is on ensuring product requirements can be represented against a set of functional capabilities and behavioral use cases. Consequently, MBSE plays a key role in the development and maturation of product requirements, functional modeling, and the optimized product architecture that occurs during concept engineering. It works alongside other activities, such as multi-disciplinary optimization, conceptual manufacturing and product support, mission modeling, and verification planning, to start building the program plan. These activities then feed the business plan and can support the creation of proposals if your organization is responding to a request for proposal (RFP). As companies work to scale towards an 'as defined' Digital Twin, MBSE can serve as a connecting thread from initial concepts and requirements through to their fulfillment in the product design and production.
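As a minimal illustration of the maturity progression described above (this is an illustrative data model, not a Siemens specification, and the ordering simply follows the narrative of this chapter), the levels can be represented so that tooling can check which analyses a given twin already supports:

```python
from enum import IntEnum

class TwinMaturity(IntEnum):
    """Illustrative ordering of the maturity levels discussed in this section."""
    AS_MANUFACTURED = 1   # represents the physical product as it leaves production
    AS_DESIGNED = 2       # virtual representation spanning the engineering domains
    AS_DEFINED = 3        # captures requirements, targets and concept decisions
    AS_PERFORMS = 4       # closed loop with in-field utilization data

def supports(twin_level: TwinMaturity, required: TwinMaturity) -> bool:
    """A twin supports an analysis once it has reached the required maturity."""
    return twin_level >= required

# Example: an 'as designed' twin can drive design-space simulation,
# but not yet analytics over field performance data.
print(supports(TwinMaturity.AS_DESIGNED, TwinMaturity.AS_DESIGNED))   # True
print(supports(TwinMaturity.AS_DESIGNED, TwinMaturity.AS_PERFORMS))   # False
```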
Key Points
• Digital transformation is not a "one-size-fits-all" proposition. Each company should therefore start with a plan, taking company goals, targets, and current pain points into consideration.
• Trusted advisors will mitigate risk and speed digital transformation.
6 The Comprehensive Digital Twin
Ultimately, companies will aim to connect all these Digital Twin assets to the product in the field, enabling a comprehensive Digital Twin that offers an 'as performs' virtual representation of the product. This connection across the product lifecycle, from definitions through to in-field performance, ensures that data regarding the product, its attributes, performance, or functionality in the field can be traced to how it was defined, designed, and manufactured. It is not enough, however, to simply gather product data. The complexity of modern product design, manufacturing processes, and lifetime performance support ensures that they will produce immense amounts of data during operation. These quantities of data will certainly overwhelm traditional means of data analysis, preventing companies from leveraging utilization data to its full potential. Thus, the integration of AI into the data analysis process will be a critical step in turning the mountain of data into actionable knowledge and insight on product development, production performance and the product lifetime in the field. A mature 'as performs' Digital Twin, fed by an automated data analysis engine, can turn mountains of product utilization data into actionable knowledge and insight. This insight can be used to enable:
• Proactive, not reactive, decision making in response to market conditions, product or process inefficiencies, engineering changes, and more.
• Active product and process optimization based on real-world usage and performance data.
• High fidelity simulation of the entire product and production process to maximize performance.
Such a connected Digital Twin approaches what we at Siemens have termed the 'comprehensive Digital Twin', discussed above. We view the comprehensive Digital Twin as the ultimate expression of what the Digital Twin can offer to an organization (Fig. 4). It extends from product concept through to in-field utilization, for both product design and production process, in a closed loop that enables actual product performance data to inform product design and manufacturing design. But the comprehensive Digital Twin can also establish or deepen connections between the various engineering domains required for modern product design, and throughout the product lifecycle (Fig. 5). The comprehensive Digital Twin gives engineers, designers, product managers and other stakeholders a common vocabulary with which they can exchange information and ideas conveniently and quickly. Within product planning, the comprehensive Digital Twin can be used in business evaluations to vastly increase the options for ideation and innovation before committing engineering budget. Furthermore, robust digitalization allows new information and ideas to be incorporated directly into product or production designs without translation by the engineer responsible. Data is presented in a way that makes sense to people of all disciplines and roles, through intuitive interfaces that support insight and decision-making
Fig. 4 The specifics of each digital transformation may differ, but the first step is to identify pain points and develop a strategy
Fig. 5 The comprehensive Digital Twin is the ultimate expression of the Digital Twin, offering closed loop development, production, and performance design and optimization over the entire lifecycle
rather than cause fatigue. The result is a democratization of product models for use in a variety of new teams or departments. Moreover, the comprehensive Digital Twin can transform your company by helping to eliminate longstanding organizational silos. But it can also forge new connections externally: between customers, suppliers, software vendors and more. Secure collaboration among these various organizations has long been a necessity, but a difficult part of the product development and manufacturing process. These organizations each have unique processes, methodologies, data formats and security concerns that must be negotiated during
communication and collaboration with a partner company. As companies progress through their digital transformations, scaling the Digital Twin along the way, they collaborate more easily and securely across domains and throughout the entire supply chain. Customer companies can begin to share information with suppliers, while protecting their intellectual property and tracking which suppliers are engaging in the data exchange. A mature Digital Twin is more than functional or logical modeling. It serves as the digital backbone that unites engineering, manufacturing, supply chain, and program management activities to create a comprehensive digital thread—a composite of interwoven and interconnected data streams creating an ecosystem for program execution. And by supporting the people involved with better collaboration tools, companies can achieve not just better products, but strengthened customer-supplier relationships, and more rapid innovation for societal benefit. In short, the goal is to create a better future where anyone can turn today's ideas into tomorrow's products and experiences. This is the vision that Siemens and its Digital Industries Software business are working towards. In service of this vision, Siemens has been collaborating with customers of varying sizes and from many different industries to develop and mature comprehensive Digital Twins for their products, manufacturing processes, lifecycle management and more. Even in these early stages of digital transformation, we have observed several of the benefits expected of a fully digitalized enterprise. In the remainder of this chapter, we present case studies of the work we have accomplished with our customers in the realms of product design, realization, and optimization.
Key Points
• A mature 'as performs' Digital Twin, fed by an automated data analysis engine, can turn mountains of product utilization data into actionable knowledge.
• A mature Digital Twin is more than functional or logical modeling. It serves as the digital backbone that unites engineering, manufacturing, supply chain, and program management activities to create a comprehensive digital thread.
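Before turning to the case studies, a minimal sketch of the kind of automated screening of utilization data referred to above. The rolling-statistics check below is a deliberately simple stand-in for the richer AI/ML analytics mentioned in the text, and the signal is synthetic; the point is only that field data can be filtered automatically so that engineers see events rather than raw streams.

```python
import statistics
from collections import deque

def flag_anomalies(readings, window=20, z_threshold=3.0):
    """Flag utilization readings that deviate strongly from recent history.

    A rolling z-score check over a sliding window: real deployments would use
    far richer models, but the closed-loop idea is the same -- field data is
    screened automatically and only notable events reach the engineers.
    """
    history = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(readings):
        if len(history) >= window:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if stdev > 0 and abs(value - mean) / stdev > z_threshold:
                flagged.append((i, value))
        history.append(value)
    return flagged

# Example: a synthetic load signal with one outlier injected at index 30.
signal = [100.0 + (i % 5) for i in range(60)]
signal[30] = 160.0
print(flag_anomalies(signal))  # [(30, 160.0)]
```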
7 Design—Towering Cranes and Electrified Aircraft
7.1 Advanced Industrial Cranes
Konecranes is a Finnish company that manufactures, services, and supports the massive cranes and other lifting equipment needed in various industries including automotive, mining, manufacturing, and ports and shipyards. With a history dating back to the early 1900s, Konecranes has built up a wealth of experience in the business of industrial cranes. Along the way, the company has evolved towards the production of comprehensive and advanced material handling solutions that incorporate automation and other new technologies. As the sophistication of these systems
increased, Konecranes found that their traditional design processes were struggling to keep up. As with many companies, Konecranes’ design, simulation, and prototype testing groups all operated in silos, separated from each of the other groups. These silos created problems during the synchronization of engineering data and versions, leading to inefficiencies in the turnaround of product designs. To resolve these issues and eliminate barriers between engineering departments, a role-based Digital Twin application was developed to aggregate data from each department. Konecranes’ Digital Twin provides a synchronized and holistic view of the overall product design lifecycle, including simulation and prototype testing results. This robust Digital Twin includes design requirements, engineering data and various simulation models and their relationships among engineering domains. Meanwhile, connected prototypes feed sensor information back into the design and simulation teams to continuously improve product models and their reflection of real-world behavior. With this comprehensive Digital Twin backbone, Konecranes has accelerated its product design and development process by reducing prototypes and increasing traceability, while simultaneously improving product quality and lowering the cost of development.
7.2 Electrifying Aircraft Propulsion
Collaboration and integration in the product design phase have also proven critical for the engineers at Bye Aerospace. Founded by George Bye, Bye Aerospace is focused on building all-electric propulsion systems for its aircraft, contributing to lower operational costs, less noise, and the elimination of CO2 emissions. Bye Aerospace has two electric aircraft projects well underway in the Federal Aviation Administration (FAA) 14 Code of Federal Regulations (CFR) 23 aircraft certification process. The first and most important of these projects is the two-seat eFlyer 2 training aircraft. Aimed at reducing costs for new pilot trainees, the eFlyer 2 is on the leading edge of FAA certification for all-electric aircraft. The eFlyer 2 is an all-composites airplane with advanced aerodynamics, technology and the latest in avionics. The use of composites in the plane's construction is an important piece of Bye Aerospace's strategy because it reduces the weight of the aircraft and thus has a direct influence on the flying range the plane can achieve. To build the composite airframe, the Bye Structures Department has used several solutions from the Siemens Digital Industries Software Xcelerator portfolio of solutions and services that enable the creation of a comprehensive Digital Twin. A composites design and manufacturing solution helps the team to optimize the structural integrity of the composite air frame, building up layers of composite from the outside inwards, while 3D mechanical design software is used to develop the outer shape of the plane. The firm also uses product lifecycle management (PLM) software for product data and end-to-end management, a computational fluid dynamics (CFD) solution and CAE software for analysis, testing and certification, and electrical
systems software for wire harness and layout. These five products enable Bye to create a seamless end-to-end process encompassing design, simulation, and lifecycle management. Every plane starts from the conceptual design, based on the company’s idea of what the plane should look like and the performance characteristics it should achieve. With the conceptual design complete, the engineers at Bye can start designing the systems inside the aircraft. “Siemens’s software plays into that because you start your conceptual design with what we call a simple optimal line, which is the shape of the aircraft in NX CAD software,” says Parijaat Malik, senior mechanical systems engineer. “You start detailing those and based on composites that you need to use and the stresses that act on the structure of the aircraft, you can go into Fibersim (the composites design and manufacturing solution) and have a layout that details what plies and what the sequence of those plies needs to be in the structure.” They are using Fibersim to optimize the structural integrity of the composite air frame, building up layers of composite from the outside inwards (Fig. 6). This data from Fibersim can then be bridged back into the mechanical environment, automatically calculating the correct thickness for the structure, and providing the surfaces on which systems will be mounted. “As a systems engineer, I feel the biggest advantage of the Siemens software is how it makes design more collaborative,” says Parijaat Malik. “The NX CAD software that we’re using right now is a technology enabler. It keeps Bye Aerospace in front,” says Bye. “The ability to capture all of the aspects of an airplane design is greatly enabled. In capturing the eFlyer 2, Siemens’s software is particularly capable of helping us to transition to what comes next.”
Fig. 6 Konecranes is leveraging the Digital Twin to develop advanced industrial cranes and material handling solutions. (Courtesy Getty Images)
7.2.1 A Foundation for Future Products
The eFlyer 2 will form the basis for a family of future airplanes. Building on their connected ecosystem of software, Bye Aerospace will be able to directly leverage the Digital Twin of their current airplane in the definition and development of future offerings. "The eFlyer family of airplanes begins with the eFlyer 2," says Bowen. "The effort that we're putting into defining the eFlyer 2 in Siemens's software is the foundation for all the future products. The typical program sequence when changing models is what we call a tear-up phase, where we take what doesn't fit our future model and we tear it up and then we start over on our design." Siemens' software allows the company to reuse, or "rubber band" as it's referred to internally, their Digital Twins into future iterations and products. A key advantage of such an approach is that the Digital Twin also brings linked analysis resources, allowing engineers to explore not only physical or modelling changes, but behavioral changes as well. As Bowen puts it, "when we modify the model, all the other related activities also morph with the aircraft." The digitalized development flow at Bye Aerospace also helps with the partitioning of work and the management of design changes. "So, what helps Bye Aerospace be agile in our design process is the wave linking; the ability to create a top-down structure where you can take something as simple as a shape and break it down into pieces and pass it down to the people that are responsible for designing those pieces and add detail into it," says Malik. "And because of wave linking, you can easily pass your design changes to other designers without having to have big meetings to communicate your design changes, to make sure that you have to change a bunch of things just to accommodate someone else's changes. You can just open an assembly and see whatever's linked to it and see the changes that were made on the other person's side and how that affects your design. Then you can make the required changes to accommodate the design as a whole." Overall, the digitalized and connected design process employed at Bye Aerospace has helped the company to reduce the time it takes to develop new products while also allowing their teams to complete more design iterations. As a result, Bye Aerospace can bring products to market more quickly while using fewer resources than competitors.
Key Points • Both Konecranes and Bye Aerospace have leveraged Digital Twins to boost collaboration and integration in the product design phase. • Konecranes’ Digital Twin provides a synchronized and holistic view of the overall product design lifecycle, including simulation and prototype testing results. • Bye Aerospace has leveraged the Digital Twin to accelerate product design while also building a foundation for future product designs.
8 Realize—Digitalizing the Production Line
Manufacturing processes, such as the design of new production lines or logistics and material delivery systems, are equally ripe for digital transformation. Advanced simulation tools can help validate production concepts before any physical machines are built. And once the machines are built, cloud-based analytics can gather and analyze operational data from the production floor to help manage throughput or monitor the condition of individual machines. As both product and process Digital Twins mature, they can even be combined to simultaneously design and verify the product and production process. Porsche, the famed manufacturer of sports cars, has emerged as a leader in this realm. The company has invested in the automation and digitalization of production and service processes in support of their new Taycan electric vehicle. Notably, leveraging digitalization enabled Porsche to overcome some significant challenges in the production design for their new EV. Porsche needed to construct an entirely new production facility for the Taycan electric sports sedan within the existing factory at Zuffenhausen, in record time and while allowing sportscar production to continue unhindered. Porsche knew they had to build upwards to construct the new facility, but they faced a height restriction on buildings in Zuffenhausen to minimize the impact on airflow in the city of Stuttgart. Porsche is also well known for the level of customizability it offers in its vehicles. Customers can personalize their vehicles, both inside and out, to meet every expectation. The Taycan is, of course, no different. This means that every station in the Taycan production facility had to offer a high degree of flexibility—a huge logistical challenge, since production is spread over several floors. To meet these constraints, Porsche worked with Siemens to develop a unique manufacturing concept that makes the most of every level without wasting a single centimeter of height.
8.1 Digitalization Enables a New Production Concept
This concept is known as FlexiLine, a new approach to production where autonomous guided vehicles (AGVs) move car bodies from one processing station to the next. Rather than fixed conveyor belts, the AGVs, powered by Siemens's technology, provide maximum flexibility. FlexiLine makes it possible to adapt operating cycles to actual needs and, for example, stop an AGV to perform automated tasks and then speed it up to move on to the next processing station. Utilizing a Digital Twin, the entire production system underwent virtual simulation and testing in advance of construction to ensure proper and smooth operation. The result is a highly flexible, space-optimized production system that was able to be implemented in four and a half months. But Porsche's digitalization journey didn't stop there. As the first fully electric series-production vehicle from Porsche, the Taycan is full of advanced technologies
and complex interconnected components. This makes it difficult for service and maintenance teams to do their work fast and reliably—the learning curve on this vehicle is steep. To better enable service and maintenance teams, Porsche is leveraging digitalization to build an augmented reality service solution. This solution, known as Porsche Augmented Reality in Service (PARiS), uses CAD data directly from the vehicle development department to create 3D animations of vehicle systems, rather than producing ever-larger service manuals. Service employees simply hold their tablet up to the vehicle, and animated 3D data for the specific vehicle part is instantly displayed, including a description of the component, technical data, and handling instructions, such as the correct tightening torque for a bolt. Porsche's success is a demonstration of the power of digitalization in the automotive industry. Connecting data from multiple departments, including vehicle development, manufacturing design, production and in-field service organizations, illuminates new opportunities for innovation and advancement throughout the digital thread. And such success is not limited to the automotive industry. ASML, a leading provider of lithography systems for the semiconductor industry and another Siemens customer, has taken advantage of integrated design, lifecycle management and plant simulation software to optimize production scenarios and bridge a gap that had existed between engineering teams.
8.2 State-of-the-Art Microchips Power Electronics of Tomorrow
ASML is one of the world's leading providers of lithography systems for the semiconductor industry, manufacturing complex machines that are critical to the production of integrated circuits and microchips. These advanced systems help ASML's customers, the chipmakers, to reduce the size and increase the functionality of microchips within consumer electronics equipment. As a result, ASML helps to create more powerful electronics systems for consumers and industry professionals through its lithography machines. Lithography has therefore been indirectly responsible for the "digital revolution" and the continuation of "Moore's Law," which predicts that technology will double the number of transistors on a microchip at regular intervals. Thanks to lithography, ever-shrinking microchips have brought better, more affordable, and energy-efficient electronics and services to everyone, improving mobility, connectivity, safety, and digital entertainment (Fig. 7). Amid the global financial crisis of 2009, chipmakers sharply reduced capital expenditures. Halfway through the year, the semiconductor industry was among the first to recover, and orders picked up, leading to two subsequent years of record sales for ASML. Sales in 2010 were nearly three times those of 2009, followed by another sales increase in 2011. During this strong rebound, ASML had to ensure that it would continue to deliver its machines on time and in accord with the highest quality standards. As demand in the semiconductor industry is cyclical, ASML must
Fig. 7 Advanced lithography machines have helped spur on the digital revolution by reducing feature sizes on silicon chips. (Courtesy Getty Images/nmlfd)
continuously adjust its production capacity to meet the requirements of the market. This is a sophisticated process that involves forecasting demand, setting detailed output plans, and aligning with a large and complex supply chain.
Bridging the Gap Between Product Design and Manufacturing
ASML uses Siemens' software solutions for digital lifecycle management and product design. Recently, the Industrial Engineering department of ASML began using a discrete-event plant simulation tool for production throughput analysis and optimization. ASML's Industrial Engineering team functions in between the Development and Engineering (D&E) and Manufacturing and Logistics (M&L) departments. The adoption of an advanced plant simulation tool, and the plant simulations it produces, has allowed the Industrial Engineering team to create a bridge between the D&E and M&L departments. Andreas Schoenwaldt, Industrial Specification Management (ISM) team manager at ASML, describes the new process: "The production simulations that we develop using plant simulation support building a bridge between these groups. We must make many decisions regarding the establishment of new production facilities or improving existing ones. Plant simulation helps us make these decisions after simulating what-if production scenarios. Our machines must perform to high standards; they need to print very small features on silicon wafers, printing 30–40 layers exactly on top of each other, and do it extremely fast. Precision is measured in nanometers. A relentless drive to innovate is part of the ASML culture and has allowed us to meet this challenge. In this context, we had the need for a discrete-event software simulation tool to simulate and optimize our production."
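To make the idea of such what-if comparisons concrete, here is a minimal, self-contained sketch of a discrete-event style staffing study. The arrival and processing rates are invented purely for illustration and bear no relation to ASML's actual figures or to the Plant Simulation tool itself; the point is only that staffing scenarios can be compared on predicted waiting time before anything is committed on the shop floor.

```python
import heapq
import random

def simulate_lab(n_staff, n_orders=2000, mean_interarrival=4.0,
                 mean_processing=7.0, seed=1):
    """Single-queue, multi-server simulation of order processing.

    Returns the average time an order waits before processing starts.
    All parameters are illustrative, not real production data.
    """
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    for _ in range(n_orders):
        t += rng.expovariate(1.0 / mean_interarrival)
        arrivals.append(t)

    # Each entry in 'free_at' is the time at which a staff member becomes free.
    free_at = [0.0] * n_staff
    heapq.heapify(free_at)
    total_wait = 0.0
    for arrival in arrivals:
        earliest_free = heapq.heappop(free_at)
        start = max(arrival, earliest_free)
        total_wait += start - arrival
        service = rng.expovariate(1.0 / mean_processing)
        heapq.heappush(free_at, start + service)
    return total_wait / n_orders

for staff in (1, 2, 3):
    print(f"{staff} lab employee(s): average wait {simulate_lab(staff):8.1f} time units")
```

With these assumed rates, the jump from one to two employees removes an unstable backlog, while the third brings only a marginal gain, mirroring the kind of staffing conclusion described in the case study.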
8.2.1 Digital Twin of Production Plant Shortens Machine Lead Time
The machine testing phase of ASML's production process is by far the most time-consuming, even significantly longer than the final assembly phase. In pursuit of shorter lead times, increased production capacity and lower costs, ASML investigated the machine testing phase to identify bottlenecks. In studying these challenges, data analysis showed that the late delivery of processed wafers was causing delays. Joris Bonsel, an industrial engineer on the ISM team, explains: "These wafers are produced in our process lab, mainly for the purpose of machine testing. We used data from a 5-month period of actual wafer orders requested by the test department, combined with the corresponding actual delivery times. We then created a plant simulation model that delivered the same results that were collected, reflecting a surprisingly high level of accuracy. Once we knew that we had a simulation model that reflected reality, we started to do the industrial engineering analysis. It was clear that adding more manpower in the lab would improve delivery performance, but we were able to simulate the actual production, and proved that there is a clear financial benefit to adding one employee to the process lab, rather than invest, for example, in a new track. Just to prove that this result is not trivial, we simulated and showed that adding a second employee was not cost beneficial. This analysis was previously done based on gut feeling, rather than a simulation, and therefore it was difficult to argue with the simulation recommendation." Bonsel notes, "This was actually the project we used to pilot plant simulation, and we were impressed when we realized that with plant simulation, we can build a model that will accurately imitate the performance of a physical production line."
8.2.2 A New Generation of Lithography Systems
Plant simulation tools also helped ASML as they developed a new generation of lithography systems. Here again we see the power of the Digital Twin in bringing next-generation capabilities to fruition faster. These next-generation systems use extreme ultra-violet (EUV) light and will allow chipmakers to continue to shrink feature sizes on chips (Fig. 8). The first systems intended for volume manufacturing, dubbed NXE:3300B, shipped in 2012, and the engineering teams had to plan production facilities to support the rollout. This involved some difficult tradeoff decisions, as described by Maurice Schrooten, an industrial engineer on the ISM team: "One of the main questions was: 'Which production resources were needed for manufacturing the MBMM (main body mid module), one of the major modules of this machine?' Naturally, we were looking for the most cost-effective investment, and therefore we have used plant simulation to analyze the implications of three different alternatives on clean room space consumption, throughput variation, and labor and hardware investment." The three alternatives explored by the ISM team were:
Fig. 8 Renom has connected their windmills to a single digital user experience, enabling asset monitoring and maintenance. (Courtesy Shutterstock/Jesus Keller)
1. Cloning the current production line.
2. Engaging in a high-level qualified buy to outsource some of the production.
3. Using a new production line methodology to split the assembly work among three areas.
"The simulation showed that the cloning alternative required a higher investment in equipment, so we abandoned it," Schrooten continued. "Out of the other two alternatives, which were largely equal, we selected the one with the lowest risk for ASML. The plant simulation result is an optimized production line configuration that supports the needed throughput and reduces the investment in production resources."
8.2.3 Plant Digital Twin Becomes Part of the Decision-Making Tool Kit
The capabilities of a modern manufacturing simulation solution have made it a valuable piece of ASML's decision-making process moving forward. For example, the ability to build a library of objects with embedded logic enables the engineers to re-use the objects for new and different models. Schrooten describes one such object that has proven extremely useful to ASML, "we created the genetic algorithm (GA) optimization object in our object library. When we simulate a manufacturing
process, we define a lot of constraints between the manufacturing steps, such as the precedence constraint, which defines which step should be done prior to another step; the time constraint, which defines waiting time that is sometimes needed after executing some of the manufacturing steps; and constraints that are related to physical aspects, for instance, two steps that can't be done simultaneously, as they are done on the same physical area of the machine. The GA optimization object uses the genetic algorithm capability of plant simulation to recommend a sequence of process steps, considering all the constraints, and to assign the steps to the required production personnel." Schoenwaldt concludes, "Maintaining world leadership is a tough task. You must be innovative and cost-effective." He notes, "Plant Simulation will be part of our engineering decision-making process. A lot of production scenarios will be simulated virtually using Plant Simulation, before we will actually commission the production line."
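The genetic algorithm object itself is part of the commercial tool, but the underlying idea can be sketched in a few lines: search over precedence-feasible step sequences and score each candidate. In the toy example below the steps, durations and constraints are invented, and plain random sampling stands in for the genetic operators; it is an illustration of the search problem, not a reproduction of ASML's object.

```python
import random

# Hypothetical assembly steps with durations (hours) and precedence constraints:
# a step may only start after all of its predecessors have finished.
DURATIONS = {"A": 3, "B": 2, "C": 4, "D": 1, "E": 2}
PREDECESSORS = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["B"]}
# Pairs of steps that use the same physical area and therefore cannot overlap.
MUTUALLY_EXCLUSIVE = {("C", "E"), ("E", "C")}

def random_feasible_order(rng):
    """Draw a random order of steps that respects the precedence constraints."""
    remaining, order = set(DURATIONS), []
    while remaining:
        ready = [s for s in remaining
                 if all(p not in remaining for p in PREDECESSORS[s])]
        step = rng.choice(ready)
        order.append(step)
        remaining.remove(step)
    return order

def makespan(order, n_workers=2):
    """Schedule steps greedily on n_workers and return the completion time."""
    finish, worker_free = {}, [0.0] * n_workers
    for step in order:
        ready_time = max([finish[p] for p in PREDECESSORS[step]], default=0.0)
        # Respect the exclusive-area constraint by waiting for conflicting steps.
        for other, t in finish.items():
            if (step, other) in MUTUALLY_EXCLUSIVE:
                ready_time = max(ready_time, t)
        w = min(range(n_workers), key=lambda i: worker_free[i])
        start = max(ready_time, worker_free[w])
        finish[step] = start + DURATIONS[step]
        worker_free[w] = finish[step]
    return max(finish.values())

rng = random.Random(0)
best = min((random_feasible_order(rng) for _ in range(500)), key=makespan)
print("best sequence:", best, "makespan:", makespan(best))
```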
Key Points
• Utilizing a Digital Twin, Porsche's new production system underwent virtual simulation and testing in advance of construction to ensure proper and smooth operation. The result is a highly flexible, space-optimized production system that was able to be implemented in four and a half months.
• ASML has used manufacturing simulation solutions to shorten lead time on machine production and support the launch of a new generation of lithography machines.
9 Optimize—Extending the Digital Twin to Assets in the Field
Today, companies are beginning to connect products and production assets in the field back to their Digital Twins (if one is available) or digital tool ecosystems. As discussed earlier, these connections open new opportunities for companies to drive value for their business and for the customer through processes like predictive maintenance, active product optimization based on utilization data, and over-the-air feature updates. In the following case studies, we will see how Siemens' customers are leveraging digitalization to monitor, analyze, and improve their existing products and processes.
9.1 Digital Twin Optimizes Asset Maintenance and Monitoring
When people hear the term "Digital Twin", they often think about a digital representation of a product or asset and how it behaves or is manufactured. While that is often true, one of the overlooked elements of a true Digital Twin is that it can enhance or support the process by which an asset is sustained. The US Navy is relying on this aspect of the Digital Twin as part of a major shipyard infrastructure optimization and modernization project. Siemens is actively working with the United States Navy to create a Digital Twin of their depot facilities to monitor their status during the maintenance cycles of their assets. The team is using plant simulation software from Siemens to actively build a process model to understand what takes place at those depots during maintenance of their aircraft carriers, submarines, and aircraft. This Digital Twin will provide a robust understanding of how the maintenance teams operate today. Perhaps more importantly, it will also provide a foundation for the future. As the United States Department of Defense looks to modernize their repair facilities, this same Digital Twin will be deployed to model improvements of both facilities and processes and to understand their impact on the overall repair process. In essence, these models can be used to quantify the spend for those improvements and provide a deep understanding of the return. Finally, we come to Renom, a leader in providing services for the renewable energy market, which uses the comprehensive Digital Twin to optimize wind turbine performance and energy yield. Maintaining and optimizing remote wind farms is often challenging, complex and costly. One critical challenge is predicting the remaining useful life of major components, like the gearbox. The typical gearbox in a wind turbine has a lifespan of 3–5 years and will need to be repaired or replaced several times over the service life of a wind turbine. By predicting the remaining useful life, Renom can optimize maintenance and extend the life of turbine gearboxes and other critical components, reducing costs and preventing unplanned failure in the field. Renom achieves these outcomes through a single user experience that allows operators access to real-time performance data from individual turbines or the entire wind farm. This interface is powered by a comprehensive Digital Twin that is composed of several models, such as the precise 3D definition and build materials of the turbine and its components; simulation models of key subsystems and components; and specific service requirements, schedules and even the availability and cost of spare parts. The solution also accumulates industrial IoT data and load factors to enable the tracking and monitoring of the remaining useful life of components at various levels, enabling Renom to proactively perform services and maximize energy yield.
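The details of Renom's models are proprietary; purely as an illustration of the principle, remaining useful life can be tracked by accumulating damage from observed loading against a design budget, in the spirit of a Miner's-rule calculation, with heavier-than-rated loading consuming life faster. The numbers below are invented.

```python
def remaining_life_hours(load_history, rated_load=1.0,
                         design_life_hours=35_000.0, exponent=3.0):
    """Estimate remaining gearbox life from hourly load observations.

    Illustrative only: each operating hour consumes life in proportion to
    (load / rated_load) ** exponent, a common simplification for fatigue-driven
    components. Real remaining-useful-life models combine physics-based and
    data-driven elements with far richer sensor inputs.
    """
    consumed = sum((load / rated_load) ** exponent for load in load_history)
    return max(design_life_hours - consumed, 0.0)

# Example: 10,000 hours at rated load plus 2,000 hours at 20% overload.
history = [1.0] * 10_000 + [1.2] * 2_000
print(f"Estimated remaining life: {remaining_life_hours(history):,.0f} hours")
```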
Key Points
• The US Navy is building a Digital Twin of their depot facilities to monitor their status during the maintenance cycles of their assets. The team is using plant simulation software to actively build a process model to understand what takes place at those depots during maintenance of their aircraft carriers, submarines, and aircraft.
• Renom uses the comprehensive Digital Twin to optimize wind turbine performance and energy yield. A single user experience, powered by the Digital Twin, combines several models such as the precise 3D definition and build materials of the turbine and its components; simulation models of key subsystems and components; and specific service requirements, schedules and even the availability and cost of spare parts.
10 Summing Up—The Potential of the Digital Twin
Among our customers, we have observed the challenges posed by growing complexity in product and process design, manufacturing, and in-field support and maintenance. Across all industries and all company sizes, the demands of modern markets are straining the existing approaches to product development, production, and support. Companies feel this pressure in highly condensed development cycles (i.e., designing products of greater complexity in less time than in the past), higher costs, more errors, and a seemingly constant push for innovation. But we have also witnessed the potential of digitalization and the Digital Twin to help companies overcome the difficulties that face them, both immediate and in the future. Today, digitalization comes with tangible benefits: lower development costs, reduced errors, faster and more efficient design, verification, and validation processes, and more. Tomorrow, we believe that the construction of comprehensive Digital Twins will enable companies to build the most capable, innovative, and sustainable products or production concepts ever, while simultaneously offering better and more personalized service to their customers. The march towards this vision of the digital future, whether intentional or not, has already begun. Those companies that act now to invest in digitalization initiatives and the Digital Twin will reap the greatest rewards.
Reference 1. Martin, R., & Larner, M. (2021). Industrial digital twins: What’s new and what’s next. ABI Research.
Tim Kinman is Vice President, Trending Solutions Consulting & Global Program Lead for Systems Digitalization. Tim has a demonstrated history of working in the computer software industry, including over 14 years with Siemens Digital Industries Software. He is skilled in Integration, Product Management, Product Lifecycle Management, Requirements Analysis, Systems Engineering, and Enterprise Software. Tim received a Bachelor of Science in computer science from Northern Kentucky University and a master's degree in Business Administration from Xavier University.
Dale Tutt is the Vice President of Aerospace and Defense Industry for Siemens Digital Industries Software. He is responsible for defining the industry strategy for Siemens, leading the definition of industry solutions for Aerospace and Defense customers. Prior to joining Siemens, Dale worked at The Spaceship Company, a sister company to Virgin Galactic, as the VP of Engineering and VP of Program Management, leading the development of spaceships for space tourism. He led the team on a successful flight to space in December 2018. Previously, Dale worked at Textron Aviation/Cessna Aircraft in program and engineering leadership roles.
Data and Data Management in the Context of Digital Twins
Tiziana Margaria and Stephen Ryan
Abstract With the emergence of Industry 4.0, smart applications and systems are becoming the norm, transforming traditional industries and sectors towards an increasingly data-driven approach. Digital Twin technologies are one of the driving paradigms for the production, integration, sharing and reuse of data. In this context, understanding and mastering the description and formalisation of data as well as its proper, secure, compliant management are crucial for a large-scale adoption of the Digital Twin technology. We introduce the essential concepts for data, data structures and data management, and examine the importance of specific technologies, like ontologies, for the meaningful description of data and their relationships. We cover the essential modelling layers connected with data in this context, and present selected collaboration platforms for Digital Twin ecosystems in a context of interoperability as envisaged in the Digital Thread.
Keywords Data · Data management · Digital Twin · Digital thread · Interoperability · Modelling · Ontologies · Frameworks · Ecosystems
T. Margaria (*) · S. Ryan
Department of Computer Science and Information Systems, University of Limerick, Limerick, Ireland
e-mail: [email protected]; [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_10
1 Introduction
The understanding of Digital Twins is constantly evolving, in connection with the maturation of the field and the adoption in industrial and organizational practice. This evolution has a counterpart in the understanding of what data is required for such an application to be successful within an industry setting, and of the induced data management issues. Based on various questionnaires with partners both in academia and industry along with current research publications, the intent of this
chapter is to discuss the importance and nature of the data needed to create, implement and run Digital Twins, together with some core practices around the data-related management, governance and standards. It will also address the various categories of data that need to be considered, examine potential data outputs and how data can be used in practice to the benefit of the DT end user, especially with the accelerated adoption in Industry 4.0. It is important to showcase the synergies between the data and the Digital Twin at a finer level of granularity first, and move towards a more holistic understanding by providing information on the important areas of concern that should be considered when examining data within a Digital Twin. Examining the emergence of the Digital Twin with a specific emphasis on the data, we see that initially a Digital Twin was a simple replica or digital representation of a physical asset, mostly produced and used in isolation [26]. The practice has evolved considerably, and today it is essential that the physical asset and its Digital Twin can easily share data, communicate, and seamlessly coexist in an increasingly elaborate system-of-systems context. This is rendered possible by the reduction in cost of sensors and Internet of Things (IoT) devices and solutions. Data can be exchanged at different intervals, depending on the purpose of applications such as real-time data, data stream processing, or sporadic ad-hoc data collection, e.g., to save energy. With the new wave of increasingly connected and tightly integrated smart manufacturing, this sharing of data relies heavily on the digital thread to connect data and processes across the boundaries of systems, representations, and technologies. This end-to-end threaded connection enables increased and improved insights into the functioning, and also the malfunctioning, of physical assets. But a deeper and integrated data understanding and management are needed in order for this to occur correctly. More importantly, data should be easily integrated across several customized digital twins for system-level use, and datasets should be easily modifiable while the Digital Twin is being developed or enhanced. Without this ease of integration and evolution, the Digital Twin will simply compile and sort a local and static collection of data. This is not sufficient to promote the holistic approach and accessibility needed for the Digital Twin technology to integrate seamlessly with other technologies, internal or external to an organisation. This chapter introduces the data lifecycle and data management, addressing data sources and formats in use, the collection of digital data, the quality criteria of the data, and its governance and standards. Specifically, Sect. 2 examines the motivation for an effective data description and formalisation in the context of Digital Twins, while Sect. 3 discusses data management aspects, like data structures and formats, and ontologies that are beneficial to share a common vocabulary of concepts and relations. Section 4 considers data in the Digital Twin ecosystem under several facets, like types and sources, also in relation to the communication capabilities. Section 5 addresses ontology creation and its use in an example Digital Twin use framework. Section 6 examines data quality, storage and security, what data to store, and the types of data flow. Section 7 sketches examples of large-scale Digital Twin ecosystems and presents some past and future interoperability and exchange platforms.
Finally, Sect. 8 concludes with a summary as well as ongoing and future work.
2 Motivation: Why an Effective Data Description Is Important
In the new digital age of Smart Manufacturing, as in Industry 4.0 in general, there is a clear need for an effective description and formalization of data within a Digital Twin. In that context, the vast majority of tangible goods (products and production lines) currently in use is either understood to need a Digital Twin, or could potentially benefit from such an executable model in the form of an in-silico entity on which to virtually explore design, production, quality, lifetime maintenance, etc. The potential benefits of creating a Digital Twin in a number of application areas have been explained in detail in these chapters [40], and consequently the induced need for data and data management needs to be addressed across all such areas. The evolution of industries seen through the lens of specific aspects like their digital transformation into Industry 4.0 exposes a growing importance of the connectivity of cyber-physical systems, which in smart factories happens essentially through data and commands. Accordingly, we see how entire industries become increasingly reliant on specific sources and kinds of data and federated data systems. Ten years ago, job descriptions for data scientists did not appear in an average job search, but today the evolution in industry settings has promoted data as the immaterial "oil" that fuels knowledge, ecosystems, collaboration, optimisation and ultimately profits, so that data-related skills and professions have become central to success and thus highly sought after in a market dominated by scarcity. When examining Digital Twins and data jointly, as one entity, it is important to briefly address the fundamental difference between models and things. Things, be they physical entities, cyber-physical systems, or purely software, are the actors in the real production world, and as such they are the assets responsible for generating income. Models, of whatever kind, from traditional CAD representations to any other modern and sophisticated discrete, hybrid or continuous model, are abstract proxies of those entities, and serve to describe or stand for the "thing" they virtualize. A Digital Twin is the faithful virtual proxy for a given real entity, and it serves at the same time as the interchange mechanism between the two sides. Both the communication and the evaluation occur on the basis of some form of collected datasets, either provided from measurements or from simulations or built as synthetic datasets. This leads to the observation that the role and amount of data are increasing, as inherently more data will be produced and analysed in order to achieve a better understanding of individual systems and their collaboration. According to the WPP report 2020 [46], data volumes, which are already enormous, are set to further explode in the near future. The IDC predicts that 175 zettabytes of new data will be created annually by 2025, up from 33 zettabytes in 2018 [20]. In this context, frameworks for dealing with and understanding data are going to become increasingly central to the entire data-driven economy. This is no different when considering Digital Twins specifically.
In the modern Digital Twin context, the digital thread also needs to be recognised. The digital thread encompasses the concept and the mechanisms enabling end-to-end interconnection and interoperability. Here, this covers essentially the connection between real things and their twin models, but it also includes the communication networks, the design algorithms, and the visualisations needed to properly work in the design, construction and operation phases within a mature Industry 4.0 setting [27]. In this regard, both the twinning and the threading span the model conceptualization, representation, and implementation as well as its applications along the entire product lifecycle. Data handling plays a central role here, as it is data that "threads" the various elements of an application scenario. Data interoperability is accordingly a big topic. Simple frameworks for Digital Twin design and lifecycle consider the dimensions of fidelity, expansibility, interoperability and scalability [39], whereby the expansibility and interoperability are core to the digital thread as well, and the fidelity and scalability directly address the data aspects: data design and data management, respectively. A modern version of the Digital Thread incorporates interoperability provisions needed to deal with data within the device and across devices and their use. To date, large gaps in sharing conceptual frameworks for Digital Twins still hinder proper discussions across different domains that use such a technology [39]. The lack of such a uniform conceptual framework can be attributed to a lack of understanding of the Digital Twin itself across various domains, as well as of the data properties needed for such a representation [45]. Simply put, if a common language were introduced across data within Digital Twins, and this became an easy-to-use reference model, a variety of issues and misunderstandings on what is needed for proper data collection, management and use in such a virtual representation would be avoided [35]. This observation corroborates the position that a further coordinative push is needed in examining Digital Twin structures [30], and this can also be extended to the data perspective of the twin. Consequently, it appears that as of today a large amount of research has been done on creating frameworks for the overall Digital Twins, but we lack research on frameworks for the data itself, in and around the twins. If a clear and understandable framework is not introduced for data and data management within Digital Twins, there will be a large gap in understanding the functions and importance of data within such a virtual representation, hampering the potential exploitation of the Digital Twins.
3 Data Management for Digital Twins
Proper data management is essential for utilising a Digital Twin correctly, as it defines the architectures, practices, policies and general procedures that manage the overall lifecycle of the data used as input and output of a Digital Twin. In general, data management involves the procedures of collecting, storing, organizing, protecting, verifying, and
processing essential data and making it available to the organization(s) so that it can be used within a digital twin. This understanding is also aligned with the value proposition of professional certification bodies such as DAMA, the Data Management Association International, a "not-for-profit, vendor-independent, global association of technical and business professionals dedicated to advancing the concepts and practices of information and data management" whose purpose lies in "promoting the understanding, development and practice of managing data and information as key enterprise assets to support the organization" [12]. Such associations also curate and teach various topic-specific "body of knowledge" collections, like in this case the DMBoK, the data management body of knowledge. Figure 1 shows the 5 top-level phases of the data and data management lifecycle. It is a top-level guide for users, outlining the main areas to consider at set-up and design time and then to monitor continuously, as data is a major area of risk for any organization and any project. While scientific data management is moving toward open and FAIR data as standard practice (Findable, Accessible, Interoperable and Reusable) [15], companies still have large proprietary datasets and fragmented, individual data formats and management policies. While every company and sector has its own standards and practices, the "GO FAIR" community has developed "how to GO FAIR" guidelines, procedures and communities of practice that could serve as a reference for the DT community.
Fig. 1 Data and data management lifecycle – top level view
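As a small, hedged illustration of what "findable and interoperable" can mean in practice, a dataset produced for or by a Digital Twin can travel together with a machine-readable metadata record. The field names below loosely follow common FAIR and DCAT-style practice but are illustrative rather than a fixed standard, and all identifiers and URLs are hypothetical.

```python
import json

dataset_metadata = {
    "identifier": "urn:example:dt:press-line-7:vibration:2023-04",  # hypothetical ID
    "title": "Vibration measurements, press line 7 spindle bearing",
    "description": "1 kHz accelerometer stream used to update the line's Digital Twin.",
    "creator": "Maintenance Engineering",
    "issued": "2023-04-30",
    "license": "internal-use-only",
    "format": "text/csv",
    "schema": {
        "timestamp": {"type": "string", "semantics": "ISO 8601, UTC"},
        "rms_acceleration": {"type": "number", "unit": "m/s^2"},
        "spindle_speed": {"type": "number", "unit": "rpm"},
    },
    "provenance": "raw sensor feed, no resampling, firmware v2.3",
    "access_url": "https://example.org/datasets/press-line-7/vibration/2023-04",
}

print(json.dumps(dataset_metadata, indent=2))
```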
4 Data in the Digital Twin Ecosystem
Critical for the successful operation of the twins are the availability of clean and operable data resources and an understanding of each data element [2]. Only with proper use can a Digital Twin contribute to generating value for the company, and in any business, value creation is essential for the successful adoption of their products and services. Figure 2 sketches a possible structure of the connection of the data and value creation aspects associated with a Digital Twin: the Digital Twin needs data resources in order to produce internal as well as external value. The vocabulary of the concepts and their relations plus their association structure constitute the core elements of an ontology. Defining a possibly domain-specific ontology is the first step in creating a framework to follow in terms of data [2]. This ontology structure can also be interpreted as a mapping of the data ecosystem centred around the Digital Twin and its use for value creation. Seeing the explicit relation that exists between data and Digital Twin-mediated value creation could also be a motivation to improve data sharing across more or less tight partnerships in many industry sectors where this is not yet common practice. When data is used within Digital Twin technologies, many data management issues arise. Considering for example the manufacturing industry, which is the prevalent adopter of Digital Twin technologies, the variety and heterogeneity of data is a particularly difficult issue to handle. The reason for this is that the manufacturing industry historically and naturally produces large amounts of different data across various domains. Mostly it is siloed and not integrated, thus "Such a large variety of data raises the data integration, data cleansing and data fusion issues" [44]. Considering why data management is a difficult task in a Digital Twin environment,
Fig. 2 Ontology [2]
it is useful to look at what such a platform is used for exactly. As the Digital Twin is essentially a virtual representation of a physical object (product or tool), and that object will need to be analysed and optimized under a multitude of different aspects, such as system failures and other foreseen and unforeseen events and behaviours, massive amounts of data may need to be shared with the twin in real or near-real time in order to detect, test, and ideally pre-empt such scenarios. This "twinned" mode of functioning is similar to Runtime Verification in system design (cite). It requires up-to-date, continuous monitoring of data feeds from (often IoT) devices, comparison and analytics with respect to historical data, and the use of various AI or machine learning (ML) models that need to be efficiently integrated with the twin. High data rates and high data volumes place demands on the velocity, frequency, and precision of the communication networks and of the algorithms and computational units. The pressure stemming from the data makes it more difficult to manage all the continuously changing aspects in comparison to the normal data management plans of other organisations.
4.1 Data Structures Data structures [8], and data modelling in general, have been used for many decades; however, traditional data models orient themselves to the level of description sufficient for databases, information systems, or programming environments, and lack a deep semantics rooted in the meaning and purpose within the application domain. For example, the data types of many data elements are described generically, as integer, real or string, while in reality a "real" value may be a temperature, and therefore semantically incompatible (and thus incomparable) with another real value that represents, e.g., a measurement in inches or a weight in kg. In order to understand data structure-related design issues in some more detail, we need to consider what a data structure is and why we need them. A data structure is a container that organizes data in a specific logical format and stores it in an appropriate memory layout. The choice of format and layout are sometimes coupled but sometimes disjoint, so that such design choices allow a data structure to be efficient in some operations - preferably the most frequent and critical ones - and inefficient in others. As in any application, in connection with Digital Twins it is important to understand the characteristics and trade-offs of different data structures and their implementation variants, in order to pick the optimal alternative for the specific problem at hand, the granularity and size of the data, the traits of the specific DT and of its context (e.g., its energy and communication profile), as well as its specific use. Commonly Used Data Structures The most commonly used data structures include basic structures like arrays, stacks, lists, trees and graphs, but also derived and compound structures like linked lists, queues, matrices, records, hash tables, tries and many more. More recently, the
knowledge graph [9] has proven to be a successful representation of the relations between semantic concepts and attributes. Most data for Digital Twins and their environment are parametric: it is important to be able to change some parameter values in the virtual twin in order to carry out a variety of analyses. Examples are load stress, tolerance, response times, and more that arise when using the model to explore a design space, a behaviour space, and in general for any kind of "what if" analysis. Parameters are also commonly used for configuration, scalability, and for selecting options. Semantic Web technology [4] has advocated, since the early 2000s, new ways of managing data, with attention to the domain-specific meaning of data items and to the automated understandability of the values and their associated "meaning descriptors": the metadata [24]. The tool of choice for the description of meaning is the ontology, which is itself a labelled graph, i.e., a data structure.
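As a small illustration of how such metadata can give a raw value a domain-specific meaning, the following sketch builds a tiny labelled graph of triples around a temperature reading, so that its unit and semantic type travel with the value; the rdflib Python library and all names in the vocabulary are assumptions made for this example, not tools or terms prescribed by the chapter.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

# Hypothetical vocabulary for a plant-floor Digital Twin
EX = Namespace("http://example.org/dt#")

g = Graph()
g.bind("ex", EX)

# Each statement is a (subject, predicate, object) triple
g.add((EX.sensor42, RDF.type, EX.TemperatureSensor))
g.add((EX.reading_001, RDF.type, EX.Measurement))
g.add((EX.reading_001, EX.observedBy, EX.sensor42))
g.add((EX.reading_001, EX.hasValue, Literal(72.4, datatype=XSD.double)))
g.add((EX.reading_001, EX.hasUnit, EX.degreeCelsius))   # meaning, not just a float

print(g.serialize(format="turtle"))
```

Because the unit is stated explicitly, a consumer can refuse to compare this value with another "real" number that happens to be a length in inches.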
4.2 Data Description The most elementary data representation format created for the semantic approach is the Resource Description Framework (RDF). It is simple and therefore technically easy to manipulate, and it is useful when the "properties and values in the domain are defined by shared schema or ontology" [44]. As defined by Guarino [19] and Husáková and Bureš [21], and shown in Fig. 2, "an ontology refers to an engineering artifact, constituted by a specific vocabulary used to describe a certain reality, plus a set of explicit assumptions regarding the intended meaning of the vocabulary words". A semantic data description system able to express the meaning of a domain-specific vocabulary is beneficial for adoption in Digital Twins and data management. However, ontologies are difficult to design, as they represent the shared understanding of a community of practice with respect to the conceptualization of the entities they care about and their relationships. To be useful, an ontology needs to be built in a collective effort by a community that achieves consensus and is ready to adopt it for use. It is extremely costly and labour-intensive to produce, and accordingly rare. Most of the time, researchers and communities define so-called upper ontologies, which formalize the abstract concepts of a domain, which are more frequently unequivocal, but do not dig deeper towards the fine-granular details defining concretizations useful for the case-by-case adoption in practice. For example, the ontology in Fig. 2 is a very abstract upper ontology. Unfortunately, at the detailed level of a Digital Twin we also need the middle and the lower ontologies, which refine the concepts down to the single instances for a specific technology, a specific product, or a specific equipment vendor. These finer-granular ontologies mostly do not exist. Driven by the need for interoperability (i.e., by the general striving towards the digital thread), the various industries are individually progressing towards more standardized vocabularies and a progressive collectivization of the lower, value-producing layers of their IT. In this sense, although we have not experienced the run to
the Semantic Web adoption predicted by that community over a decade or two ago, in many domains steps are being taken in that direction because they solve, or at least mitigate, the burning problem of siloed systems and applications. As the combined value of the market for data and Digital Twin use is expected to grow substantially, in the same manner as the data volumes mentioned earlier, from USD 3.8 billion in 2019 to USD 35.8 billion in 2025 [29], the push towards ontologies for a shared vocabulary as well as towards proper data management is gaining momentum. The generic infrastructure starts with the data, often in a domain-independent way; it continues with the guidance principles for good and compliant data management, then reaches collections of case studies and best practices of adoption. In the specific Digital Twin context, ensuring that the data design and management methods are generic basically guides the user to analyze and choose the various techniques that are important for their domain. This puts the choice of architecture and framework back into the control of the domain experts, with guidance to support the process. According to [44], which introduced the generic model shown in Fig. 3, there are three interrelated layers: the Physical, the Data and the Model layers. Each layer has a predefined scope and purpose, and they are all interconnected, as shown in the figure. • The physical layer handles the data traffic from/to the devices, like the data collection from sensors and their aggregation, and also the overall configuration of such IoT devices. • The data layer comprises the data repositories. They hold the active and real-time data generated from the physical layer, which can be fed into the Digital Twin itself. It also includes the different (data) sources and the related enterprise systems that are to be integrated/used during the cycle [31].
Fig. 3 Information flow for a Digital Twin [44]
• The model layer holds the models along the lifecycle: behaviour models and logical models for reasoning, which deliver the predictive capability of the technology. If the data is kept in an organized and easily manageable manner, then the data flow will become seamless when interacting with the digital as well as the physical twin. The following traditional, linear overall methodology provides simple, phased guidance to organizations on how to manage their data set-up, usable also in connection with a Digital Twin. Ideally it produces one or more ontologies at different granularities, but in most cases it ends up also producing a well-described (relational) data collection. 1. Map Phase: Analyze the functional layers of methods/processes, in an asset behaviour analysis. In this way, identify and map the classes of the proposed ontology model. 2. Define Phase: Define key data elements and their types for each class of the proposed ontology model. 3. Create Phase: Create the ontology model and prepare its relational implementation by converting relations between classes into object properties. Insert data elements as data properties with logical restrictions. 4. Convert Phase: Convert the ontology model into a relational data model. Apply keys and cardinality as appropriate. 5. Populate Phase: Populate the relational data model with real datasets. (A small illustrative sketch of the Create and Convert phases is given at the end of this subsection.) While phases 1 and 2 are totally generic, and often do not take place sequentially, but rather in a parallel or intertwined way, in most real cases the Create phase ends neither with an ontology nor with a relational mapping, but with structured models stored in spreadsheets or files. This is mostly due to the popularity of such formats and techniques, which are perceived as easier to set up and maintain by non-technically versed staff. Data for digital twins plays a role in at least two distinct phases: 1. At design time, data is used for building the Digital Twin 2. At use time, output data from the digital twin is used to make predictions As data is an essential part of creating a Digital Twin, handling large amounts of possibly unstructured or uncleaned data can be complex and cumbersome. Already obtaining the data can be quite difficult, as nearly all private sector businesses and also many public organizations are often unwilling to share information deemed sensitive. It is therefore crucial to understand what data is needed in order to build and run a Digital Twin, whether it needs to be open source or privately held, and to have clear guidance on how to legally and ethically proceed for data acquisition and its subsequent governance. A Digital Twin often serves as a real-time digital counterpart to the physical system, for example for predictive analytics or to support optimal management and use of the physical counterpart. For this to happen, it may have to be fed with large volumes of data. The availability of that data, and its availability in a suitable form
for use are thus key factors for the viability of Digital Twin adoption and deployment. From a Digital Thread point of view, proper data interoperability is a make-or-break prerequisite. Central to this feasibility are the availability of data sources as well as the data format, coding, and representation. We therefore now address those two aspects.
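Before doing so, here is a small, purely hypothetical illustration of the Create and Convert phases of the methodology outlined above: the sketch turns one ontology class with data and object properties into a relational table definition. The class, property and table names are invented for the example.

```python
# Hypothetical ontology fragment: one class, its data properties and their types.
ontology_class = {
    "name": "Machine",
    "data_properties": {
        "serial_number": "string",
        "spindle_speed_rpm": "integer",
        "installed_on": "date",
    },
    # Object property pointing to another class, to become a foreign key.
    "object_properties": {"located_in": "ProductionLine"},
}

SQL_TYPES = {"string": "VARCHAR(255)", "integer": "INTEGER", "date": "DATE"}

def to_create_table(cls: dict) -> str:
    """Convert phase (simplified): map an ontology class to a relational table."""
    cols = [f"{cls['name'].lower()}_id INTEGER PRIMARY KEY"]
    cols += [f"{prop} {SQL_TYPES[typ]}" for prop, typ in cls["data_properties"].items()]
    cols += [f"{prop}_id INTEGER REFERENCES {target.lower()}({target.lower()}_id)"
             for prop, target in cls["object_properties"].items()]
    return f"CREATE TABLE {cls['name'].lower()} (\n  " + ",\n  ".join(cols) + "\n);"

print(to_create_table(ontology_class))
```

In practice, as noted above, many organizations stop short of this step and keep the structured models in spreadsheets instead.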
4.3 Primary and Secondary Sources of Data To be readily usable, data needs to be available in a form that can be analysed, presented, and easily interpreted. If the data is available, but cannot be read or efficiently represented, it is de facto inaccessible and thus useless. When gathering and sorting data, primary data sources and secondary data sources are distinguished. Primary data is data originating in the organisation itself (e.g., from its own IoT sensors), while secondary data is (possibly pre-existing) data shared by other organisations, e.g., for the purpose of conducting an analysis. For example, open-source data which can be input into the twin is in this sense a secondary data source.
4.4 Types of Data Sources Data can be captured in many different shapes, some of which may be easier to extract than others. Having data in different shapes requires different storage solutions, which may need to be managed in different ways. In general, three main types of data are distinguished: structured data, unstructured data, and semi-structured data. Structured Data is data organized in well-defined structures (usually combinations of the data structures defined above) that are easy to manipulate. For example, tabular data are organized in columns and rows, mostly with well-defined data types, and are thus easy to analyse. The main advantage of structured data is that it is easily stored in databases, and easily entered, queried and modified. Structured data is often managed in spreadsheets, or in relational information management systems that use the Structured Query Language, or SQL – a programming language created for managing and querying data in relational databases. GraphQL [47] is now a popular language for querying structured APIs. It creates a uniform API across an entire application collection without being limited by a specific storage engine. Unstructured Data is any raw form of data that does not follow a predefined structure, and it can be of any type or file format. An example would be IoT data, which is normally extracted in JSON format. This data is often stored as is in file repositories such as cloud environments like AWS. Extracting valuable information
from sources of unstructured data in order to use it for a Digital Twin can be very challenging, because the structure and semantics of the data items can be underspecified and therefore already their parsing can be a challenge. Semi-structured Data is an amalgamation of structured and unstructured data. Semi-structured data may have a consistently defined format, as seen in structured data, but the structure may not be very strict throughout the data set. The structure may not necessarily be tabular, and parts of the data may be incomplete or contain differing types. This type of data source can be the most difficult to harness for a Digital Twin, as abnormalities within the data set may not be noticed before the data is input.
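The following sketch illustrates why semi-structured sources are hard to harness: two hypothetical IoT payloads share a rough shape but differ in field names and types, so the ingestion code has to normalise them defensively before they can feed a Digital Twin. Device names, fields and values are invented for the example.

```python
import json

# Two payloads from the "same" device family: fields and types are not stable.
payloads = [
    '{"device": "press-07", "ts": 1699372800, "temp": 71.3, "unit": "C"}',
    '{"device": "press-09", "timestamp": "2023-11-07T16:00:00Z", "temp": "72"}',
]

def normalise(raw: str) -> dict:
    """Bring one semi-structured record into a fixed internal shape."""
    rec = json.loads(raw)
    return {
        "device": rec.get("device", "unknown"),
        # Different field names for the same concept must be reconciled.
        "timestamp": rec.get("ts") or rec.get("timestamp"),
        # The same field may arrive as a number or as a string.
        "temperature": float(rec["temp"]) if "temp" in rec else None,
        # Missing metadata (here: the unit) has to be flagged, not guessed.
        "unit": rec.get("unit"),
    }

for p in payloads:
    print(normalise(p))
```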
4.5 Historical and Real Time Data Historical data sources contain data collected over a past period of time. They offer insights that can improve long-term and strategic decision making; thus they are useful for building or modifying predictive or prescriptive models of the physical representation of a Digital Twin. The basic definition of real-time data describes it as data that is passed along to the end user as quickly as it is gathered. Real-time data can be enormously valuable in things like traffic GPS systems, in benchmarking different kinds of analytics projects, and for keeping people informed through instant data delivery. Historical datasets can help in answering questions when decision makers would like to benchmark past data against real-time data. When building or using a Digital Twin, it is therefore very useful to have the ability to store and maintain both historical and current data about the physical object [32, 33]. In predictive analytics in a Digital Twin, both types of data sources should be given equal consideration, as both can help in predicting and identifying future trends.
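A minimal sketch of how historical and real-time data can be used side by side: a baseline computed from past readings is compared with each incoming value, and large deviations are flagged. The readings and the threshold are invented for illustration.

```python
from statistics import mean, stdev

historical = [20.1, 20.4, 19.8, 20.0, 20.3, 20.2, 19.9, 20.5]   # past readings
baseline, spread = mean(historical), stdev(historical)

def check_reading(value: float, k: float = 3.0) -> str:
    """Benchmark a real-time reading against the historical baseline."""
    if abs(value - baseline) > k * spread:
        return f"ALERT: {value} deviates from historical baseline {baseline:.2f}"
    return f"ok: {value}"

for live_value in [20.2, 20.6, 23.9]:   # simulated real-time feed
    print(check_reading(live_value))
```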
4.6 The Digital Twin as Data and Communication Entity We need to consider that the Digital Twin, being a model, is itself a citizen of the data space. An organization therefore needs to consider how to organize, store and manage the files that constitute the Digital Twin itself. It may be a program, then subject to versioning, maybe managed in a repository where the Digital Twin developers organize its design, use and evolution. It may be a mathematical or statistical model, consisting of a set of equations, relations, or other similar entities. The same applies to the context, which may be formulated in terms of constraints, maybe through matrices or other data structures. The same again applies to rules and policies (e.g., for access or governance), and various properties and their related data. Considering the communication aspects, the direction of data flow between the twins and with the context is important. Does data only flow from the Physical to the
Digital twin, or vice versa? Or bidirectionally? In each of these cases we have a different kind of Digital Twin system: reactive, useful, e.g., to learn the behaviour of the physical twin and create a model; or predictive; or for real-time monitoring and co-evolution, e.g., when AI and autonomous learning are involved. In any case, what is the communication capability of the Digital Twin with its environment and with the other twin [41]? What is the bandwidth? In which direction? Which communication channels (inputs/outputs) and communication technologies are available? This is essential when considering the data pressure on each of the twins. Under some circumstances the data collection and elaboration may be feasible locally, on the edge, or in the cloud. Maybe one needs to resort to a traditional distributed architecture, or, if there are centralized or siloed systems, some uses or some management methods may be completely unfeasible. It is also possible to think of a servitization option, where the Digital Twin is made accessible as a service, or the data is accessed as a service. In this case, the design, efficiency and understandability of the APIs are crucial, as it is necessary to imagine and design how to manage the data, and how to know what happens server-side/client-side for a variety of usage scenarios that may not be known at design and implementation time [11].
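To give a flavour of the servitization option, the following deliberately minimal sketch exposes the state of a mocked Digital Twin over an HTTP API and accepts measurements pushed by the physical counterpart. Flask is used here merely as one possible technology choice, and the endpoints, fields and asset names are invented for illustration.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Mocked Digital Twin state; a real twin would hold models and live data streams.
twin_state = {"asset": "pump-12", "mode": "monitoring", "last_temperature": 71.3}

@app.route("/twin/state", methods=["GET"])
def get_state():
    """Read access: clients retrieve the current twin state as a service."""
    return jsonify(twin_state)

@app.route("/twin/measurements", methods=["POST"])
def push_measurement():
    """Write access: the physical twin (or its gateway) pushes new data."""
    payload = request.get_json(force=True)
    twin_state["last_temperature"] = payload.get("temperature")
    return jsonify({"accepted": True}), 201

if __name__ == "__main__":
    app.run(port=8080)
```

The design questions raised above (what happens server-side vs. client-side, which usage scenarios are supported) are exactly the ones such an API surface makes explicit.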
5 Ontology Creation and Example Framework As previously mentioned, the overall modeling approach for Digital Twins suffers from a number of issues with the data used in Digital Twins, such as the "structural heterogeneity of data, which hampers the real-time simulation and adjustment in assembly process" [1]. The enhanced approach to creating an ontology sketched in Fig. 4 by Noy and McGuinness [34] considers pre-existing data and ontologies and works iteratively. This process was used to produce the ontology of Fig. 2. In the building automation systems domain, such an "ontology is more modular and comprehensive in comparison to existing models" [36]. While [1] refers explicitly to "ontologies for part Digital Twins oriented to assembly", the process of Fig. 4 is very abstract and fully generic, thus it can be followed in a wide variety of Digital Twin domains and also for non-Digital Twin ontologies. In the context of Digital Twins, we distinguish at least problem domain ontologies, which describe the knowledge and "things" in the sector under consideration (like manufacturing, built space, health and medicine), and Digital Twin ontologies, which concern the specific domain of the Digital Twin itself and its characterization, as well as the organization of the systems supporting the Digital Twin. Once a correct and functional ontology is available, it is important to consider an overall framework providing guidance in utilising data in an efficient manner in a Digital Twin. Produced by the authors of [2] from their empirical research, the diagram in Fig. 5 shows a graphic summary representation of a hierarchical, multifaceted analysis of the main dimensions of a Digital Twin. The three main facets identified are Data
Fig. 4 Process of creating an ontology [34]
resources, Internal value creation and External value creation. Each facet is further organized in 9 facet-specific nominal categories and hierarchical ordinal variables. This leads in total to a model with 3 × 27 = 81 different Digital Twin subcategories that can conveniently be distinguished, discussed and combined, forming the composite Digital Twin for any specific use case [2]. The framework is generic; thus it provides the user with a domain-agnostic approach to structuring a Digital Twin. Data Resources Data and information are the basis of the smart integrated manufacturing environment [3]. Insufficient data quality and quantity are an insurmountable barrier to the creation of a useful virtual representation. Given a Digital Twin, generated data and various analytics are the key to the creation of value and enable the DT's success. The data-related resources, also occurring in the ontology previously mentioned and in Fig. 2, concern Data sources, Data categories and general Data formats. Having and properly analysing these resources is critical for creating an efficient Digital Twin [42]. External Value The external value creation facet concerns the situation and effects in the general market. There, cooperation is needed with customers, partners and other actors in the domain. The external values considered in the model concern (a) the attributes of the services corresponding to the value proposition, (b) the level of smartness of
Fig. 5 Digital Twin framework [2] – radial diagram organizing the three facets (Data resources, Internal value creation, External value creation) into categories whose legible labels include Data Source, Data Category, Data Format (structured/unstructured), Availability, Quality, Smartness, Connectedness, Service Scope, System Hierarchy Level, Control/Optimize/Authority, Lifecycle (Beginning/Middle/End of Life) and Generations/Time (Past/Present/Future)
the connected products, and (c) the actors on the different levels of the ecosystem [2]. Internal Value The internal value creation is realized within the company itself; it directly impacts the company and can have a large influence on the value chain. The main areas considered are (a) the lifecycle phases of products, (b) the product management levels and (c) the different generations of both [2]. Use of the Framework Following the above framework, users wishing to adopt Digital Twins in a product-oriented context are supported in defining what data matters, and they are guided to consider a collection of factors and effects in the organization and in the markets. This helps them to identify and implement Digital Twin-related activities and then provide evidence of these activities and their effects to diverse stakeholders. The framework serves to lead the adopters to acquire and exploit this ability. By
providing a structured approach and a simple guide to addressing the three facets, the framework helps identify and then pursue the key elements that make a Digital Twin workable and understandable, so that it provides value for the end user and the organization. Data quality and handling are here key factors.
6 Data Quality, Storage and Security 6.1 Data Quality Given the potential benefits of utilising Digital Twins in various domains, ensuring proper data preparation and good data quality are the main objectives for extracting value when using such instruments. The same applies to new data-driven technologies like Artificial Intelligence (AI) and Machine Learning (ML). As the saying goes, "there is no AI without data" [18]. As AI and ML technologies are often part of the Digital Twin infrastructure, as explained in detail in [32, 33], this understanding must extend across the DT model and all its supporting technologies. Of core importance are thus the data layer and model layer previously introduced. There, data quality concerns the six main quality dimensions shown in Fig. 6: any shortfall in any of them will adversely affect the Digital Twin model, its quality and value. Accuracy Poor data accuracy can substantially impair an otherwise faithful Digital Twin. Inaccuracies in datasets create serious problems when using the Digital Twin, as they distort results and data outputs. Accurate data means error-free data sets, which are critical for the successful adoption of a DT in any organisation. Data cleansing, noise elimination, various plausibility analyses, quantitative and semantic data constraints, and many other techniques can be adopted to ensure accuracy.
Fig. 6 Data quality criteria [5]
Causes of inaccuracy can be poor data entry practices, poor regulation of data accessibility, and also simply ignoring data quality. Organizations struggle in this area due to poor data culture, data hoarding, outdated technologies, and also the sheer cost of establishing and then maintaining a good level of data accuracy. Completeness Completeness can be defined as the expected comprehensiveness of the dataset itself. Data can be complete even if optional data that is not required at this moment is missing. In general, as long as the data meets the expectations of the project, the dataset is considered complete. Relevancy Data that is collected and input into the Digital Twin needs to be relevant for the task or modelling aspect under consideration. A relevant dataset is basically indisputable, and it leads to actionable analytics that are credible and to outcomes that the business or organisation can trust and use. This in turn potentially leads to strong decisions and strategies, as they are based on strong hypotheses. Meaningful data is also essential for optimisation: if a Digital Twin does not have meaningful data, there is no safe ground on which to decide on changes. Validity Data validity or data validation is an essential part of handling and sorting data, especially if the dataset in question is IoT data or data collected from some device. Accuracy is a precondition to validity: if a dataset is not accurate from the beginning, this step becomes obsolete. That is why the other steps are extremely important to implement. Timeliness Timeliness of data refers to the availability and accessibility for a Digital Twin of data that is adequate in the time dimension. Ensuring that the data is clean and extremely well organized helps the Digital Twin to produce excellent output. Violation of timeliness could for example arise from processing practices that induce delays, like batch processing or periodic updates that provide new data or produce new results at a time when the context for an action or decision has significantly changed. We then talk of stale data or stale information. Recognizing that, or when, information becomes stale is essential in all models that deal with context-dependent decisions or actions. Consistency Data consistency means that there is consistency in the measurement of variables throughout the data set of a Digital Twin, e.g., through standardisation. Consistency is a critical concern especially when data is aggregated from multiple sources, which frequently happens when dealing with IoT data and open-source datasets. Discrepancies in data meanings between data sources can create inaccurate,
unreliable datasets that cannot be used or, if used, are really dangerous. Examples of inconsistencies are different units of measurement (inches vs. cm), or mismatches in the resolution of different measurements. For example, different sampling frequencies may not allow a proper correlation between data streams; in geographic applications, map resolutions may be either too coarse to enable finer observations, or so detailed that the abstraction of aggregated information costs so much computation that the result, once achieved, is already stale. The same applies to meteorological information, where the density of collection of meteorological data may be too coarse for the intended use, e.g., with respect to specific microclimates.
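As a small illustration of how some of these quality dimensions can be checked automatically before data enters a Digital Twin, the sketch below screens a batch of invented records for completeness, validity, unit consistency and staleness; the required fields, allowed units and thresholds are assumptions made for the example.

```python
import time

REQUIRED = {"sensor", "value", "unit", "ts"}
ALLOWED_UNITS = {"C"}            # the twin expects Celsius only
MAX_AGE_S = 60                   # older readings are considered stale

def quality_issues(record: dict, now: float) -> list:
    issues = []
    missing = REQUIRED - record.keys()
    if missing:                                               # completeness
        issues.append(f"missing fields: {sorted(missing)}")
    if "value" in record and not isinstance(record["value"], (int, float)):
        issues.append("non-numeric value")                    # validity
    if record.get("unit") not in ALLOWED_UNITS:
        issues.append(f"unexpected unit {record.get('unit')!r}")  # consistency
    if "ts" in record and now - record["ts"] > MAX_AGE_S:
        issues.append("stale reading")                        # timeliness
    return issues

now = time.time()
batch = [
    {"sensor": "t1", "value": 21.4, "unit": "C", "ts": now - 5},
    {"sensor": "t2", "value": "21.4", "unit": "F", "ts": now - 300},
    {"sensor": "t3", "value": 20.9, "ts": now - 2},
]
for rec in batch:
    print(rec["sensor"], quality_issues(rec, now) or "ok")
```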
6.2 Data Storage – Distributed vs. Centralized and Information for a Digital Twin Many businesses that create a Digital Twin have multiple sets of data fragmented in some sort of silos. Data is often distributed over several internal systems and digital platforms, but also externally, on third-party vendor platforms or as external open-source data. The attractiveness of many independent sources is high; however, the integration can be difficult, and even the discovery and availability of useful data sets can be endangered. Centralising an organisation's data can create a powerful position of control for the future. It supports the management of completeness (this is all the data we have) and the evaluation of relevance (if it's not there, it's really missing), and may therefore enable a more economical and standardised management. The cost of course is the usual cost of centralization: every local data source must connect to the centre, possibly with local inconveniences and double management. At the most concrete level, data centralisation is about collecting and storing all the Digital Twin's data assets in one easy-to-access location. Data sets are extracted from the data silos and stored in a repository that is made easily accessible to the entire team dealing with the implementation of the physical representation. The benefits include: –– better overview: new ideas can arise from having a full and automated dataset at one's disposal, which can be connected to a Digital Twin for prototyping, experimental research, variant analysis, evolution testing. –– better oversight: in terms of governance, centralisation simplifies enforcing uniformity: format standardization, access rules, auditing, maintenance and other checks have a single point of action. –– easier aggregation: accurately merging and combining different datasets is essential to data aggregation. If the various datasets are cordoned off in silos, aggregation becomes very difficult and possibly very manual [25], leading to excessive effort and acting as a deterrent to use. Aggregation is, for example, at the core of any fusion of data series.
–– better growth and scalability: as the use of Digital Twins in the organisation expands, so will the amount of data that is needed and the data analysis that will need to be carried out. Eliminating the silos is beneficial for the ease of aggregation and use by the facilitators as a Digital Twin project becomes larger with time. This enables the digital twin to adapt to the organisation's needs along with the growth and evolution of the organisation. Strategic storage decisions can also support or impair data sharing (including FAIR open data) and replication, for example for benchmarking, comparison, evolution, optimisation. Here, regulatory aspects can also play a role. For example, auditing and forensic data usage are possible only when the data is kept or can be deterministically reproduced. Similarly, there may be requirements to make the data open and accessible, e.g., due to specific policies of public funding bodies if national or international funding is involved.
6.3 What Data to Store? The question of what data is volatile and what needs to be more or less permanently stored is a big design decision. Especially in times of cloud-based systems, where the costs depend on the size of the storage and the use of the computation resources is metered and expensive, how much and which data to store is also an economic decision. Not everything may need to be stored: established techniques like checkpointing, abstraction and summarization may help in this context. The underlying reasoning is that one may know which parts of the computation are deterministic, so that one can identify decision points that are cuts, which need to be documented (i.e., stored) in order to reconstruct the full execution, while deterministic segments may support easy data reconstruction by rerunning the computation from the last stored status. Given that models also evolve over time, this concerns • Storing the model (model as data) • Storing the runtime data: simulation data, real live data, historic data, etc. • Versioning, basically of everything. The important point here is that early decisions are often difficult to undo, and changes of policies may impact the availability, completeness and compatibility of data sets, thus impacting the usefulness and traceability of the Digital Twin results. Therefore, it is important to consider right away at design and set-up time the dimensions of evolution and growth that can be expected, in order to take decisions that make those evolutions possible with reasonable cost and grief.
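The sketch below illustrates the checkpointing idea in a deliberately simplified form: only every K-th state and the input log are stored, and any intermediate state can be reconstructed by replaying a deterministic step function from the nearest earlier checkpoint. The step function and all numbers are invented for illustration.

```python
def step(state: float, inp: float) -> float:
    """Deterministic step: the same state and input always give the same result."""
    return 0.9 * state + inp

K = 100                       # store one checkpoint every K steps
inputs = [0.1] * 1000         # the input log is the non-recomputable part, kept in full
checkpoints = {}

state = 0.0
for i, u in enumerate(inputs):
    if i % K == 0:
        checkpoints[i] = state        # sparse storage instead of all 1000 states
    state = step(state, u)

def reconstruct(t: int) -> float:
    """Rebuild the state at step t by replaying from the nearest earlier checkpoint."""
    base = (t // K) * K
    s = checkpoints[base]
    for u in inputs[base:t]:
        s = step(s, u)
    return s

print(reconstruct(357))   # identical to what the full run produced after 357 steps
```

Which decision points are worth storing, and how much recomputation one is willing to pay for, is exactly the economic trade-off discussed above.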
7 Data in Digital Twin Ecosystems We consider here a few selected initiatives, current and past, that aim at establishing functioning ecosystems around Digital Twins or around large-scale interoperability, which is relevant for Digital Twin success as well as for the Digital Thread.
7.1 The Gemini Principles With the existence of an increasing number of digital twins in each sector, it is conceivable to move towards a network of cooperating Digital Twins. Initiatives in this direction are already happening: the Centre for Digital Built Britain (CDBB) at Cambridge University is working towards creating an ecosystem of connected digital twins for the built space. This may effectively function like a national digital twin, opening the opportunity to release even greater value, using data for the public good. The projection is that greater data sharing could release an additional £7bn per year of benefits across the UK infrastructure sectors [7]. The CDBB is targeting a smart digital economy for infrastructure and construction. The key to the sharing is the identification and then adoption of foundational definitions and guiding values – the Gemini Principles – to begin enabling alignment on the approach to information management across the built environment. The Gemini Principles are reported in Fig. 7. Relevant for us is that the principles establish the foundation of proper, meaningful, safe and secure "twinning", de facto defining the criteria for a proper interfacing and interoperation of Digital Twins that are separately created and independently maintained.
Fig. 7 The Gemini Principles [7]
While the Purpose principles address the social and economic value of federating the Digital Twins, the Trust principles endorse enabling secure access and sharing, openness and quality. The Function principles address federation, curation and evolution. Of the nine principles, Quality and Curation specifically address the data quality and management around the Digital Twins, while Federation concerns forms of standardization that enable interoperability. We see here that the federated approach faces the same challenges and trade-offs concerning the distribution vs. centralization design choice mentioned in Sect. 6.2. So far, only the principles have been agreed. The work on their concretization, operationalization, the support tools and case studies is ongoing.
7.2 Data Security for Interoperability and the Future of the DT As data is used in Digital Twins in increasingly new ways, the underlying IT systems must not only keep up with change, but also offer capabilities to further innovation. However, around 90 per cent of big data is unstructured, and current databases were not built to handle this type of data and this quantity [17]. The currently scalable databases were created for SQL queries within relational database systems, i.e., they were designed for structured datasets, and are limited in terms of the data size they can efficiently handle and their scope of use. Many organisations today rely on third parties for data storage infrastructure, adopting cloud storage solutions. This choice can introduce new problems, such as the growing cost involved in scaling the use, and performance issues when dealing with large amounts of data, as in the larger Digital Twin projects. Security and data privacy risks are increasingly likely, and their impact is very serious, as evidenced by a growing number of high-profile data breaches, such as the HSE breach in Ireland's national health system in May 2021 [37]. When such a breach occurs, the entire database content is under threat, affecting every data user. In order to meet the evolving data needs, data storage solutions need to be efficient, extremely secure, and able to work with the increasingly stringent requirements of a Digital Twin and its environment. Distributed ledger technologies like various blockchain solutions can offer a novel approach to such requirements. The blockchain offers controllability of decentralised systems, instead of the siloed or centralised storage that currently still prevails. A distributed ledger technology allows data to be broken up and stored across a collection of nodes [22] in a secure and anytime-auditable way. This prevents undesired manipulations, and helps increase trust in the data, mostly in terms of integrity and provenance authentication. Potential benefits are advanced security, the elimination of third-party bottlenecks and costs, and the facilitation of new scaling techniques like database partitioning by sharding and data migration by swarming, which become acceptable the moment distribution is not
anymore a carrier of excessive risk. This is an essential step forward towards enabling a new generation of logical, federated and distributed data production, management, and use.
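A minimal sketch of the integrity and provenance idea behind such ledgers: each record stores the hash of its predecessor, so that any later tampering becomes detectable. This is a toy, single-node hash chain written for illustration only, not a distributed ledger implementation.

```python
import hashlib
import json

def add_record(chain: list, payload: dict) -> None:
    """Append a record whose hash covers both the payload and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list) -> bool:
    """Recompute every hash and check the links between consecutive records."""
    for i, rec in enumerate(chain):
        body = {"payload": rec["payload"], "prev_hash": rec["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["hash"]:
            return False
        if i > 0 and rec["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_record(chain, {"sensor": "t1", "value": 21.4})
add_record(chain, {"sensor": "t1", "value": 21.6})
print(verify(chain))                  # True
chain[0]["payload"]["value"] = 99.9   # tamper with an earlier record
print(verify(chain))                  # False: the chain no longer verifies
```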
7.3 Data Exchange and Service Brokering Platforms Many national and international initiatives are ongoing in the attempt to bootstrap a sharing economy. Besides the CDBB initiative already mentioned, the GAIA-X initiative (URL), for example, aims to produce a domain-independent "next generation of data infrastructure: an open, transparent and secure digital ecosystem, where data and services can be made available, collated and shared in an environment of trust". The organisational structure of Gaia-X is built on three pillars: the Gaia-X Association, addressing the policy making, research and institutions at the European and national level; the national Gaia-X Hubs, which serve as information points to support the adopters; and the Gaia-X Community as a community of practice in the ten addressed sectors (agriculture, energy, finance, geoinformation, health, industry 4.0/SME, mobility, public sector, Smart City/Smart Region, and Smart Living). All these application domains are very suitable for Digital Twin adoption, and the GAIA-X aim is to function as an exchange promoter, both for data and for service components. Reaching agreement and maintaining coherence across such a wide range of actors and domains is, however, a serious challenge [16]. Previous initiatives like the Theseus initiative in Germany [6] and state aid projects [14] addressed the exchange goal even by providing new programming or interfacing languages. In Theseus, at the core of the service discovery engine within the broker was the USDL (Unified Service Description Language) [23], a generalization of the WSDL language of that time for the description of semantic web services. Also in that case, however, agreement proved to be difficult, as the national approaches in France and Germany diverged already in the early phases and led to distinct solutions [13].
8 Conclusions In this overview chapter, we described the role and importance of data in connection with a Digital Twin, limiting it on purpose to a high-level summary and a description with examples from the literature. We discussed how data can make the difference between well-functioning Digital Twins that deliver enhanced output and value in an organisation, and Digital Twins where poor quality of data, data management, and data-related design and infrastructure undermine the usefulness to the point of it being noxious. We exposed the need for an effective description of the data, concerning the formalisation of data in and around Digital Twins. We stressed the importance of ontologies and
characterization at different abstraction levels in the physical, data and model layers of the Digital Twin itself. We showed a general framework that users can adopt to describe and analyse the value provided by Digital Twins within an organization in dependence on the data. We provided a high-level, general overview of the dimensions of data-related design and decision making, like appropriate data sources, formats, and quality. The reflection on these many dimensions of data and the related design decisions and management practices has led to a dedicated questionnaire-based survey on Digital Twin data. Preliminary results have been presented in [38], validating the structure and topic coverage. The data collection phase is now ongoing, specifically addressing Digital Twins from the manufacturing sector in Ireland, within the Confirm National Research Centre [10], and the logistics sector in Germany, within the SILE project [43]. Similarly, a large-scale digital thread platform, available across application domains and vendor and technology agnostic, is progressing through a collaborative effort within Confirm [28]. It clearly hinges upon the integration and management of data and a large collection of models. Continued research in this area is vital, as it will provide a multi-perspective and hopefully transdisciplinary approach to the importance of well-described, semantically expressive and interoperable data. Making the data at the core of the Digital Twin a well-managed asset and providing understandable frameworks to all the diverse kinds of users is the key to the success of the Digital Twin technology across the many industries as well as in society. This can only succeed in a cooperative way. Acknowledgements This work was supported by the Science Foundation Ireland grants 13/RC/2094 (Lero, the Irish Software Research Centre) and 16/RC/3918 (Confirm, the Smart Manufacturing Research Centre).
References 1. Bao, Q., Zhao, G., & Yu, Y. (2020). Ontology-based modelling of part digital twin oriented to assembly. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 236(1–2), 16–28. https://doi.org/10.1177/0954405420941160 2. Barth, L. Ehrat, M. Fuchs, R. & Haarmann, J. (2020). ICISS 2020: Systematisation of Digital Twins: Ontology and conceptual framework. In Proceedings of the 2020 The 3rd international conference on information science and system. March 2020 pp. 13–23. https://doi. org/10.1145/3388176.3388209 3. Bazaz Moghadaszadeh, S., Lohtander, M., & Varis, J. (2020). Availability of Manufacturing data resources in Digital Twin. Procedia Manufacturing, 51, 1125–1131. https://doi. org/10.1016/j.promfg.2020.10.158 4. Berners-Lee, T., & Hendler, J. (2001). Publishing on the semantic web. Nature, 410, 1023–1024. https://doi.org/10.1038/35074206 5. Britan, G., & Mehdi, S. (2009). USAID, Office of Management Policy, Budget and Performance (MPBP). Performance monitoring & evaluation tips data quality standards (No. 12, 2nd ed.) http://transition.usaid.gov/policy/evalweb/documents/TIPS-DataQualityStandards.pdf. Accessed on 10 Jan 2022.
6. Bundesministerium für Wirtschaft und Technologie. (2011). Theseus Forschungsprogram (in German). Available: https://www.digitale-technologien.de/DT/Redaktion/DE/Downloads/ Publikation/theseus-forschungsprogramm-broschuere.pdf?__blob=publicationFile&v=7. Accessed on 10 Jan 2021. 7. CDBB, Centre for Digital Built Britain. The Gemini Principles. (2018). Cambridge, UK. https:// www.cdbb.cam.ac.uk/system/files/documents/TheGeminiPrinciples.pdf. Accessed on 3 Nov 2021. 8. Chazelle, B., & Guibas, L. J. (1986). Fractional cascading: I. A data structuring technique. Algorithmica, 1, 133–162. https://doi.org/10.1007/BF01840440 9. Chen, X., Jia, S & Xiang, Y. (2020). A Review: Knowledge reasoning over knowledge graph. Expert systems with applications (Vol. 141, p. 112948). College of electronic and information engineering, Tongji University. https://doi.org/10.1016/j.eswa.2019.112948 10. Confirm: Confirm smart manufacturing – Science Foundation Ireland Research Centre. Homepage: https://confirm.ie/ 11. Conde, J., Munoz-Arcentales, A., Alonso, A., Lopez-Pernas, S., & Salvachua, J. (2021). Modeling Digital Twin data and architecture: A building guide with FIWARE as enabling technology. IEEE Internet Computing, 26(3), 7–14. 12. DAMA. (2021). Mission, vision, purpose and goals, Available: https://www.dama.org/cpages/ mission-vision-purpose-and-goals. Accessed on 1 Dec 2021. 13. Deutsche Welle (DW). (2007). Germany to fund rival to Google search engine, DW. Available https://www.dw.com/en/germany-to-fund-rival-to-google-search-engine/a-2698176. Accessed on 10 Jan 2022. 14. European Commission. (2007). EU clears state aid for German multimedia search engine project, 20 July 2007. Available https://cordis.europa.eu/article/id/28084-eu-clears-state-aid- for-german-multimedia-search-engine-project. Accessed on 10 Jan 2021. 15. Go Fair. (2016). How to go FAIR. Available: https://www.go-fair.org/fair-principles/. Accessed on 24 Nov 2021. 16. Goujard, C. & Cerulus, L. (2021). Inside Gaia-X: How chaos and infighting are killing Europe’s grand cloud project, Politico, 26 Oct 2021. https://www.politico.eu/article/chaos- and-infighting-are-killing-europes-grand-cloud-project/. Accessed on 10 Jan 2022. 17. Griffith, E. (2018). Available: https://uk.pcmag.com/old-news/118459/90-percent-of-the-big- data-we-generate-is-an-unstructured-mess. Accessed on 03 Nov 2021. 18. Groger. C. (2021). There Is No AI Without Data, Communications of the ACM. Available: There Is No AI Without Data | November 2021 | Communications of the ACM. Accessed on 3 Nov 2021. 19. Guarino, N. (1998). Formal ontologies and information systems, In Proceeding 1st International Conference. (FOIS) (pp. 3–15). https://doi.org/10.1109/ACCESS.2020.2989126 20. Hawkins, O., (n.d.) WPP Open Data 2030. Available: https://www.wpp.com/-/media/project/ wpp/images/wpp-iq/pdfs/wpp-data-2030-report.pdf?la=en. Accessed on 30 Oct 2021. 21. Husáková, M., & Bureš, V. (2020). Formal ontologies in information systems development: A systematic review. Information, 11(2), 66. https://doi.org/10.3390/info11020066 22. Javed, I., Alharbi, F., Margaria, T., Crespi, N., & Qureshi, K. (2021). PETchain: A blockchain- based privacy enhancing technology. IEEE Access, 9, 1–1. https://doi.org/10.1109/ ACCESS.2021.3064896 23. Kadner, K., Oberle, D., Schaeffler, M., Horch, A., Kintz, M., Barton, L., Leidig, T., Pedrinaci, C., Domingue, J., Romanelli, M., Trapero, R., & Kutsikos, K. (2011). Unified service description language XG final report. 
Available: https://www.w3.org/2005/Incubator/usdl/XGR- usdl-20111027/. Accessed on 10 Jan 2022. 24. Krima, S., Barbau, R., Fiorentini, X., Rachuri, S., & Sriram, R. (2009). OntoSTEP: OWL-DL Ontology for STEP, NIST Interagency/Internal Report (NISTIR), National Institute of Standards and Technology, Gaithersburg, MD [online]. https://tsapps.nist.gov/publication/ get_pdf.cfm?pub_id=901544. Accessed on 27 Nov 2021.
25. Kupriyanovsky, V., Pokusaev, O., Klimov, A., Dobrynin, A., Lazutkina, V., & Potapov, I. (2016). BIM on the world’s railways – development, examples, and standards. International Journal of open Information technologies, 8(5), 57–80. Available: http://injoit.org/index.php/ j1/article/view/934 26. Liu, M., Fang, S., Dong. H and Xu. C (2021) Review of digital twin about concepts, technologies, and industrial applications: Journal of Manufacturing Systems, 58Part B, 346–361, ISSN 0278-6125, https://doi.org/10.1016/j.jmsy.2020.06.017. 27. Margaria, T., & Schieweck, A. (2019). The Digital Thread in Industry 4.0. In Proceeding IFM 2019, 15th International Conference on Integrated Formal Methods December 2019, Proceedings. https://doi.org/10.1007/978-3-030-34968-4_1 28. Margaria, T., Chaudhary, H. A. A., Guevara, I., Ryan, S., & Schieweck, A. (2021). The interoperability challenge: Building a model-driven digital thread platform for CPS. In Proceeding ISoLA 2021, international symposium on leveraging applications of formal methods, verification and validation. LNCS (Vol. 13036, pp. 393–441). Springer. 29. Market Research (2020) Digital twin market research report. Available: https://www.marketsandmarkets.com/Market-Reports/digital-twin-market-225269522.html. Accessed 29 Oct 2021. 30. Meierhofer, J. & West, S. (2019). Service value creation using a digital twin. In Naples forum of service, service dominant logic, Network and systems theory and service science. Integrating three perspectives for a new service agenda, Ischia (pp. 4–7) June 2019. https:// doi.org/10.1109/SDS49233.2020.00019 31. Minerva, R., Lee, G. M., & Crespi, N.. (2020). Digital Twin in the IoT context: A survey on technical features, scenarios, and architectural models. In Proceedings of the IEEE (pp. 1–40). 10.1109/JPROC.2020.2998530 32. Minerva, R., Awan, F. & Crespi, N. (2021). Exploiting Digital Twins as enablers for synthetic sensing. Available: https://www.researchgate.net/publication/348409194_Exploiting_ Digital_Twins_as_enablers_for_Synthetic_Sensing 33. Minerva, R., Crespi, N., Farahnakhsh, R., & Aqan, F. M. (this volume). Artificial intelligence and the Digital Twin. Progression from data to knowledge. In N. Crespi, A. T. Drobot, & R. Minerva (Eds.), The Digital Twin. Springer. 34. Noy, N. F. & McGuinness, D. L. (2001). Ontology Development 101: A Guide to Creating Your First Ontology (KSL-01-05), Technical report, Stanford Knowledge Systems Laboratory. 35. Nyffenegger, F., Hänggi, R., & Reisch, A. (2018). A reference model for PLM in the area of digitization. IFIP Advances in Information and Communication Technology, 358–366. https:// doi.org/10.1007/978-3-030-01614-2_33 36. Ploennigs, J., Hensel, B., Dibowski, H., & Kabitzsch, K. (2012). BASont – A modular, adaptive building automation system ontology, In IECON 2012 – 38th Annual Conference on IEEE Industrial Electronics Society (pp. 4827–4833). 10.1109/IECON.2012.6389583 37. Reynolds, P. (2021) HSE breach latest in spider’s web of cyber-attacks. Available: https:// www.rte.ie/news/2021/0516/1222004-cyber-attack-health/. Accessed on 2 Nov 2021. 38. Ryan, S., Margaria, T. (2021). Definition, description and formalisation of Digital Twin Data structures. In 37th international manufacturing conference, Irish Manufacturing Council, September 2021. 39. Schleich, B., Anwer, N., Mathieu, L., & Wartzack, S. (2017). Shaping the digital twin for design and production engineering. CIRP Annals, 66(1), 141–144. https://doi.org/10.1016/j. cirp.2017.04.040 40. 
Schmitt, L., & Copps, D. (this volume). Chapter Two. The business of Digital Twins. In N. Crespi, A. T. Drobot, & R. Minerva (Eds.), The Digital Twin. Springer. 41. Schroeder, G. N., Steinmetz, C., Pereira, C. E., & Espindola, D. B. (2016). Digital twin data modeling with automationML and a communication methodology for data exchange. IFAC- PapersOnLine. 2016;49(30):12-17 42. Schweiger, L., Barth, L., & Meierhofer, J. (2020). Data resources to create Digital Twins. In 2020 7th Swiss Conference on Data Science (SDS) (pp. 55–56). 10.1109/SDS49233.2020.00020
43. Sile. (2021). Silicon Economy. Retrieved January 2, 2022. Available from https://www.silicon- economy.com/en/homepage/ 44. Singh, S., Shehab, E., Higgins, N., Fowler, K., Reynolds, D., Erkoyuncu, J. A., & Gadd, P. (2021). Data management for developing digital twin ontology model. Proceedings of the Institution of Mechanical Engineers, Part B. Journal of Engineering Manufacture, 235(14), 2323–2337. https://doi.org/10.1177/0954405420978117 45. Tao, F., Sui, F., Liu, A., Qi, Q., Zhang, M., Song, B., Guo, Z., Lu, S. C.-Y., & Nee, A. Y. C. (2019). Digital twin-driven product design framework. International Journal of Production Research, 57(12), 3935–3953. https://doi.org/10.1080/00207543.2018.1443229 46. WPP Report. (2020). Annual report 2020. Available: https://www.wpp.com/investors/annual- report-2020. Accessed 27 Oct 2021. 47. Hartig, O., & Pérez, J. (2018). Semantics and Complexity of GraphQL. Proceedings of the 2018 World Wide Web Conference. Tiziana Margaria is Chair of Software Systems at the University of Limerick. She has broad experience in the use of formal methods for high assurance systems, in particular concerning functional verification, reliability, and compliance of complex heterogeneous systems. Current application domains are to embedded systems, healthcare, and smart advanced manufacturing. She is Vicepresident of the Irish Computer Society and of IFIP WG10.5. She is a principal investigator of Lero, the Irish research centre on Software, Confirm, the Irish national Research Centre on Smart Manufacturing, of LDCRC, the Limerick Digital Cancer Research Centre, and codirector of the SF Centre of Research Training in AI. Her most recent achievement is the Immersive Software Engineering integrated BSc/MSc, which is a tightly knit ecosystem spanning education, industrial practice and research.
Stephen Ryan is a PhD candidate in the Computer Science department at the University of Limerick, Ireland, with a focus on business modelling and data science. His main area of research is business modelling for supply chains and risk within a business environment. Stephen holds a bachelor's degree in Economics and Finance and an MSc in Computational Finance and Machine Learning, both from the University of Limerick. His industry experience is mainly connected to fintech.
Hybrid Twin: An Intimate Alliance of Knowledge and Data Francisco Chinesta, Fouad El Khaldi, and Elias Cueto
Abstract Models based on physics were the major protagonists of the Simulation Based Engineering Sciences during the last century. However, engineering is focusing more and more on performances. Thus, the new engineering must reconcile two usually opposing requirements: fast and accurate. With the irruption of data, and of the technologies for efficiently manipulating it, in particular artificial intelligence and machine learning, data serves to enrich physics-based models, and the latter allow data to become smarter. When physics-based and data-driven models are combined, within the concept of the Hybrid Twin, real-time predictions are possible while ensuring the highest accuracy. This chapter introduces the Hybrid Twin concept, with the associated technologies, applications and business model. Keywords Model order reduction · Machine learning · Data assimilation · Real-time · Diagnosis · Prognosis · Decision making
1 In Between All Physics and All Data Initially, the industry adopted virtual twins in the form of simulation tools that represented the physics of materials, processes and structures from physics-based models. However, the rather limited computational resources available in small and medium-sized enterprises did not allow industry problems to be simulated as quickly as engineers would have liked. Despite that, these computational tools
transformed the engineering practice to offer optimized design tools and became essential in almost all industries at the end of the twentieth century. In these virtual twins the main protagonist was the so-called nominal model, expected to represent the observed reality, in general calibrated offline from the data provided by specific tests, enabling the prediction of the responses to given loadings, the latter also nominal in the sense that they are expected to represent the ones that the design will experience in service. Thus, their main goal is ensuring the performances related to the expected operational conditions all along the design life. For that purpose, the mathematical models, consisting of quite complex PDEs (partial differential equations), strongly nonlinear and coupled, are discretized, for example by employing the finite element method, which was and remains nowadays the most widely employed numerical technology, despite the fact that in many cases the calculations are very costly in computational resources and computing time. Today we do not sell aircraft engines, but hours of flight; we do not sell an electric drill but good-quality holes, … and so on. We are nowadays more concerned with performances than with the products themselves. Thus, the new needs imply focusing on the real system (instead of on its nominal representation) at time t, subjected to the real loading that it has experienced until the present time (instead of the nominal loading), in order to predict the future responses and in this manner anticipate any fortuitous event, making possible diagnosis, prognosis and the associated adequate real-time decision-making. Here, the usual modeling and simulation techniques are limited: the former because a model is sometimes no more than a crude representation of reality, and the latter because of the computational cost that its solution entails, as previously discussed, preventing the online calibration and the real-time solution of the mathematical models, except when using disproportionate computational resources (HPC, edge and cloud computing, …) that, even if they constitute a valuable option, at present compromise the deployment of simulation-based engineering in small and medium-sized industries, both being the heart of technology innovation, as well as its use in deployed platforms urgently needed in autonomous systems and mobility. It was at the beginning of the twenty-first century that data burst into engineering, when simulation had to abandon the design offices and the research and development departments to go down onto the work-floor, to go out on the street (autonomous mobility systems) and beyond (smart-city, smart-nation, …), accompanying the engineering products throughout their lives, … Engineering is no longer concerned with products but with their performances. In such a setting, the simulation of the twentieth century, with its virtual twins, was not able to provide answers as quickly as required, in real time in many cases, except by using disproportionate computational resources that would compromise its democratization. For years, data was intensively collected and used in other areas where models were less developed or sadly lacking in precision. Nowadays, the data collected massively can be classified, analyzed, interpreted, … using artificial intelligence techniques. Thus, the correlations between the data can be removed, proving that a certain simplicity remains hidden behind a rather apparent complexity.
On top of that, input-output relationships have been set up from data as the sole ingredient. Multidimensional data found ways to reveal itself to us, and massive data was able to provide predictive keys, anticipating fortuitous events and making possible predictive and operational maintenance, monitoring and decision-making, all of them under stringent real-time constraints. Here we are in the realm of digital twins, where physics-based models, forced to choose between precision and speed, were replaced by data-driven decision-making. However, the use of the latter requires adequate learning beforehand, just as the use of physics-based models required extensive discovery and formalization. We must therefore combine virtual twins offline and digital twins online in an entirely new engineering which, as we will see later, remains unsatisfactory.

Digital twins in engineering and industry were quickly confronted with two major recurrent difficulties: (i) the need for huge amounts of data to enable accurate and reliable predictions, without forgetting that data is synonymous with cost (acquisition cost and processing cost); and (ii) the difficulty of explaining the predictions that "artificial intelligence" offers, and with it the difficulty of certifying engineering products and decisions. Indeed, the world of engineering seemed very different from those other areas where data emerges from everywhere, where models, when they exist, remain of dubious validity, and where the consequences of decision-making remain far from the requirements imposed on engineering designs.

At the beginning of the twenty-first century, several scientific revolutions in applied mathematics, computer science (high-performance computing) and computational mechanics pushed forward the limits of numerical simulation. In particular, model order reduction (MOR) techniques emerged. These techniques neither reduce nor modify the model itself; they simply reduce the complexity of its resolution by employing better-adapted approximations of the unknown fields, and thus transform complex and time-consuming calculations into faster computations, sometimes enabling almost real-time responses without significantly impacting their accuracy. In this way, these techniques have completely transformed the traditional approaches based on simulation, optimization, inverse analysis (e.g. calibration), control, and uncertainty quantification and propagation, enabling real-time responses.

MOR techniques express the solution of a given problem (e.g. a PDE) in a reduced basis with strong physical or mathematical content. Sometimes these bases are extracted from solutions of the problem at hand computed offline (e.g. the proper orthogonal decomposition, POD, or the reduced basis method, RB). When operating within the reduced basis, the solution complexity scales with the size of this basis, in general much smaller than the size of the multi-purpose approximation basis associated with the finite element method (FEM), whose size scales with the number of nodes in the mesh that discretizes the domain. Even if the use of a reduced basis implies a certain loss of generality, it enables impressive savings in computing time, and as long as the problem solution continues to live in the space spanned by the reduced basis, the computed solution remains accurate.
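The mechanics of such a reduced basis can be sketched in a few lines. The example below is purely illustrative: the parametric field, the number of snapshots and the basis size are all assumed, and the online reduced solve is replaced by a simple projection onto the basis.

import numpy as np

# Offline stage: snapshots of a parametric 1D field u(x; mu) = sin(mu * x), mu in [1, 3]
x = np.linspace(0.0, np.pi, 200)
mus = np.linspace(1.0, 3.0, 40)
snapshots = np.array([np.sin(mu * x) for mu in mus]).T     # 200 dofs x 40 snapshots

# POD basis: dominant left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5                                                      # assumed size of the reduced basis
basis = U[:, :r]                                           # 200 dofs compressed into r modes

# Online stage (here replaced by a projection): a solution outside the snapshot set is
# sought in the space spanned by the reduced basis
u_new = np.sin(2.17 * x)
coords = basis.T @ u_new                                   # r reduced coordinates instead of 200 dofs
u_rb = basis @ coords
print("relative error of the reduced representation:",
      np.linalg.norm(u_new - u_rb) / np.linalg.norm(u_new))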
Obviously, there is no miracle: as soon as one is interested in a solution that cannot be accurately approximated in the space spanned by that reduced basis, the solution will still be computed fast, but its accuracy is expected to be very poor; there is no free lunch. Since the computational complexity remains low, these MOR procedures can be embedded in light computing devices, as deployed systems. Their main drawbacks are: (i) the limited generality just discussed; (ii) the difficulty of addressing nonlinear models, which requires advanced strategies; and (iii) their intrusive character with respect to existing commercial software.

To circumvent, or at least alleviate, these computational issues, an appealing route consists of constructing the reduced basis and solving the problem simultaneously, as the proper generalized decomposition (PGD) does. However, this option is even more intrusive than the ones referred to above. Non-intrusive PGD procedures were therefore proposed, which construct the parametric solution of a parametric problem from a number of high-fidelity solutions computed offline for different choices of the model parameters. Among these techniques we can mention the SSL-PGD, which considers hierarchical separated bases for interpolating the precomputed solutions, and its sparse counterpart, the so-called sPGD. Once the parametric solution of the problem at hand is available, it can be particularized online for any choice of the model parameters, enabling simulation, optimization, inverse analysis, uncertainty propagation, simulation-based control, …, all of them under the stringent real-time constraint. Papers detailing these techniques are [1–4]; the interested reader can refer to them as well as to the abundant references therein.

Thus, at the beginning of the third millennium a real-time dialogue with physics no longer seemed unattainable. However, despite an enormous success and a certain euphoria, difficulties soon appeared: in many cases, even a continuous calibration of the physics-based model was unable to describe and predict the observed reality with the mandatory accuracy. It became clear that our conceptualization of reality, our models, involves in many cases a non-negligible contribution of ignorance, sometimes epistemic. So the main question came back again: why insist on keeping the old knowledge based on physics? And the answer, in many areas including engineering and manufacturing, was again the one already given: we must keep it because sometimes the data are not abundant, because sometimes we need to proceed in real time, and because in engineering we must explain before we can certify both products and decisions. The route seemed a dead end. But big difficulties sometimes have a simple solution: what if we combine models and data? In between all physics and all data we find the hybrid paradigm. But how does it perform? A nonlinear behavior is often described by a quite good model up to a noticeable gap, a deviation that we also call "ignorance", and this deviation is generally much less rich (less nonlinear) than the solution itself. This reduction in complexity is accompanied by a reduction in the amount of data needed to describe it.
Thus, using a model to describe as much as possible of the real behavior reduces the amount of data needed to enrich that model until it accurately describes the observed reality, that is, to learn the ignorance. The Hybrid Twin (HT) therefore combines two types of models: (i) the first based on physics and continuously calibrated in real time by assimilating the collected data; and (ii) the second a completely new type of model, more pragmatic and phenomenological, built on the fly exclusively from the collected data, more precisely from the discrepancies between the predictions of the calibrated physics-based model and the measured data: the so-called deviation model [5].

The hybrid paradigm (the major protagonist of so-called augmented learning) lies somewhere between physics-based and data-driven engineering. It consists of the usual physics-based engineering complemented with a data-driven enrichment, but without switching to fully data-based engineering, which at present does not seem to be the best option for the reasons previously discussed. However, we cannot conclude whether this hybrid framework will last "ad aeternam" or is simply an intermediate point towards a deeper transformation in which data manipulated by a real "artificial intelligence" will be the only protagonist. Time will tell. The Hybrid Twin has two other added values:
• As a result of the usually smaller nonlinearity of the deviation with respect to the physics-based solution, the hybrid twin in general needs much less data than its fully data-driven counterpart, the so-called digital twin;
• Because engineering designs can nowadays be certified from physics-based models, the contribution of the data-driven enrichment can be seen as a surplus that does not impact the certification process.

The implementation of hybrid twins requires a real-time dialogue with the physics-based model, as well as a robust real-time learning ability to bridge the gap between the predictions of the physics-based models and the measurements themselves, with data less abundant than we would have liked, sometimes very heterogeneous (qualitative, categorical, discrete, emotional, …) and far from devoid of variability, inaccuracies and bias. This new twin needs to be effective in both areas, that of physics and that of data. We have seen above that model order reduction allows physics-based models to be involved in design, as in the past, but now also in online decision-making, without requiring unreasonable computing resources. Machine learning techniques, on the other hand, were not ready to cope with the required processing speed and the lack of data. It was therefore necessary to adapt a number of techniques, and to create others, capable of operating online and even in the presence of very small amounts of data: the so-called "physically informed artificial intelligence" techniques. For that purpose, we have adapted and proposed a number of techniques, which prove their capabilities and performance every day in many industrial applications.
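A deliberately simple sketch of this physics-plus-deviation construction follows (our illustration on synthetic data; the actual deviation models discussed in this chapter are built with the techniques of [6–14], not with a plain polynomial fit).

import numpy as np

# Synthetic "measured" response that the nominal physics-based model only partly captures
t = np.linspace(0.0, 10.0, 200)
reality = 2.0 * t + 0.4 * np.sin(1.5 * t)                  # linear physics + an unmodelled effect

# (i) physics-based model y = k * t, calibrated from the data (least-squares value of k)
k = np.dot(t, reality) / np.dot(t, t)
physics = k * t

# (ii) deviation model: the gap is much "simpler" (smoother, bounded) than the response
# itself, so a low-order data-driven fit of the residual is enough
residual = reality - physics
deviation = np.polynomial.Chebyshev.fit(t, residual, deg=12)

hybrid = physics + deviation(t)
print("max error, calibrated physics alone:    ", np.max(np.abs(reality - physics)))
print("max error, hybrid (physics + deviation):", np.max(np.abs(reality - hybrid)))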
Regarding machine learning, there is neither a consensus nor a universal technique applicable in every scenario; the adequate choice fundamentally depends on the amount of data and the training time available. In our works, different techniques able to proceed online and in the scarce-data limit were proposed [6–14]. Of course, decision-making does not always require building models; often the extraction and subsequent recognition of a "pattern" is enough, and for this purpose AI offers a variety of possibilities, among them: (i) visualization of multidimensional data; (ii) classification and clustering, supervised and unsupervised, where it is assumed that members of the same cluster have similar behaviors; and (iii) model extraction, that is, discovering the quantitative relationship between inputs (actions) and outputs (reactions). These three application domains have acquired some maturity and proved their abilities in numerous success stories. When it comes to knowledge extraction, as well as the need to explain in order to certify, advances are much more limited and major progress is still needed: discarding useless parameters, discovering latent variables whose consideration becomes compulsory for explaining experimental findings, or grouping parameters that act together, like the velocity, viscosity and density that in fluid mechanics combine into the so-called Reynolds number. Discovering equations is a very timely topic because it finally enables transforming data into knowledge. All these elements are subtly integrated into the so-called twins, a sort of "avatar" that allows defining powerful DDDAS (dynamic data-driven application systems), a compulsory protagonist of the new engineering.
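As a minimal illustration of equation discovery (an assumption-laden sketch in the spirit of sparse identification of dynamics, not one of the specific methods referenced above), a small library of candidate terms combined with thresholded least squares recovers a hidden linear system from its trajectory.

import numpy as np

# Trajectory of a system whose equations are unknown to the algorithm:
# dx/dt = -0.5*x + 2.0*y,   dy/dt = -2.0*x - 0.5*y
dt = 0.001
steps = 10000
A = np.array([[-0.5, 2.0], [-2.0, -0.5]])
X = np.zeros((steps, 2)); X[0] = [1.0, 0.0]
for i in range(steps - 1):
    X[i + 1] = X[i] + dt * (A @ X[i])                      # explicit Euler integration
dXdt = np.gradient(X, dt, axis=0)                          # numerical time derivatives

# Library of candidate terms and sequentially thresholded least squares
x, y = X[:, 0], X[:, 1]
theta = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
names = ["1", "x", "y", "x^2", "x*y", "y^2"]
coeffs = np.linalg.lstsq(theta, dXdt, rcond=None)[0]
for _ in range(10):
    coeffs[np.abs(coeffs) < 0.1] = 0.0                     # discard useless terms
    for j in range(2):                                     # refit each equation on its support
        keep = np.abs(coeffs[:, j]) > 0
        coeffs[keep, j] = np.linalg.lstsq(theta[:, keep], dXdt[:, j], rcond=None)[0]

for j, lhs in enumerate(["dx/dt", "dy/dt"]):
    terms = [f"{coeffs[i, j]:+.2f}*{names[i]}" for i in range(len(names)) if coeffs[i, j] != 0.0]
    print(lhs, "=", " ".join(terms))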
2 Hybrid Description of Materials

Material hybrid descriptions involve different functionalities:
• Fast and reliable calibration procedures applied to state-of-the-art constitutive equations, where the problem can be formulated as one of parametric inference. By constructing parametric coupons and the associated parametric solutions corresponding to "any" coupon geometry, "any" virtual test (loading) and "any" material expressible by the considered parametric constitutive equations (i.e. parametric geometry, parametric constitutive laws and parametric loadings), resulting in the so-called material computational vademecum, the model parameters are estimated in almost real time by looking for the parameters that give the best fit to the experimental data (using extended Kalman filters, Bayesian inference, regularized inverse techniques, …).
• When the best fit (best calibration of the state-of-the-art constitutive equations) is not general enough (unable to describe all tests), the gap between the best calibrated model predictions and the measurements reveals an amount of intrinsic "ignorance", sometimes of epistemic nature. In these circumstances, material hybrid descriptions aim to reduce this ignorance by constructing on the fly a data-driven model based on applying physics-aware artificial intelligence to the deviation data. These physics-aware modeling methodologies fulfill fundamental principles (energy conservation, positive dissipation, frame-indifference, convexity when required, and in general all existing well-established knowledge).
• When the two functionalities just described, physics-based and data-driven, are combined, a hybrid material description results: a sort of material Hybrid Twin that can be integrated into a system Hybrid Twin to represent the material throughout its life in operation.
• When materials are not well known, the first step consists of determining from the collected data the intrinsic dimensionality of their behavior (manifold learning), identifying useless parameters, revealing the existence of hidden parameters (internal or state variables, such as the plastic deformation in elastoplasticity), and helping to identify parameters that act in a combined manner (as velocity, density and viscosity combine into the so-called Reynolds number in fluid mechanics). It also helps to determine the best tests, even "the single test", able to extract all the information involved in the data-driven constitutive model.
• Variability is addressed by constructing probabilistic material descriptions, with the parametric uncertainty propagated using probabilistic models (Monte Carlo, polynomial chaos, Gaussian processes, …). It is important to note that, within a hybrid framework, the existing models and knowledge can also serve to filter data and reduce measurement variability, to identify outliers or malfunctioning sensors, and so on. The sensor variability can also be considered to define its own probabilistic parametric regression.
• Data compression using sparse sensing reduces the acquisition rate and improves the testing machine resolution, as well as facilitating data transfer and storage. In the same way, the hybrid description enables data completion, inferring thermomechanical fields far from the sensors' locations (a minimal sketch of this idea is given just after this list).
• The use of space-separated representations (e.g. in-plane/out-of-plane) allows zooming in to capture mechanical confinement effects and localized behaviors, like a numerical microscope.
• The possibility of solving 3D problems with the computational complexity of 2D problems allows the construction of a new kind of enriched shell representation that proceeds from state-of-the-art shell models while keeping the full, rich 3D behavior (plastic striction, out-of-plane stresses and strains, 3D damage, 3D fracture, …) at a 2D computational complexity: the so-called 3D resolution @ 2D cost.
• The adequate description of matter (materials, architectured materials and metamaterials) with advanced techniques based on graphs, nets, persistent homology, …, is opening a more accurate way of relating their intrinsic nature to their mesoscopic and macroscopic properties and performance. The resulting models make it possible to look for the composition and structure providing the best or optimal performance, ensuring so-called materials by design.
• In some circumstances, the description of rich microstructures, as well as of highly fluctuating data series (e.g. profiles of rough surfaces), faces the difficulty of choosing the parameters for their complete and concise description (at a given scale and with a given purpose), as well as the metric of analysis, the Euclidean one often proving inadequate; topological data analysis, discussed at the end of this section, addresses this difficulty.
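A minimal sketch of the data-completion idea mentioned in the list above (entirely synthetic and illustrative; it uses a plain gappy-POD least-squares fit rather than the specific machinery of the chapter):

import numpy as np

# Offline: snapshots of a parametric, temperature-like field provide a POD basis
x = np.linspace(0.0, 1.0, 100)
centers = np.linspace(0.2, 0.8, 30)
snapshots = np.array([np.exp(-((x - c) ** 2) / 0.02) for c in centers]).T
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :8]                                           # assumed: 8 POD modes

# Online: only a handful of sensor readings of a new field are available
rng = np.random.default_rng(1)
sensors = np.sort(rng.choice(x.size, size=12, replace=False))
true_field = np.exp(-((x - 0.37) ** 2) / 0.02)
readings = true_field[sensors]

# Gappy-POD style completion: fit the modal coordinates to the sensors by least squares,
# then reconstruct the full field everywhere, far from the sensors' locations
coords = np.linalg.lstsq(basis[sensors, :], readings, rcond=None)[0]
completed = basis @ coords
print("max completion error away from the sensors:", np.max(np.abs(completed - true_field)))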
This hybrid framework, with all the functionalities just reported, is well suited to describing the behavior of new materials for which the existing physics-based models remain poor or simply do not exist, as well as to enriching descriptions whose models exist but whose predictions differ significantly from the measurements. Here we revisit three applications in the fields of polymers, metals and composites, where the actual behavior is expressed by a model based on material physics, enriched by a data-driven model (which can incorporate physical considerations such as energy conservation, entropy production, objectivity, symmetries, convexity, and other fundamental, sometimes universal, principles).

In [15], stress-strain data associated with a visco-hyperelastic behavior was successfully expressed as a usual hyperelastic behavior (for which many well-established constitutive models exist) enriched with a data-driven component able to represent the original data. When this enrichment is integrated within a thermodynamic setting, the viscous correction is associated with the entropy production and ensures the fulfillment of the thermodynamic laws. The same rationale was used in [16] to express a complex plastic yield surface from a simpler one, such as Hill's, corrected to describe the measurements while enforcing, in that case, the convexity of the yield criterion. A third example addressed the interaction of short fibers in a confined flow (as usually encountered in SMC composite processing) [17]. The fiber orientation kinematics was successfully described by a dilute-suspension flow model (which ignores fiber interactions) enriched from the collected data, in this case synthetic data provided by a fine-scale high-fidelity model of the interacting population of flowing fibers, to generate a data-driven contribution that reflects the fiber interaction effects.
As noted in the last item of the list above, rich microstructures and highly fluctuating data series (e.g. the response of a dynamical system or the profile of a rough surface) are hard to describe with a complete and concise set of parameters, and the usual Euclidean metric of analysis often proves inadequate. As an illustration, take two photos of two trees, calculate the pixel-to-pixel difference and sum the differences in absolute value: the distance between the two photos will be large enough to prevent any conclusion about their proximity. On the other hand, extracting the dimensionality of the manifolds in which the physical quantities live (including the solutions of physics-based models) has led to the conclusion that behind an apparent complexity there often lies an extreme simplicity at coarser scales. Every day we operate very efficiently and rapidly in a complex environment, which proves that coarser scales can be described with a reduced number of explanatory, uncorrelated variables. It is well known that learning consists of stripping the observed reality of its details (relative to the observation scale) and thus retaining only the organized collective effects, known as modes in the jargon.

New techniques based on the topology of data, with the inherent invariance properties that topology provides, are disrupting data processing and the analysis of time series and images. Topological data analysis (TDA), based on persistent homology, has been successfully applied to the analysis of data series, rough surfaces and data filtering (to remove the smallest scales associated with noise). TDA proceeds by constructing from the data the so-called persistence diagram (equivalent to the barcode representation), which reports the birth and death of each topological entity; the entities that persist characterize the data in a topological sense, with the appealing invariance properties just mentioned. The persistence diagram is further manipulated to generate the lifetime diagram and, from it, the persistence images, the latter defined in a vector space that enables their manipulation by any AI-based technique (classification, clustering, regression, …). Persistence images describe particularly well static or time-evolving microstructures related to polycrystals, composites, honeycombs, foams, and so on. The interested reader can refer to [18] and the references therein on the application and use of TDA in the science and mechanics of materials.
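The basic TDA workflow can be sketched as follows. This is an illustrative example on synthetic points; it assumes that the open-source ripser package is available, and the persistence-image step is only indicated in a comment (it would require an additional package such as persim).

import numpy as np
from ripser import ripser        # assumption: the open-source ripser package is installed

# Synthetic "microstructure": points sampled around two circular inclusions, with noise
rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 2.0 * np.pi, size=200)
circle1 = np.column_stack([np.cos(angles), np.sin(angles)]) + rng.normal(0.0, 0.05, (200, 2))
circle2 = 0.5 * circle1 + np.array([3.0, 0.0])
points = np.vstack([circle1, circle2])

# Persistence diagrams up to dimension 1 (H1 features are loops)
diagrams = ripser(points, maxdim=1)["dgms"]
h1 = diagrams[1]
lifetimes = h1[:, 1] - h1[:, 0]

# Long-lived H1 features correspond to the two inclusions; short-lived ones are noise
print("persistent loops detected:", int(np.sum(lifetimes > 0.5)))
# Lifetime diagrams and persistence images (e.g. with the persim package) could then be fed
# to standard classification, clustering or regression tools.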
3 Hybrid Description of Manufacturing Processes

As previously discussed, when models are only a rough representation of the associated physics, a deviation between their predictions and the real evolution acquired from the collected data is to be expected. This deviation is also expected to be biased, beyond the unbiased white noise characterizing the usual fluctuations of model parameters and measurement devices, which is easily addressed with adequate filters. Indeed, the deviation (the gap between the model prediction and the measurements), evaluated for the optimal choice of the model parameters, should be used for the online construction, under the severe real-time constraint, of the data-based correction model (also referred to as the deviation model).
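The distinction between unbiased noise and a biased model deviation can be illustrated with a toy example (ours, on synthetic signals; the filter and the bias are both assumed):

import numpy as np

# Measurements = model prediction + unbiased sensor noise + (from t = 5 on) a growing model bias
t = np.linspace(0.0, 10.0, 500)
prediction = np.sin(t)
rng = np.random.default_rng(2)
measurement = prediction + 0.05 * rng.normal(size=t.size)
measurement[t > 5.0] += 0.2 * (t[t > 5.0] - 5.0) / 5.0

# A simple moving-average filter removes most of the white noise ...
kernel = np.ones(25) / 25.0
smoothed_gap = np.convolve(measurement - prediction, kernel, mode="same")

# ... and what survives the filtering is the biased deviation that the correction model must learn
print("mean |gap| before t = 5 (noise only):    ", np.mean(np.abs(smoothed_gap[t <= 5.0])))
print("mean |gap| after  t = 5 (model deficit): ", np.mean(np.abs(smoothed_gap[t > 5.0])))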
Thus, the four main contributions involved in the Hybrid Twin are:
• the pre-assumed physical contribution, efficiently addressed by using MOR techniques;
• a data-based model of the gap between prediction and measurement;
• external actions to drive the model solution towards the desired target (control);
• the filtering of unbiased noise.

The first two contributions are illustrated in the example that follows, which consists of filling a square mold, depicted in Fig. 1, from its central point. The process is as follows:
• First, the parametric flow solution (for any preform permeability) is obtained by coupling the simulation of the resin flow in ESI PAM-COMPOSITES with the PGD constructor. Once the parametric solution has been computed offline, it can be particularized in real time, as shown in Fig. 2.
• Second, the permeability is identified by comparing the predicted flow front with the real one, recorded with a camera. The identified permeability is the one that, in the parametric model, gives the best fit between the predicted flow-front position and the experimentally recorded one at different filling times (a toy sketch of this identification step is given just after this list).
• By this process the permeability is identified successfully; however, the system ignores the fact that the permeability in the neighborhood of the mold boundary is lower than the identified one. Hence, the model deviates significantly from the measurements when the flow reaches the regions where the permeability is reduced, as Fig. 3 illustrates.
• Finally, as Fig. 4 shows, the predictions become much more accurate as soon as the correction model is added to the calibrated model, which on its own ignores the presence of the permeability deviation.
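The identification step can be sketched with a toy one-dimensional Darcy filling model (all material values are assumed for illustration; this is not the actual PAM-COMPOSITES/PGD workflow):

import numpy as np

# Toy 1D analogue of the identification step: for a constant-pressure filling, Darcy's law
# gives a flow-front position x_front(t) = sqrt(2 * K * dP * t / (mu * phi)).
mu, phi, dP = 0.1, 0.5, 1.0e5               # viscosity, porosity, pressure drop (assumed values)

def front_position(K, t):
    return np.sqrt(2.0 * K * dP * t / (mu * phi))

# "Camera" recordings of the real front at a few filling times (true K = 3e-10 m^2, noisy readings)
times = np.array([5.0, 10.0, 20.0, 40.0])
rng = np.random.default_rng(3)
recorded = front_position(3.0e-10, times) * (1.0 + 0.02 * rng.normal(size=times.size))

# Identification: among the candidate permeabilities, keep the one whose predicted front
# positions best fit the recorded ones (a plain grid search stands in for the parametric model)
candidates = np.logspace(-11.0, -9.0, 400)
misfit = [np.sum((front_position(K, times) - recorded) ** 2) for K in candidates]
print("identified permeability:", candidates[int(np.argmin(misfit))])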
Fig. 1 Square mold filled with an isotropic reinforcement, containing an impermeable square insert (small black square at the top right corner)
Fig. 2 Particularizing the PGD-based mold filling solution, at a given time, for three different values of the permeability: (left) low; (middle) medium; (right) high
Fig. 3 Introducing a permeability reduction in the mold wall neighborhood, in the absence of the data-based deviation model: (left) camera image; (right) model prediction
Fig. 4 Introducing a permeability reduction in the mold wall neighborhood while activating the data-based deviation model correction
4 Hybrid Description of Structures

To illustrate the capabilities of a hybrid twin, consider, for instance, a foam beam, shown in Fig. 5 and analyzed in detail in [19]. We constructed a hybrid twin of the beam with the help of (i) computer vision for data acquisition, (ii) a classical Euler-Bernoulli-Navier (linear) beam model, and (iii) augmented reality for assisting in decision-making processes. The twin is able to detect systematic biases between the measurements (displacements) and the predictions given by the simple model. By employing the sparse PGD (sPGD) [20], the model can be corrected by the twin itself without any human intervention. This
Fig. 5 A hybrid twin for the analysis and control of a nonlinear foam beam [20]
correction, thanks to the sPGD methodology, involves very scarce data and a minimal number of measurements. In addition, the twin performs data assimilation in real time: in this case (see Fig. 5) it is able to locate the position of the load (see the red arrow). The hybrid twin thus works by correcting the assumed model whenever it produces systematic errors.

The Hybrid Twin approach can also be employed in structural mechanics to enrich the considered models locally or globally, and it becomes especially suitable:
• When a model, called nominal and expected to predict the system response from the applied loading, fails to predict the measurements because of a localized deficiency. This is the situation when a model describes the underlying physics accurately enough almost everywhere, except in a small region where reality differs from the assumed model; for example, when localized damage occurs in solid mechanics, degrading the real mechanical properties with respect to those assumed in the nominal model.
• When a model constitutes an approximation of the real system everywhere but is not accurate enough everywhere. This is the case when modeling large structures, where the discretization is too coarse to reflect all the structural details. These details (stiffeners, holes, …) are ignored, and the resulting model can be an approximation (sometimes a crude one) of the real system, valuable for certain applications but insufficient for others.

In the first case, some measurements are available at some locations. On the other hand, the model, described by its stiffness matrix K, offers a prediction of the
displacement field everywhere (from the nodal displacements). When the predictions at the measurement locations differ from the measurements themselves, a model correction C must be added, leading to the corrected model K + C, the first contribution coming from the physics and the second constructed from the discrepancy between the measured displacements and the associated predictions. By parametrizing C on the underlying mesh and looking for the sparsest correction (the damage is assumed to occur locally) while ensuring the equilibrium of the structure, the hybrid paradigm makes it possible to locate the localized correction while extending the displacements measured at a few locations to the entire structure. The procedure thus enables diagnosis (from the displacement discrepancy) and prognosis (from the corrected model). The second scenario is even more challenging, because the model must be corrected (enriched) everywhere, a correction richness that contrasts with the generally scarce data available. In this second case sparsity can be exploited again, as soon as the forces and displacements are each expressed in their own reduced bases.
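A toy illustration of the first scenario (entirely synthetic, and using a brute-force search over a single damaged element as a stand-in for the sparse-regression step described above):

import numpy as np

def chain_stiffness(k):
    # Stiffness matrix of a clamped-free chain: spring i links dof i-1 (or the wall) to dof i
    n = len(k)
    K = np.zeros((n, n))
    for i, ki in enumerate(k):
        K[i, i] += ki
        if i > 0:
            K[i - 1, i - 1] += ki
            K[i - 1, i] -= ki
            K[i, i - 1] -= ki
    return K

n = 10
k_nominal = np.ones(n)
k_real = k_nominal.copy(); k_real[6] = 0.4          # hidden damage in spring 6
f = np.zeros(n); f[-1] = 1.0                        # tip load

u_measured = np.linalg.solve(chain_stiffness(k_real), f)
sensors = [2, 5, 6, 9]                              # displacements measured at a few dofs only
u_nominal = np.linalg.solve(chain_stiffness(k_nominal), f)
print("sensor mismatch of the nominal model:", np.linalg.norm(u_nominal[sensors] - u_measured[sensors]))

# Sparsest correction, here brute-forced: assume a single damaged spring, scan its location
# and severity, and keep the combination that best matches the sensor readings
best = (np.inf, None, None)
for j in range(n):
    for factor in np.linspace(0.1, 1.0, 91):
        k_try = k_nominal.copy(); k_try[j] = factor
        u_try = np.linalg.solve(chain_stiffness(k_try), f)
        err = np.linalg.norm(u_try[sensors] - u_measured[sensors])
        if err < best[0]:
            best = (err, j, factor)
print("damage located in spring", best[1], "with stiffness factor", round(best[2], 2))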
5 Hybrid Description of Complex Systems

In the case of complex systems, the usual modeling frameworks proceed by simplifying the behavior of the components and then describing the system as a collection of interconnected 0D (algebraic) and 1D (ordinary differential equation) models representing the interconnected system components. Such models can be solved very efficiently; however, because of the multiple simplifying hypotheses introduced to reduce complex and rich transient 3D behaviors to 0D or 1D, the computed results can exhibit noticeable discrepancies with respect to the observed (measured) behavior. The computation is very fast, but the results are inaccurate.

Two possible routes exist for alleviating or circumventing this issue. The first consists of using MOR, and more particularly the PGD, to construct a parametric solution of each system component which, in addition to the natural parameters, also considers the component inputs as extra parameters. The component outputs can then be expressed with high accuracy from the component parameters and inputs, and obtained in almost real time as soon as the inputs and component parameters are specified. It then suffices to combine these reduced parametric transfer functions to define complex systems that reconcile efficiency (real-time responses) with accuracy. The second route, more pragmatic and cheaper, consists of proceeding within the hybrid paradigm: the deviation between the response of the dynamical system and the prediction of a coarse model of it is computed, and the time series of this deviation is used to extract a model that gives the deviation at time t + dt from the one at time t. Once this model is extracted, it can be integrated in time, thus reconstructing the correction at each instant.
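A minimal sketch of this second route on a synthetic signal follows (the parameter dependence and the stability of the learned integrator, treated in [21], are ignored here):

import numpy as np

# Reference ("measured") response and a deliberately coarse model of the same system
dt = 0.01
t = np.arange(0.0, 20.0, dt)
z_coarse = np.exp(-0.30 * t) * np.cos(2.0 * t)                   # fast but incomplete 0D model
z_ref = z_coarse + 0.3 * np.exp(-0.10 * t) * np.sin(1.5 * t)     # reality = model + missing physics

# Deviation time series, observed only over a training window (first half of the record)
delta = z_ref - z_coarse
half = t.size // 2

# Learn a linear one-step model delta[k+1] = a1*delta[k] + a2*delta[k-1] (DMD-like regression)
X = np.column_stack([delta[1:half - 1], delta[0:half - 2]])
y = delta[2:half]
a1, a2 = np.linalg.lstsq(X, y, rcond=None)[0]

# Integrate the learned correction forward in time and add it to the coarse model
pred = list(delta[:half])
for k in range(half, t.size):
    pred.append(a1 * pred[-1] + a2 * pred[-2])
z_hybrid = z_coarse + np.array(pred)

print("max error, coarse model alone:            ", np.max(np.abs(z_ref[half:] - z_coarse[half:])))
print("max error, hybrid with learned correction:", np.max(np.abs(z_ref[half:] - z_hybrid[half:])))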
Figure 6 sketches this procedure: the correction model, which computes the time evolution of the correction of the state "z", is obtained from the data and, when added to the coarse physics-based model, yields a prediction in almost perfect agreement with the recorded data. However, such a model must handle the dependence on the parameters as well as ensure the stability of the resulting integrator; both issues were addressed in [21].

The same rationale was applied to the multi-scale description of Li-ion batteries; a sketch of this approach is presented in Fig. 7. In the case of batteries, the estimation and regulation of the state of charge remains a key issue of major relevance in the era of the electric car. In this field, the complexity of the electrochemical models, involving hundreds of hardly accessible parameters, the multi-scale physics (where solid active particles are immersed in an electrolyte at the cell scale, the cells in turn composing the whole battery), the strongly coupled multiphysics (electrochemistry, transport phenomena, …) and the complexity of the modeling and of its efficient simulation continue to limit the capability of evaluating the state and anticipating the battery behavior and performance. State-of-the-art battery models consist of a system of coupled, parametric, embedded partial differential equations, embedded in the sense that at each material point of the cell-scale model other partial differential equations represent the physics at the lowest scale, that of the active particles, thus bridging the different scales.
Fig. 6 Sketch of the hybrid parametric integrator
Fig. 7 Batteries Hybrid Twin
The difficulties encountered are multiple: (i) the models are strongly coupled and involve too many parameters, most of them depending on the solution itself (nonlinear behaviors), and one can moreover expect reality to remain more complex than the models expected to describe it, models which in most cases were simplified for the sake of tractability; (ii) it is difficult to finely characterize the physics involved by accurately determining all those numerous parameters; and (iii) the numerical treatment of such models makes their solution difficult on deployed platforms under real-time constraints. It is in this scenario that hybrid twins become an appealing route, making use of the physics-based models just discussed, whose parameters can be updated online from the collected and assimilated data. To speed up the solution and enable real-time data assimilation, the parametric models are solved offline using the PGD. Then, the deviations observed between the model predictions and the measurements are corrected by computing online, exclusively from the collected data, a data-based model which, added to the physics-based one just described, accurately describes the measurements. The data-based model is constructed using appropriate model learners widely employed for machine learning purposes. Once the physics-based and data-based models are combined, the system becomes predictable and control strategies can be envisaged. Thus, the battery hybrid twin allows efficient (accurate and real-time) monitoring, prediction and control [22].
6 Hybrid Twin™: Implementation and Application in the Manufacturing Industry

A compound annual growth rate (CAGR) of about fifty percent is forecast for the twin market (digital, virtual, hybrid, …) over the period 2021–2030, reaching roughly 200 billion USD. Currently, the manufacturing sector leads this market with the highest revenue. We will try to summarize the main drivers of this rapid deployment.

As we have seen, the hybrid approach is a typical combinational innovation resulting from federating different technologies (data, simulation, AI/ML, high-performance computing, IoT, 5G, cloud-edge computing, VR/AR, …). This holistic approach is particularly well adapted to the manufacturing industry. Experience with early adopters demonstrated the ability to overcome most of the current shortcomings and to deliver higher value than the simple addition of the benefits of each individual technology. It showed that the HT is an adequate, pragmatic path for Industry 4.0 and beyond, including for SMEs. It enables step-by-step implementation, with significant benefits at each stage, lowering the technology barriers (investments, CAPEX and OPEX, knowledge and competences, …) and consequently the risks of a sustainable digital transformation covering the main business success drivers: performance, process and people.

Ultimately, it provides a consistent, coherent process connecting the three major phases of the manufacturing supply chain: process design taking into account the business requirements (for OEMs and suppliers), validation on prototypes and pilot lines, and production at the target performance. This overcomes the shortcomings of today's disconnected business processes, with their significant value leakage (cost, delay, …), and radically improves the interactions throughout the supply chain, based on reliable, up-to-date information and insights, with dynamic adjustment to the real performance outcomes.
Enabling a virtuous cycle of efficient feedback also improves the efficacy of the overall process, both upstream and downstream, with continuous learning and knowledge upgrading. This is particularly important in the context of quicker learning and adaptation to cope with the new manufacturing challenges: tougher competition, new regulations (decarbonization, reduced environmental impact, …), new opportunities related to greener and recycled materials (with higher material heterogeneity and variation) and circular-economy business models.

Technology-wise, data alone (the digital twin) and simulation alone (the virtual twin) can each be considered limited; in the hybrid approach we therefore combine both. The data covers past and present experience to anticipate potential problems, and it is also used to validate the simulation models and fine-tune them to the specific manufacturing context. In the case of a new event, never experienced before and hence outside the data, the simulation models are crucial to assess the root cause of the divergence, to explore alternative solutions and to optimize the corrective measures, quickly and easily (as introduced above). This has a direct impact on agility and quality: real-time monitoring, control and correction of the continuous process enable corrective measures to be implemented directly by the operator from the early signals of divergence, reducing downtime, defects and scrap rates, thanks to the agility needed to quickly adjust production to the requirements within the target quality. Current experience has demonstrated that the HT approach can be implemented and easily adjusted to various production models: discrete and continuous manufacturing, mass/volume production or customized products, meeting the expected performance improvements in terms of production time availability (no downtime, …) based on optimal, predictive/preventive maintenance.

Finally, it is important to mention the human and social factor. The evolution of the manufacturing sector towards digital transformation is radically changing the perception of the sector: it is regaining attractiveness as a modern opportunity for work, with good business sustainability and a positive social impact.
References
1. Chinesta, F., Keunings, R., & Leygue, A. (2014). The proper generalized decomposition for advanced numerical simulations. A primer (SpringerBriefs). Springer. https://www.springer.com/gp/book/9783319028644
2. Chinesta, F., Huerta, A., Rozza, G., & Willcox, K. (2015). Model order reduction. In E. Stein, R. de Borst, & T. Hughes (Eds.), Encyclopedia of computational mechanics (2nd ed.). Wiley. https://doi.org/10.1002/9781119176817.ecm2110
3. Borzacchiello, D., Aguado, J. V., & Chinesta, F. (2019). Non-intrusive sparse subspace learning for parametrized problems. Archives of Computational Methods in Engineering, 26(2), 303–326. https://doi.org/10.1007/s11831-017-9241-4
4. Chinesta, F., Leygue, A., Bordeu, F., Aguado, J. V., Cueto, E., Gonzalez, D., Alfaro, I., Ammar, A., & Huerta, A. (2013). Parametric PGD based computational vademecum for efficient design, optimization and control. Archives of Computational Methods in Engineering, 20(1), 31–59. https://doi.org/10.1007/s11831-013-9080-x
5. Chinesta, F., Cueto, E., Abisset, E., Duval, J. L., & El Khaldi, F. (2020). Virtual, digital and hybrid twins. A new paradigm in data-based engineering and engineered data. Archives of Computational Methods in Engineering, 27(1), 105–134. https://doi.org/10.1007/s11831-018-9301-4
6. Lopez, E., Gonzalez, D., Aguado, J., Abisset-Chavanne, E., Cueto, E., Binetruy, C., & Chinesta, F. (2018). A manifold learning approach for integrated computational materials engineering. Archives of Computational Methods in Engineering, 25(1), 59–68. https://doi.org/10.1007/s11831-016-9172-5
7. González, D., Chinesta, F., & Cueto, E. (2019). Thermodynamically consistent data-driven computational mechanics. Continuum Mechanics and Thermodynamics, 31, 239–253. https://doi.org/10.1007/s00161-018-0677-z
8. Ibanez, R., Abisset-Chavanne, E., Aguado, J. V., Gonzalez, D., Cueto, E., & Chinesta, F. (2018). A manifold-based methodological approach to data-driven computational elasticity and inelasticity. Archives of Computational Methods in Engineering, 25(1), 47–57. https://doi.org/10.1007/s11831-016-9197-9
9. Quaranta, G., Duval, J. L., Lopez, E., Abisset-Chavanne, E., Huerta, A., & Chinesta, F. (2019). Structural health monitoring by combining machine learning and dimensionality reduction techniques. Revista Internacional de Métodos Numéricos, 35(1). https://www.scipedia.com/public/Quaranta_et_al_2018a
10. Ibanez, R., Abisset-Chavanne, E., Ammar, A., González, D., Cueto, E., Huerta, A., Duval, J. L., & Chinesta, F. (2018). A multi-dimensional data-driven sparse identification technique: The sparse proper generalized decomposition. Complexity, Article ID 5608286. https://doi.org/10.1155/2018/5608286
11. Moya, B., Gonzalez, D., Alfaro, I., Chinesta, F., & Cueto, E. (2019). Learning slosh dynamics by means of data. Computational Mechanics, 64, 511–523. https://doi.org/10.1007/s00466-019-01705-3
12. Argerich Martín, C., Ibáñez Pinillo, R., Barasinski, A., & Chinesta, F. (2019). Code2Vect: An efficient heterogenous data classifier and nonlinear regression technique. CRAS Mécanique, 347(11), 754–761. https://doi.org/10.1016/j.crme.2019.11.002
13. Montáns, F. J., Chinesta, F., Gómez-Bombarelli, R., & Kutz, J. N. (2019). Data-driven modeling and learning in science and engineering. CRAS Mécanique, 347(11), 845–855. https://doi.org/10.1016/j.crme.2019.11.009
14. Reille, A., Hascoet, N., Ghnatios, C., Ammar, A., Cueto, E., Duval, J. L., Chinesta, F., & Keunings, R. (2019). Incremental dynamic mode decomposition: A reduced-model learner operating at the low-data limit. CRAS Mécanique, 347(11), 780–792. https://doi.org/10.1016/j.crme.2019.11.003
15. Gonzalez, D., Chinesta, F., & Cueto, E. (2019). Learning corrections for hyperelastic models from data. Frontiers in Materials – Section Computational Materials Science, 6(14).
16. Ibanez, R., Abisset-Chavanne, E., Gonzalez, D., Duval, J. L., Cueto, E., & Chinesta, F. (2019). Hybrid constitutive modeling: Data-driven learning of corrections to plasticity models. International Journal of Material Forming, 12, 717–725.
17. Yun, M., Argerich, C., Gilormini, P., Chinesta, F., & Advani, S. (2020). Predicting data-driven fiber-fiber interactions in semi-concentrated flowing suspensions. Entropy, 22(30).
18. Frahi, T., Chinesta, F., Falco, A., Badias, A., Cueto, E., Choi, H. Y., Han, M., & Duval, J.-L. (2021). Empowering advanced driver-assistance systems from topological data analysis. Mathematics, 9(6), 634.
19. Moya, B., Badías, A., Alfaro, I., Chinesta, F., & Cueto, E. (2020). Digital twins that learn and correct themselves. International Journal for Numerical Methods in Engineering. Accepted for publication.
20. Ibañez, R., Abisset-Chavanne, E., Ammar, A., González, D., Cueto, E., Huerta, A., Duval, J. L., & Chinesta, F. (2018). A multi-dimensional data-driven sparse identification technique: The sparse proper generalized decomposition. Complexity, Article ID 5608286.
21. Sancarlos, A., Cameron, M., Le Peuvedic, J.-M., Groulier, J., Duval, J.-L., Cueto, E., & Chinesta, F. (2021). Learning stable reduced-order models for hybrid twins. Data-Centric Engineering, 2, e10.
22. Sancarlos, A., Cameron, M., Abel, A., Cueto, E., Duval, J.-L., & Chinesta, F. (2021). From ROM of electrochemistry to AI-based battery digital and hybrid twin. Archives of Computational Methods in Engineering, 28, 979–1015.

Francisco Chinesta is currently a full professor of computational physics at the ENSAM Institute of Technology (Paris, France), Honorary Fellow of the Institut Universitaire de France (IUF) and Fellow of the Spanish Royal Academy of Engineering. He is the president of the ESI Group scientific committee and director of its scientific department. He held the AIRBUS Group chair professorship (2008–2012) and, since 2013, holds the ESI Group chair professorship on advanced modeling and simulation of materials, structures, processes and systems. He has received many scientific awards (among them the IACM Fellow award, the IACM Zienkiewicz award and the ESAFORM award), with his main research in the development of technologies for model order reduction and engineered artificial intelligence. He is the author of more than 350 papers in peer-reviewed international journals and more than 900 contributions in conferences. He was president of the French association of computational mechanics (CSMA) and is director of the CNRS research group (GdR) on model order reduction techniques in engineering sciences, as well as editor and associate editor of many journals. He has received many distinctions, among them the Academic Palms and the French Order of Merit, in 2018 the Doctorate Honoris Causa of the University of Zaragoza (Spain), and in 2019 the Silver Medal of the French CNRS. He is at present the director of the CNRS@CREATE – NRF research program on intelligent modelling for decision making in critical urban systems (35 M€ and more than 160 involved researchers).
Fouad El Khaldi is currently Director of Industry Strategy & Innovation at ESI Group, with a special focus on co-creation innovation projects, and has 35 years of experience in the field of computer-aided engineering (CAE) research and industry applications. He received his Dr.-Ing. from INSA-Lyon, France, in 1986. He is a member of several associations, industry innovation networks and public-private partnerships:
• Technology Platform for High Performance Computing – Member of the Steering Board
• European Big Data Value Association
• European Green Vehicle Initiative Association – Member of the Industry Delegation
• European Factory of the Future Research Association
• French innovation cluster (Paris region) SYSTEMATIC – Member of the Board
• European Automotive Research Partnership Association
• ISC High Performance Computing – Member of the Steering Committee
• ASSESS international network focused on CAE innovation and the Digital Twin – Member of the Advisory Committee
He has authored or co-authored numerous technical papers on CAE innovation and the industrial application of virtual prototype testing in the engineering and manufacturing domains, and holds several patents in the CAE and virtual prototyping field.

Elias Cueto is a professor of continuum mechanics at the University of Zaragoza. His research is devoted to the development of advanced numerical strategies for complex phenomena. In particular, in recent years he has worked on model order reduction techniques and real-time simulation for computational surgery and augmented reality applications. His work has been recognized with the J. C. Simo award of the Spanish Society of Numerical Methods in Engineering, the European Scientific Association of Material Forming (ESAFORM) Scientific Prize, and the O. C. Zienkiewicz prize of the European Community on Computational Methods in Applied Sciences (ECCOMAS), among others. He is a fellow of the EAMBES and IACM societies and president of the Spanish Society of Computational Mechanics and Computational Engineering (SEMNI).
Artificial Intelligence and the Digital Twin: An Essential Combination Roberto Minerva, Noel Crespi, Reza Farahbakhsh, and Faraz M. Awan
Abstract This chapter addresses the value and the synergies of combining Artificial Intelligence technologies with Digital Twins. We begin with a high level, but comprehensive review of AI technologies that may be important for Digital Twins. Then we examine the relationship of AI techniques and methods to the progression of capabilities for Digital Twins in general. The properties and architecture of Digital Twins are then analyzed and related to major AI techniques. The objective is to identify those AI technologies that provide a unique advantage in the design and implementation of Digital Twins. The impact on the Digital Twin’s ability to meet end use requirements is illustrated through a simple use case. This points to the importance of including AI from the beginning in the design and construction of Digital Twins. In considering additional use cases we map how AI can be applied to Digital Twins in more complex situations. Finally, we provide general guidelines for inclusion of AI to be incorporated during the design phase of a Digital Twin. This includes the use of AI within the Digital Twin itself and in operations where the context is the larger system that the Digital Twin supports.

Keywords Algorithms · Artificial intelligence · Artificial neural networks · Computational intelligence · Data management · Decision support systems · Decision trees · Digital twin · Digital twin architecture · Digital twin use case · Inference engine · Machine learning · Ontologies · Reasoning systems · Rule based systems · Situation awareness
1 Introduction

Digital Twin design and development is deeply intertwined with the ability to model and recreate a physical artifact as a software counterpart [38]. The starting point for creating an effective representation of a physical artifact is access to relevant data. The data supports the modelling of the behavior of the physical entity within the environment in which it operates. The integration of modelling with AI technologies enables new ways of determining the optimal processes and actions that the physical object executes. This must also account for the constraints and events in the physical environment (situation awareness) that the object experiences. AI is crucial in this context because it can provide answers and suggest courses of action where other methods have failed in the past. The exploitation of AI techniques in combination with Digital Twin models can reinforce traditional methods and provide far better results by reconciling actual data with the representation of the physical object. The physical object/artifact can thus be contextualized within a digital representation of the environment in which it operates. This implies that the Digital Twin must not only model the artifact, it must also account for a well-defined model of the operational environment. Further, the interplay between the predictions that the Digital Twin makes and the observations must be accounted for, and the models modified or corrected accordingly. That means that the digital/software counterpart must include in its architecture interfaces for dynamic programmability. The software representation of the object can be tuned and refined to better represent the behavior of the physical entity within the digital models of the environment and of the operator. It is important to emphasize that the software representation has constraints originating from the known (real) behaviors of the physical counterpart and from "selected" real-world constraints represented by the physical object and the models of the environment (e.g., the laws of physics and the limitations of the capabilities of the physical object).

Simple physical objects can have "simple" representations, but when complex physical objects are represented, the size of the necessary data and the number of relationships considered are invariably overwhelming for humans. As an example, the average car contains 30,000 components and parts. A Digital Twin of a car should keep track of all these components, how they fit together and what relationships they have to each other. In addition, there are innumerable dependencies and constraints (in manufacturing, a part cannot be mounted before another part is in place). The number of relationships and constraints can explode exponentially if the behavior of the car on the street is also considered. The complexity of the "real" physical object is systemic. The framework used to design, build, and operate the physical object captures this complexity. It includes some, but not all, of the situational contexts and conditions in which the object will operate during its lifecycle. Artificial Intelligence is a powerful tool for supporting humans in coping with the complexity of the Digital Twin representation necessary for many applications. AI makes it possible to consider a much wider range of conditions and situations in which the object operates than are commonly accounted for in today's practice, where this level of complexity is difficult to handle.
Parametrization and programmability of the Digital Twin (DT) may be used to dynamically represent, detect and study the expected behavior of the physical object as it operates in a wide range of environmental conditions, some of them extreme in nature. Figure 1 represents the relationships between a physical object immersed in its actual environment and the representations of the Digital Twin within parameterized software models of that environment. There are at least three different views of the Digital Twin that can be addressed by Artificial Intelligence technologies and tools:
• The Resources: self-management, monitoring, prediction, and intelligence of the individual components of physical artifacts (examples are sensors, actuators, controls, …).
• The System: how the Digital Twin system supports the consistency and operation of all the DT sub-systems, ensuring that the relevant properties of a DT are fulfilled and its constraints complied with (examples are the correct execution of models, the accurate correspondence between real-world data and the data used by the DT, the synchronization of the DT with the real object, …).
• The Application Domain: the properties, models, predictions, and intelligent functions associated with the specific application domain to be supported by the DT system.

Figure 1 represents a simplified situation in which a physical object is represented by a single Digital Twin within a well-defined context. In realistic situations the Digital Twin will usually represent components and parts of the physical object, each of which will itself be represented as a Digital Twin. We use the term composite or aggregated Digital Twin to differentiate them from a monolithic Digital Twin. The aggregation of basic Digital Twins to form and represent a composite Digital Twin is fundamental.
Fig. 1 The relationships between physical objects and their Digital Twin representations
Digital Twins are constructed by the aggregation of basic DTs that expose two essential facets: (1) the representation of the basic DT's features, operations, and management capabilities; and (2) how the basic DT relates to other aggregated DTs and the constraints under which those relationships operate (a minimal data-structure sketch of this idea is given at the end of this section). Artificial Intelligence is a powerful tool for orchestrating, managing and operating composite/aggregated DTs. This requires the definition of methods and technologies for integrating and controlling the multitude of basic DTs that operate in different environments and that have different constraints. Without AI, reconciling the dynamics of different DTs may become an impossibly (or at least unrealistically) complex task. The eventual choice as to how AI is used ranges between a centralized architecture and an appropriate distribution of functionality across the DTs in the aggregation. One of the values of incorporating AI in Digital Twins is that it helps humans/operators focus on the important tasks identified through abstraction, leaving the details to the AI algorithms. This is important because comprehending a complete DT representation may be beyond the capability of humans. AI is also critical for overseeing repetitive and high-precision tasks, and tasks that are time sensitive and happen on scales incompatible with human response times. The burden this creates for AI is the necessity to validate, verify and test that it is acting properly.

A Digital Twin can also be classified by the level of behavioral intelligence that is needed to achieve the goals of its application. A DT could be Passive if it reflects the physical object under static conditions in a specifically defined environment; in this case, the DT is a means to represent the physical object, and possibly some of its dynamics, in order to study or observe its current state. Another type is the Predictive DT, capable of forecasting with high accuracy the future state of the physical object and of the related environment. This may be useful for anticipating critical conditions or events, and thus offers opportunities to mitigate problems through specific actions or management policies; this type of DT can exploit a very rich toolkit of analytical, statistical, and AI methods. The next category is the Reactive DT, capable of adapting to changes in status and in the environment by diagnosing conditions, taking corrective actions, or requesting an action by an operator. These reactions and adaptations are performed to prevent the physical object from breaking down, or to maintain its operational status with a high degree of confidence; the DT reacts to changes in the environment and executes preset policies for adapting to changing conditions. The final category is the Proactive DT, capable of understanding its situation, of "reasoning" about what course of action to take, and of adapting within its current condition to reach prescribed goals autonomously, essentially without human intervention. We also refer to the Proactive DT as an Autonomic DT. This highest-capability DT uses reasoning, situational awareness, and advanced AI technologies to fully comprehend and adapt to conditions and the environment, and it is responsible for managing the policies and actions for achieving the prescribed goals and objectives.

In summary, there are four different types of Digital Twins with respect to the level of "autonomy" that AI and other techniques can provide:
–– Passive Digital Twin: a solution focused on representing the status of a physical object based on monitoring and digital simulations of the object.
Representation and simulation are essential here, and they can be based on or enhanced by AI/ML technologies. The possibility of placing the DT in a different environment requires especially high levels of modelling and constraint representation.
–– Predictive Digital Twin: a solution that considers the available data and, by exploiting time series and learning technologies, offers the capability of predicting the future states of a physical object (and of issuing alerts if appropriate). Interaction with humans, or at least with management systems, can be beneficial for exploiting this capability.
–– Reactive Digital Twin: a solution that exploits reasoning and knowledge representation to determine the current and future situation in which a physical object is operating, and consequently determines and executes policies to mitigate undesirable effects on the physical object. This type of DT helps humans to have a clear view of the issues in a situation and to prepare to react to them in an intelligent manner.
–– Autonomic Digital Twin: a solution that aims at building DTs capable of fully understanding the current situation and its future evolutions, and of autonomously preparing and executing plans to reach their goals within a changing environment, without any human intervention.
2 Artificial Intelligence for Digital Twin

While Artificial Intelligence has been mentioned several times in the previous section, a discussion of AI technologies and their evolution is out of the scope of this chapter. Interested readers may refer to the general literature on the subject, such as books and comprehensive surveys about Artificial Intelligence, Machine Learning, and Knowledge-Based Systems [17, 34, 36, 40, 49]. What matters here is to frame the technologies and applications of AI that can be usefully exploited in combination with Digital Twins. The use of Artificial Intelligence techniques to provide functionality and manage the behavior of the Digital Twin is an initial step for understanding and exploiting data from the real world and enabling the DT to fulfill its requirements. As a second step, we are concerned with how well the Digital Twin represents its corresponding entity and how well it captures its dynamic behavior. Reasoning and learning capabilities provided by AI can be exploited to better reconcile the models and representations used by the Digital Twin with observations. The next issue is how different AI techniques can best be used in combination with other methods to support the different categories of Digital Twins: Passive, Predictive, Reactive, and Autonomic. In this section, Artificial Intelligence for the DT is the set of technologies, algorithms, and solutions capable of supporting the four categories of DTs, i.e., the means to intelligently perform the tasks needed for reaching the goals of the DT. Artificial Intelligence is a large realm from which to select different (often overlapping) techniques in order to support the different types of Digital Twin. Figure 2 represents the major AI techniques and relates them to the identified types of Digital Twin.
Fig. 2 AI technologies supporting the different types of Digital Twin: data analytics and model validation/consolidation for the Passive DT; machine learning, predictive analysis, Bayesian networks, ANNs, and computational intelligence for prediction in the Predictive DT; decision making, intelligent agents, inference engines, and problem solving for the Reactive DT; and cognitive intelligence for intelligent and autonomous behavior in the Autonomic DT
Statistical and data analytics technologies can support the validation, tuning, and improvement of the modelling behind the definition of the Digital Twin. The model of the DT designed and implemented at the definition stage can be validated and improved using data from the real world. In addition, if adaptation and parametrization are part of the modelling design, the real data, their analysis, and their manipulation can contribute to tuning and improving the alignment of the model to the real world. Many AI techniques are applicable to supporting the prediction of events in a system: Machine Learning, Artificial Neural Networks, and Computational Intelligence (e.g., fuzzy logic, genetic algorithms, and others) can fulfill the "predictive" needs of many applications of the DT. In these cases, the exploitation of real-world historical data sets related to specific problem domains can be the basis for developing Predictive DTs capable of forecasting the evolution of the system and taking actions in advance. On another side of the AI world, there are techniques developed over the years to help in decision-making. Techniques such as decision trees, recommendation engines, intelligent agents, as well as fuzzy logic and others, are instrumental in automatically providing indications about the actions the DT (and the physical object) should take in order to reach its objectives in the face of particular events and situations. A Reactive DT could be built around the implementation (and even the integration) of one or more of these techniques. A less explored, but quite promising, type of Digital Twin is the Autonomic one. It must be capable of reasoning mechanisms for understanding the situation and of actuating policies in order to achieve its own objectives.
In this case, reasoning (e.g., deduction, induction, abduction), abstraction, and contextualization (i.e., the ability to focus on the relevant facets of the current situation and their impact on the context/environment in which the DT is operating) are capabilities referring to Cognitive Intelligence. They are needed in order to allow the DT to act autonomously in the environment, taking the right decisions to reach its own goals. This is not an exhaustive or complete analysis of the possibilities offered by the integration of AI into the Digital Twin, but it is a useful schema for understanding how to exploit the techniques in relation to the type of DT that needs to be developed. In summary, Artificial Intelligence from the Digital Twin point of view is a set of techniques, algorithms, and solutions that enable the Digital Twin to reach the level of "intelligence" needed to fulfill its goals (passive, predictive, reactive, autonomic, or a mix of them).

The exploitation of AI techniques comes with a price: computational complexity. Many AI algorithms are complex and require operating on large data sets (many operations), with many intermediate steps (e.g., layers in Neural Networks), and with the need to spend considerable time on "training" to increase the accuracy of the solution. The "Big O" analysis of complexity depends on the particular algorithm chosen (e.g., for Support Vector Machines, SVM [1]; for the multilayer perceptron neural network [54]; for random forests [32]). From a practical point of view, it is important to evaluate the complexity of the candidate AI algorithms to be exploited in a DT solution. This can be done by analyzing the size of the needed data set, the features that will be selected for analysis and for determining the prediction or decision, and the complexity of the algorithm itself, in order to understand whether the solution is practicable in terms of computing power and time to get a reply. An accurate algorithm that takes too long with respect to the reaction time expected of a DT is obviously impracticable. As an example, consider the pipeline for the definition, creation, tuning, and usage of a Neural Network; it could be used for predicting the behavior of a DT given a time series of relevant data (Fig. 3). The choice of the algorithm depends on the problem to be solved: many AI algorithms can provide results of different accuracy and quality, but also with greater or smaller consumption of computing resources and shorter or longer processing times. The selection of the algorithm should be a fundamental step, made with precise requirements in mind with respect to computing infrastructure and responsiveness. The algorithm itself needs to be tuned and specialized; for instance, the number of nodes and the number of layers in a Neural Network are essential characteristics with an impact on the accuracy and on the processing requirements. In addition, feature selection (i.e., choosing the input data that need to be correlated) is another aspect that can add to the complexity of the solution; having more or fewer features also has an impact on the accuracy of the prediction and results. Training can also be a very time-consuming (and complex) activity. Here too, the complexity of the algorithm, the size of the data sets, and the essential features to relate affect the overall accuracy of the solution as well as its training and execution time.
The requirements of the DT application in terms of responsiveness should be adequately understood well in advance, in order to avoid ending up with an accurate prediction that is not responsive enough, or with an inaccurate prediction that merely has a shorter response time.
Fig. 3 A machine learning pipeline and its design, testing, tuning, and execution processes: raw data, curation/preprocessing, feature selection, algorithm and parameter tuning, training, execution, and results (prediction)
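To make the pipeline of Fig. 3 concrete, the sketch below assembles a minimal scikit-learn version of it on synthetic data and times both training and a single prediction. The data set, the feature counts, and the network size are illustrative assumptions, not values taken from a real DT deployment.

```python
# Minimal sketch of the Fig. 3 pipeline (illustrative, synthetic data only).
import time
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.neural_network import MLPRegressor

# "Raw data": a stand-in for historical sensor readings of the physical object.
X, y = make_regression(n_samples=5000, n_features=20, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("preprocess", StandardScaler()),                      # curation / preprocessing
    ("select", SelectKBest(f_regression, k=8)),            # feature selection
    ("model", MLPRegressor(hidden_layer_sizes=(32, 16),    # algorithm and parameter tuning
                           max_iter=500, random_state=0)),
])

t0 = time.perf_counter()
pipeline.fit(X_train, y_train)                             # training
training_time = time.perf_counter() - t0

t0 = time.perf_counter()
pipeline.predict(X_test[:1])                               # execution: one prediction
latency = time.perf_counter() - t0

# Both figures would be compared against the responsiveness budget of the DT application.
print(f"training: {training_time:.2f} s, single prediction: {latency * 1e3:.2f} ms")
```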
From an infrastructural perspective, the coupling of modelling means and AI technologies requires a large processing infrastructure in order to keep up with the expected results. Even if processing power is largely available, the complexity of the DT modelling and the application of AI algorithms can require a very large processing infrastructure. In [7], the authors analyzed the processing resources needed to create a sort of predictive digital twin of the meteorological data for the entire Earth. Their assessment is that, in spite of the rapid evolution of hardware and software technologies and improvements in middleware and algorithms, there is still a considerable gap between the technological needs of a complex Digital Twin representation and the available processing infrastructure.

As an additional research path, AI techniques, for instance those for the classification of objects, can be the basis for dynamically creating basic Digital Twins of "dumb" physical objects [39]. In this case, generic sensing capabilities can be used to recognize and classify objects. A sort of "signature" can be built for a physical object, and a corresponding digital twin can be created; it represents the "known" features of the "dumb" physical object. Creating more signatures will augment the "knowledge" about the physical object so that it can be recognized in the environment. Additional information about the physical object can be retrieved from the internet. In this way, a car could be recognized and, by crawling the internet, its brand name and some of its (static) characteristics could be associated with the Digital Twin. The DT could then continue to collect data about the physical object in the real environment and increase the "knowledge" about it. Figure 4 represents a basic functional diagram for dynamically creating Digital Twins. This approach could be used in smart cities, campuses, or large enterprises to dynamically recognize, represent, and monitor the behavior of objects within the environment.
Fig. 4 Dynamically creating Digital Twins by means of general-purpose sensing: basic sensing capabilities on physical objects feed synthetic sensing (AI algorithms, techniques, and tools), a signature search engine (map-reduce, Hadoop, …), and data collection (data lake, formatted data, …), which together maintain logical objects exposing APIs, DT features, and states, weakly entangled with the physical objects
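As a hedged sketch of the idea in Fig. 4, the fragment below shows how a recognized "dumb" object could be turned into a minimal DT record whose "signature" grows as new observations arrive. The functions classify() and lookup_public_info() are hypothetical stand-ins for an AI classifier and a public-data crawler; all names are illustrative.

```python
# Illustrative sketch: dynamically creating a basic DT for a recognized "dumb" object.
from datetime import datetime, timezone

def classify(frame):
    # Stand-in for a trained classifier; would return (label, confidence).
    return "car:genericBrand", 0.90

def lookup_public_info(label):
    # Stand-in for crawling public sources for static characteristics of the object.
    return {"category": label.split(":")[0]}

digital_twins = {}  # object_id -> basic DT record

def on_observation(object_id, frame):
    label, confidence = classify(frame)
    dt = digital_twins.setdefault(object_id, {
        "label": label,
        "static_info": lookup_public_info(label),
        "signatures": [],        # accumulated "knowledge" about the object
        "observations": [],
    })
    dt["signatures"].append({"label": label, "confidence": confidence})
    dt["observations"].append({"time": datetime.now(timezone.utc).isoformat(),
                               "data": frame})
    return dt

dt = on_observation("object-42", {"speed_kmh": 38, "noise_db": 71})
print(dt["label"], len(dt["observations"]))
```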
3 Digital Twin Properties and Their Relationships with Artificial Intelligence

As stated in [38], "The Digital Twin is the constant association between a physical object and its software representations within a virtualized environment. Events, actions, features, status, data and all the other information characterizing the (physical) object are to be timely reflected by the logical (software) object (and vice versa)". The Digital Twin fulfils the following properties:
–– Modelling: representativeness, i.e., the modelling and contextualization of the definition of the Digital Twin.
–– Reflection: the DT represents at each instant the actual state of the physical object with respect to the intended goals of the modelling.
–– Entanglement: the Digital Twin represents in a timely fashion the status changes and the events occurring in the lifecycle of the physical object.
–– Replication: the ability to replicate the physical object and maintain the state consistency of its different copies.
–– Persistency: the property of the logical counterpart of always being available despite possible failures or breakdowns of the physical object.
–– Memorization: the ability to keep the status information of the physical object, as well as the representation of the context in which it is operating, updated and fully available.
–– Composability: the ability to dynamically compose or decompose the digital twin into related parts (simpler DTs) and maintain their individual consistency within the general representation of the aggregated DT.
–– Accountability/Manageability: the ability to manage the DT and keep its usage accountable.
–– Augmentation: the ability to add functionalities and data to the logical counterpart of the physical object to provide new services.
–– Predictability: the ability to use the logical object to predict, understand, or simulate the behavior of the physical object within the current context or in different situations.
–– Ownership: the ability to identify the owner of the physical object and of its logical replicas, in order to attribute roles and responsibilities to different stakeholders; and
–– Servitization: the ability to extend and create functions of the physical object by means of logical objects and their use by different users/stakeholders.
Other "properties" may be considered, e.g., responsiveness, the time needed by the DT to correctly calculate the changes and the new status of the physical object, or accuracy, measuring how closely the model represents the real values. In any case, all these functions have relationships with Artificial Intelligence and Machine Learning technologies and their usage.

Representativeness refers to how to represent and model a physical object and its environment. Knowledge representation, ontologies, and design frameworks all play a fundamental role in designing a Digital Twin. The "context" in which the Digital Twin is operating is often described in terms of equations, constraints, or relationships between different parts of the physical object [16, 35, 52]. The notion of the digital twin's execution context can assume different forms, such as "multi-physics" capabilities and the integration of different components [46]. The modelling property refers to two aspects of the problem: on one side, there is the need to represent/model the physical object and its operational environment; on the other side, there is the need for a model of the digital twin system itself. This double requirement can lead to ambiguity and misinterpretation. As a rule of thumb, we will try to keep the problem-domain modelling separate from the models and frameworks used to represent and support the DT systems [55, 56]. Ontologies are formal descriptions for representing shareable and reusable knowledge across a domain. Ontologies are useful for modelling the DT and its environment; in addition, they allow some form of "reasoning" and navigation over knowledge nodes and graphs representing the current situation. Problem-domain ontologies, for instance, will be used to represent the knowledge of the problem domain (e.g., aeronautics, naval construction), while Digital Twin ontologies will be used to represent the organization of the system supporting the Digital Twin. As an example, the DT representation of an artifact (e.g., a statue) can benefit from different ontologies: a description of the cultural aspects (the artistic value), a physical description (comprising location, size, material/physical/chemical composition, and a description of the environment and its security), and a functional perspective (describing its functionalities and possible usage). Different ontologies can be used to represent single aspects of the physical object, but the DT has the merit of integrating the different views. Figure 5 represents a possible set of views (each of which should be supported by specific ontologies) of Bernini's Bust of Louis XIV in Versailles. The capability of representing and possibly integrating different views of a physical object into a DT is a great advantage because it allows a single point of "search" for the relevant information.
Fig. 5 The Digital Twin as an integrator of different ontologies and models: Bernini's Bust of Louis XIV described from a physical perspective (creation time: 1665; height: 80 cm; weight: n.d.; location: Salon de Diane, in the King's Grand Apartment; owner: Versailles Castle), a functional perspective (type: statue; offered function: representing a person), a cultural perspective (artist: Bernini; represents: Louis XIV; style: Baroque; value: grandest piece of portraiture of the Baroque age), and its history
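A minimal sketch of how such perspective-specific descriptions could be integrated into a single DT record is shown below, using rdflib with an entirely illustrative namespace and property names (no standard cultural-heritage ontology is implied).

```python
# Illustrative sketch: one DT graph integrating physical, functional, and cultural views.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/dt/")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

bust = EX["bust-of-louis-xiv"]
g.add((bust, RDF.type, EX.DigitalTwin))
# Physical perspective
g.add((bust, EX.creationYear, Literal(1665)))
g.add((bust, EX.heightCm, Literal(80)))
g.add((bust, EX.location, Literal("Salon de Diane, King's Grand Apartment")))
# Functional perspective
g.add((bust, EX.objectType, Literal("statue")))
# Cultural perspective
g.add((bust, EX.artist, Literal("Bernini")))
g.add((bust, EX.style, Literal("Baroque")))

# The DT becomes a single point of "search" across the different views.
print(g.serialize(format="turtle"))
```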
The DT could even play a proactive role by crawling information about the physical object and organizing it according to the different views. This enrichment is an important function and characteristic of the DT. On the other hand, collecting, organizing, and structuring the information is a demanding task that requires good modelling capabilities and a powerful computing and storage platform. In addition, an automatic scrutiny of the acquired information may be needed, and this adds computational and modelling complexity. To better focus on these dual modelling needs, the reflection, entanglement, and persistency properties can be considered. These properties do not inherently point to the features or characteristics of the physical object being represented by a software counterpart. They refer to:
• how well the Digital Twin system is able to "reflect" the status of the physical object (how well and how thoroughly the logical object is updated with respect to the physical one, and how well the status and characteristics considered in the model are capable of describing/reflecting the physical object), and
• how the system ensures that at least one logical entity is always present in the system to represent the physical object (which may be unavailable, broken, or malfunctioning).
Figure 6 represents the modelling, conceptualization, and knowledge representation of a physical object and its operational environment. The entanglement property refers to the capability of the DT system to represent/report the changes and events happening to the physical object or in its environment in a timely manner, while persistency implies that at least one logical object must always be present in the system to report the last known status of the physical object or to predict its possible status. AI/ML techniques can be applied to measure the adequacy of the "reflection" or the timeliness of the entanglement between the physical object and its logical counterparts.
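For instance, a very simple, hedged sketch of monitoring the timeliness of the entanglement might compare the timestamp of the last physical-object update with the DT state and flag violations of a reflection-lag budget; the threshold and field names are illustrative assumptions.

```python
# Illustrative sketch: measuring reflection lag to check the entanglement property.
from dataclasses import dataclass, field
import time

@dataclass
class LogicalObject:
    state: dict = field(default_factory=dict)
    last_sync: float = 0.0          # time of the last update from the physical object

    def reflect(self, measurement: dict) -> None:
        self.state.update(measurement)
        self.last_sync = time.time()

    def reflection_lag(self) -> float:
        return time.time() - self.last_sync

dt = LogicalObject()
dt.reflect({"temperature_c": 41.2})
MAX_LAG_S = 2.0                     # illustrative entanglement budget
if dt.reflection_lag() > MAX_LAG_S:
    print("entanglement violated: DT state is stale")
```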
Fig. 6 Modelling of a physical object and its environment: in the real world, ontologies and models support the conceptualization of the problem domain and of the physical object; in the DT system, this conceptualization becomes a software model (the Digital Twin), with specific applications (how the DT interacts with and contributes to the problem-domain representation), management applications (how the DTs are supported by the software system), and models/ontologies interoperability bridging the two
The replication property is also a system-driven feature: it aims at providing users/programmers with the possibility of operating on a replica of the physical object that is explicitly devoted to a subset of users, or that represents a limited set of data/views of particular interest to a user. Coordination and synchronization of these copies can benefit from the application of AI/ML techniques such as predictive algorithms, data replication, and derivation. Memorization means the ability to collect real-time data, as well as the ability to organize historical data (e.g., time series) so as to allow the rapid execution of Machine Learning algorithms for pattern recognition and the prediction of possible outcomes. For instance, the mere identification of outlier values in a DT, or in one of its components, can trigger checks and management policies on the physical object. Data fusion techniques exploiting the identification of relationships between different types of data can be used to understand how different events are influencing the behavior of the physical object. The Digital Twin is evidently a point of aggregation of different data. The issue of which data to use, and in which formats to store them, is a major one, because the data organization can have an impact on the information extraction or the type of data analysis that can be performed, and hence on the understanding of the behavior of the physical object within the environment. In principle, all the possible data should be collected and stored so as to allow for algorithm variations or the application of new data fusion techniques. However, storing all the raw data is not always feasible, so policies for the efficient storing and preprocessing of data may be necessary. Composability is a difficult subject in this context. It refers to the ability to reconcile the representation, modelling, execution, and operation of parts of the Digital Twin and their physical counterparts with an integrated and consistent representation of the entire DT (and of the associated complex physical object). This may require multi-physics (i.e., the simultaneous processing/simulation of different real-world aspects according to specific individual models, e.g., gravity and the temperature distribution of a physical system or its components), multi-constraint representations, and
a logic capable of integrating the specific models into the general one for the physical object [30]. The DT system should support this capability and, to this end, it may exploit approaches and algorithms similar to those used for coping with complexity in very large systems. Ambiguity about the concept of composability may arise, and so the issues should be brought back to the appropriate application domain or system view. Accountability and, even more so, manageability are two of the areas related to system lifecycles that could best exploit AI/ML algorithms. For instance, the above-mentioned persistency, or other management functions, can greatly benefit from the application of Artificial Intelligence. In an edge-based solution, replication, the prediction of management issues, and the application of policies can be strongly supported by AI-based capabilities. The self-management of DT system components gives an idea of the span of the applications and their merits. Having a DT system capable of guaranteeing the self-management of its components would be extremely relevant for applications in the Industrial Internet of Things, Smart Manufacturing, and other domains. In fact, the DT of a robot can predict the need for maintenance and could identify the best policy for a smooth degradation of service, limiting the impact on the production chain. The application of AI is also instrumental in facilitating the working and exploitation of the DT system itself, while being only slightly related, or not related at all, to the specific problem domain at hand. Tiny AI solutions [19] can be instrumental for exploiting AI algorithms at the edge for the management of DT components, as well as for the execution of AI functions related to the specific problem domain. The augmentation property refers to the possibility of extending the functionalities offered by the physical object by means of the Digital Twin: the user can access a wealth of new functionalities that are not directly implemented or provided by the physical object. AI technologies can help transform the physical object into an interacting object capable of understanding and accommodating user requests and needs [9]. These functionalities can be specific to the problem domain of the DT, or they may fall into the realm of facilitating the management and usage of the DT system. In this case, a certain ambiguity can arise when extending the logical object: the functional extension of the physical object's properties will most likely have an impact on the real-world modelling of the solution, while the extension of supporting functions instead refers to the system view and operation. Predictability has similar facets. On one hand, the DT system supports the simulation, or the ability to predict, the behavior of the physical object, possibly over a parametrized representation of the environment [27]. Evidently, this simulation refers to, and is intrinsically based on, the models of the physical object and its environment. However, predictability may also be an important aspect of the DT system itself, especially from the perspective of self-management: understanding the future behavior of the DT system is helpful in guaranteeing and supporting the relevant properties that characterize the DT representation and execution. While the needs and the technologies for supporting these two types of predictability can be extremely different, the data used to support the physical object's predictability should be those stored for, and related to, each individual DT.
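Returning to the memorization property discussed above, a small, hedged sketch of how outlier detection on the stored time series could trigger a management check is shown below; the features, model choice, and contamination level are illustrative assumptions, not part of any specific DT platform.

```python
# Illustrative sketch: outlier detection on the DT's stored time series triggering
# a management check. Features, model, and threshold are purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Stored history of two monitored quantities (e.g., temperature, vibration).
history = rng.normal(loc=[40.0, 1.2], scale=[0.5, 0.05], size=(1000, 2))
latest = np.array([[44.8, 1.9]])          # most recent reflected state

detector = IsolationForest(contamination=0.01, random_state=0).fit(history)
if detector.predict(latest)[0] == -1:     # -1 means "outlier"
    print("anomalous state detected: schedule a check on the physical object")
```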
Ownership is another property that may generate ambiguity. While the ownership of a physical object is usually clear, the ownership of replicas and models of physical objects devoted to groups of users or to stakeholders may introduce more complexity. Therefore, it is important to distinguish between aspects particular to the application domains and those related to the functioning of a large DT supporting system. A good design choice is to keep them totally separated, or to offer some AI functions as services that can be specialized to work with several application domains. Finally, servitization, like the augmentation property, refers mainly to the functionalities of the physical object that are offered to the final users as services. This may be an area for the application of many AI techniques (from the prediction of users' behavior to contextualized reasoning and situation awareness), and they should be kept decoupled from similar issues arising in the design and functioning of a large DT supporting system. The DT could be instrumental in supporting a personalized usage of the physical object according to the actual needs of the user (e.g., some AI-based smartphones are capable of decreasing battery consumption by predicting and accommodating the expected user behavior). A typical problem in the identification of AI technologies for Digital Twin implementations is the fragmentation and specialization of the solutions. In addition, there is a sort of dichotomy in the nature of the DT between the solutions tackling the problem domain and those supporting a DT system view. AI tools and algorithms focus more on the resolution of a specific issue (e.g., the prediction of a precise phenomenon) than on predicting at large the behavior of the DT and its support system. These issues point directly to the need for an integrated approach in the creation of DT-based solutions.
4 The Digital Twin and Its Dependency on Data

The availability of data is a major issue for the definition and execution of the Digital Twin. A continuous flow of data must be available in order to update the salient characteristics of the Digital Twin and its parts with respect to the actual status of the physical object. Data should also be available in order to have an updated description of the environment in which the physical object is operating. Finally, data are needed to monitor how the DT system is supporting the properties of the DT at the level deemed necessary for creating a useful, efficient, and actionable liaison between the physical object and its logical counterparts. Four different types of data are considered in this analysis:
• Object data: the measured data strongly related to the current status of the physical object, its parameters, and its adaptation values.
• Environment data: the measures and dynamic characteristics of the environment, as well as the environmental data that are directly affected by the presence and behavior of the physical object in the environment.
• Indirect data: the data that can be correlated to the physical object and its operational environment in a dynamic or static way, also comprising the information that can be derived from the data fusion of several data sets related to the physical object or the operational environment; and
• Historical data: the collected data, organized to create well-formed data sets. They comprise measurements as well as location and timestamp information, and they relate to dynamic as well as static features of the physical object and its evolving environment. Additionally, simulation data (as inferred data) can be added to the historical data to map the simulations to possible evolutions of the situation.
These different types of data can be collected and elaborated locally or at the edge (especially for real-time purposes), or collected and further elaborated for simulations, post-analysis, and additional goals. Figure 7 represents the different sources of data in accordance with the above categorization. The realm of applicability of AI in this area is vast, ranging from the tuning of the modelling of the Digital Twin and the operational environment to the extraction of additional information from external data sets (e.g., social media reactions to the behavior of a physical object), and from the understanding of how the physical object affects the environment to the usage of environment data and the initial status of the physical object to perform simulations. All these data must be curated and somehow certified in order to maintain high levels of accuracy in the timely representation of the physical object and its environment.
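As a purely illustrative sketch, the four data categories could be organized along the following lines; all field names are assumptions made for the example, not a proposed standard.

```python
# Illustrative sketch: one possible organization of the four data categories.
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class ObjectData:            # measured data tied to the current status of the object
    measurements: Dict[str, float]
    parameters: Dict[str, float]

@dataclass
class EnvironmentData:       # environment measures affected by / affecting the object
    measurements: Dict[str, float]

@dataclass
class IndirectData:          # correlated data, e.g., fused external or social-media data
    sources: Dict[str, Any]

@dataclass
class HistoricalData:        # well-formed data sets, with time and location stamps
    records: List[Dict[str, Any]] = field(default_factory=list)

    def append(self, timestamp: str, location: str, values: Dict[str, float]) -> None:
        self.records.append({"time": timestamp, "location": location, **values})
```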
Fig. 7 The Digital Twin and available data: real-time object data (measured data, dynamic object parameters) and environment data associated with the physical object (behavioral data, perturbation data, …) are collected at the edge; indirect data (external data sets, social media, inferred data from different sources) and historical data (inferred data, correlation data, static object and environment data) are processed in batch in the cloud
Another important aspect is the "directionality" of the (real-time) data. The flow of data between the physical object and the DT must fulfil the requirements of the applications that will exploit the concept of the Digital Twin. Often, real-time requirements (e.g., in production systems) are very stringent and demanding in terms of processing, communications, and storage, as well as sensing and actuation. Edge intelligence, i.e., the application of AI techniques as early as possible, may be necessary to represent and understand the behavior of the physical object, as well as to modify the behavior of the physical entity. The so-called Tiny AI [19] is an attempt to reduce the footprint of AI algorithms and to allow their execution on edge computing resources. This makes it possible to execute complex AI tasks close to where the data are generated. The advantage for DT implementations is that many decisions and predictions can be taken locally and in real time, without depending on a centralized intelligence and infrastructure (e.g., replicas of physical objects). One aspect that should be considered for the application of AI to the DT representation is the type of communication capabilities and the level of programmability of the physical object. In other words, if the physical object is not capable of receiving and executing commands from its logical counterparts, the Digital Twin and the applicable AI techniques are only of a predictive nature: data will be collected and analyzed to better understand the possible future behavior of the physical object, and while minimal feedback can be provided, limited to "programmable" logical entities, no actions are executable on the physical object. This is the typical case in which a physical object's behavior is derived by means of sensors and related measures. For instance, in a smart city, cars are not necessarily represented as Digital Twins. By means of classification algorithms, car types can be identified and simple DTs can be created. This representation can be used to understand the effects of the identified types of cars in terms of traffic intensity or pollution; their footprint in terms of pollution, noise, and other properties can be measured and monitored. In similar situations, the DT representation can mainly be "predictive", i.e., the DT will provide predictions about the expected behavior of the physical entity and its impact on the environment. There is another important aspect to consider: the sensors will provide specific measures that only partially characterize the status of the physical object, and the collected data will therefore contribute to a partial view of that status. For instance, a crossroad in the city will be represented by the traffic intensity, the noise, and the pollution present during time intervals; nothing can be said about the most polluting or noisy agents operating in that location. The type of data collected will therefore strongly characterize the representation of the physical object, and hence the possible data correlations and the information inferable with AI/ML technologies. The representation of a physical object by means of add-on sensors exemplifies the problem of measuring and collecting data about a physical object in order to fully grasp its "essence". There is a difference between physical objects that are designed and implemented to be fully monitored and those that are instrumented later for the purpose of monitoring some specific properties of an environment. This difference will also affect the quality of the available data and the capability of inferring information.
The physical object may be capable of receiving data from its logical counterpart. This capability can be useful for adjusting the status of the physical object or for tuning its measurements and interactions with the environment in which it is operating. In this case, the physical object can be considered updatable with respect to the "calculations" and expectations of the logical counterparts. An additional capability is the possibility to operate in the environment and to change it according to the goals and purposes of the applications using the DT and the physical object. The actuation capability is an important feature for realizing advanced applications that fully exploit the physical–logical properties. If the physical object can receive and execute commands, then it is somehow programmable, and so the development of reactive or autonomic policies based on complex AI algorithms can be feasible and effective. Actuation can be a built-in or an add-on property of the physical object. Typically, if an object has built-in actuation features it will be fully instrumented with sensors, and so it can be considered connected and programmable. Communication is then bi-directional. This enables the ability to receive commands and scripts to be executed locally, or to request the execution of policies by an object. The ability to impact the environment and to interact with other objects operating in it allows for a more effective modification of, and control over, the environment. A crossroad in a city could be instrumented with programmable traffic lights, and the environment can be affected by the chosen policies. However, the value of the possible actions and executable policies is not comparable to that offered by objects that have a more granular and complete understanding of the events, relationships between objects, and situations occurring in the environment. A more granular understanding, in conjunction with a granular capability to interact with the environment (actuation), enables a greater impact on the environment. Having actuation capabilities in the physical object (or mediated by its logical representation) makes it possible to create Reactive or Autonomic Digital Twins. Programmability is the key to this transformation, together with the possibility of communicating with the physical object and the associated actuators. The level of applicability of AI techniques increases with these properties. In these contexts, data must be available and continually updated, as well as actionable, in order to execute policies that may affect resources and people. Another important aspect of the data is related to their relevance for similar environments. Determining correlations between the data collected or represented by the DT and its environment, together with the ability to abstract away idiosyncrasies introduced by the specific location, makes it possible to compare data from different locations (e.g., different cities) and promotes "transfer learning" into similar and possibly homogeneous environments [61]. For instance, collecting granular information about traffic types, meteorological data, city peculiarities, and other variables can enable the creation of parametrized models that can be effective in many cities or contexts.
5 Designing a Smart Digital Twin System with AI

In this section, an example of the definition of a digital twin in relation to the usage of some AI technologies is provided. This example is not exhaustive, and it is far from any actual implementation in the field; it is a means to explain how some techniques could be used in defining the type of Digital Twin and in designing it in relation to a real-life object. The chosen physical object is a gas/oil/water separator as employed in the petroleum industry for the separation of different substances. There is a vast literature presenting the design, the methods, and the merits of this approach (there is an entire series in the Journal of Petroleum Technology; here we refer to [10]). A simple gas/water/oil separator is represented in Fig. 8. The working of the separator can be schematically described as follows: the mixture of gas, water, and oil is forced through an inlet into a vessel; there, a set of physical separators helps break the flow of the mixed fluid and allows the different components to settle and initiate the separation. As seen in the figure, the oil surfaces over the water and spills into a separate area for collection, while the water remains in the first part of the container. The water can also contain additional material (e.g., sand). The gas, due to the separator and the forced flow, moves to the upper part of the vessel. A digital twin of the separator can be created in order to represent the status of the physical object and its current and future states. Some sensors are needed to measure the levels of the different materials as well as the flows of the different substances; these sensors can also provide information about the current operation of the vessel and its possible issues. The sensors considered in this example are related to the measurement of the in- and out-flows, the pressure of the gas within the vessel, and the levels of liquids. They can be complemented with temperature sensors to determine the effectiveness of the separation process. Additional sensors are present in actual separators in order to control other parameters affecting the separation process. The goal is to tune the in-flow so as to permit the separation to occur.
Fig. 8 A schematic gas/water/oil separator: the inlet (with a flow sensor) and diverter feed the vessel; a flow-distribution element, separator plate, and gas extractor aid the separation; liquid-level and sand-level sensors monitor the vessel; and the gas outlet (with a pressure sensor), oil outlet, liquid outlet, and sand/solid outlet are each equipped with flow sensors
The levels of liquids and the pressure will be checked in order to keep the process within safety levels and within the quality measurements of the process. For instance, maintaining a determined ratio between the water and the oil makes the process more effective; for this example, the optimal ratio is 50%. The in-flow could obviously have a very different ratio between water and oil, and this ratio may also vary over time. The vessel is represented here as a digital twin at a very simple level, to keep the description clear and meaningful also for readers outside the problem-domain field. An example of the sensors used for controlling an oil/gas separator can be found in [4]. This example will focus first on the control of the vessel levels based on some "reasoning" techniques, then on techniques for predicting properties of the separator, and then on AI techniques that help identify the need for management interventions. Finally, a few considerations about "transfer learning" are provided.

Intelligent Control of Levels and Automatic Adjustment by Means of a Rule-Based System
The goal in this case is to control and govern all the "variables" in play in such a way as to optimize the operation of the vessel. In [28], a fuzzy logic system is considered to optimize the gas–liquid ratio (GLR), because this variable has a direct effect on the quality of the oil produced during the separation process. In the following, some simple examples are given (they only have the value of presenting some possible approaches and should by no means be considered "solutions" to the problems). The goal of this example is to design a system capable of regulating the in-flow and the out-flow so as to keep, inside the separator, a quality ratio close to the optimal one. The chosen technology is a rule-based knowledge system from which to extract decisions. Figure 9 depicts a simple rule-based system that uses a knowledge base for taking decisions about the levels and other parameters of two vessels. In this case, there is a requirement to keep a ratio close to 50% between water and oil in the tank. The inlet liquid could have a different ratio, so there is the need to regulate how much water is in the vessels by opening and discharging water (and oil) or by letting more mixture into the tank.
Fig. 9 A basic rule-based system integrated into a control system for the management of two vessels: sensors on vessels a and b feed sensed data (through an IoT subsystem) to a knowledge-based system; its decisions are turned into commands for the vessel actuators, while the modelling and decision-support mechanisms constitute the Digital Twin
The ratio of 50% is arbitrary, but it refers to a good mix that can result in a higher quality of the separation process. The two vessels are monitored, and the system provides data about the levels of the different liquids as well as their ratio. Data are collected into a database of sensed data. By means of read operations (in the Prolog language), these data can be passed to a knowledge-based system as actual facts about the status of the vessels. The decision-support system will then query the knowledge base in order to "understand" what to do. In this case, a simple rule system written in Prolog is considered. It describes some facts and rules about two vessels (vessel a and vessel b). These facts represent the level of water and oil in the two vessels and the height of the separators in the two vessels; ratio is an indication of the mix of oil and water present in the tank, and it is assumed that a level of 50% is needed in order to provide a good-quality separation. In addition to facts, the knowledge system can also represent rules, i.e., descriptions of concatenations of facts and other rules that express information about a specific set of facts of the system. In this simple case, two rules are represented: purge, indicating whether there is the need to purge water from a vessel; and whatratio, which indicates the quality of the mixed liquids and can be used to understand whether more liquid should be let in to modify the mix ratio. Figure 10 represents the queries that can be asked of this simple system. It shows that purge provides values for purging the vessels, and that whatratio gives back a value indicating the deviation of the mixed-liquid percentage from the target. The control system can regularly query these rules, and their results are used to determine and enforce policies related to the regulation of the flows and liquids within the system. Obviously, a real rule-based system would be more complex than this: it should consider, for instance, the pressure, the temperature, and many other important parameters of the process. The knowledge base is a good representation (created by experts and supported by their knowledge and experience) that can be used to control the operation and to take the right decisions in time to automatically optimize the processes. One of the issues could be the difficulty of managing critical situations that have not been "coded" or represented by facts or rules. One of the advantages of this method is that it is possible for humans to understand the decision (the rule and the facts) that triggered the behavior of the rule-based system.

Prediction of Properties of the Separator
In this case, a set of sensed historical data (time series) will be the basis for an analysis of the behavior of the vessel and the prediction of its states. Different types of Neural Networks are applicable, and they should be chosen on the basis of the quality and availability of data, as well as the target accuracy and the training and processing time needed. This analysis has to be specifically carried out in order to choose the best mechanisms with respect to the available data set. In this example, a generic neural network is considered, and its major features (i.e., the input variables) are represented in Fig. 11. A part of the available data will be used for training and for specializing the neural network towards the level of precision required.
Fig. 10 Simple queries to a knowledge base. The facts are:
vessel(a, water, 102).     vessel(a, oil, 80).
vessel(b, water, 110).     vessel(b, oil, 100).
ratio(a, 0.47).            ratio(b, 0.53).
separator_height(a, 100).  separator_height(b, 95).
The rules are:
purge(X, Y) :- separator_height(X, W), vessel(X, Y, Z), Z > W, write("purge "), write(Z).
whatratio(X, Q) :- ratio(X, Y), Q is (0.5 - Y) * 100.
The query purge(X, Y). prints "purge 102" with X = a, Y = water, and "purge 110" with X = b, Y = water. The query whatratio(X, Q). returns X = a, Q = 3.0000000000000027 and X = b, Q = -3.0000000000000027.
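For readers less familiar with Prolog, the fragment below is a hedged re-expression of the same two rules as plain Python checks inside a polling control loop; an actual deployment could instead query a Prolog engine or the knowledge-based system of Fig. 9. The vessel values are those of the toy example.

```python
# Illustrative sketch: a control loop re-expressing the purge/whatratio rules in Python.
vessels = {
    "a": {"water": 102, "oil": 80, "ratio": 0.47, "separator_height": 100},
    "b": {"water": 110, "oil": 100, "ratio": 0.53, "separator_height": 95},
}

def purge(vessel):
    # Equivalent of the Prolog purge/2 rule: any liquid above the separator height.
    return [(liquid, level) for liquid in ("water", "oil")
            if (level := vessel[liquid]) > vessel["separator_height"]]

def whatratio(vessel):
    # Equivalent of the Prolog whatratio/2 rule: deviation from the 50% target mix.
    return (0.5 - vessel["ratio"]) * 100

for name, vessel in vessels.items():
    for liquid, level in purge(vessel):
        print(f"vessel {name}: purge {liquid} (level {level})")
    print(f"vessel {name}: ratio deviation {whatratio(vessel):+.1f}%")
```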
Fig. 11 A neural network solution for vessel management: an input layer (pressure level, vessel temperature, inlet flow, water level, oil level, sand level), a hidden layer, and an output layer suggesting actions such as purge (liquid, quantity), inlet (quantity), increase (temperature), and maintenance (sand)
In this case, the neural network will return indications of the actions to perform in order to optimize the working of the vessel and to predict the need for maintenance, so as to guarantee a high level of safety and quality in the process.
The quality of the process depends on the quality and reliability of the data set used. Unfortunately, there are no public-domain data on such vessels, and it is not possible to give a numerical example on the basis of real data.

Decision Tree for Management Support
In this example, a small data set with the levels (in percentages) of water, oil, and sand is considered (these are not real data; they have been defined on purpose for this example). A high level of sand means that the vessel may be working improperly, and maintenance is then needed in order to remove the sand and other materials. The data also show that when the percentage of water is too high (typically above 50%), a purge of water is needed; the same holds for oil. If the values of oil and water are close to 50% and there is not too much sand, no operation is needed (indicated as nil). When sand reaches a high level, the situation of the vessel can be defined as critical, and an intervention is needed. In this example, a decision-tree-based approach is chosen. A decision tree is a tool that uses a tree-like model to represent decisions and their possible consequences: each branch represents the outcome of a decision, and each leaf node represents a class label (the decision taken after evaluating all attributes). There are several types of algorithms for "calculating" decision trees [41]. Figure 12 represents the data and two decision structures determined by using the WEKA tool: the first is based on the J48 decision tree and the second is a Random Tree [3]. They operate on the same data and produce two decision trees; the Random Tree is more detailed and results in more precise decision choices, which seem to fit well with the simplified decision tree produced by the J48 algorithm. The two results can be used in a control system not too different from the one presented in Fig. 9: in this case, instead of a knowledge base and a decision engine, a software component that generates and maintains the chosen decision-tree model can be used to instruct the control engine to take actions. The data set used (in ARFF format) is the following:
@relation vessel2
@ATTRIBUTE waterlevel REAL
@ATTRIBUTE oillevel REAL
@ATTRIBUTE sandlevel REAL
@ATTRIBUTE command {purge-water, purge-oil, purge-sand, nil, critical}
@data
0.52,0.45,0.03,purge-water
0.55,0.4,0.05,purge-water
0.49,0.49,0.02,nil
0.49,0.41,0.1,purge-sand
0.4,0.45,0.15,purge-sand
0.6,0.39,0.01,purge-water
0.55,0.35,0.1,purge-sand
0.45,0.52,0.03,purge-oil
0.65,0.33,0.02,purge-water
0.6,0.2,0.2,purge-sand
0.45,0.52,0.03,purge-oil
0.55,0.4,0.05,purge-water
0.4,0.3,0.3,purge-sand
0.7,0.2,0.1,purge-water
0.8,0.1,0.1,purge-water
0.3,0.4,0.3,purge-sand
0.42,0.4,0.18,purge-sand
0.55,0.4,0.05,purge-water
0.46,0.46,0.08,nil
0.4,0.52,0.08,purge-oil
0.3,0.6,0.1,purge-oil
0.3,0.4,0.3,purge-sand
0.4,0.25,0.35,purge-sand
0.55,0.25,0.2,purge-sand
0.52,0.46,0.02,purge-water
0.49,0.49,0.02,nil
0.25,0.25,0.5,critical
Fig. 12 Creating decision trees for supporting the management needs of an oil/water separation vessel: the J48 pruned tree and the Random Tree produced by WEKA from the data set above
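As a hedged companion to the WEKA experiment, a comparable (but not identical) CART-style tree can be trained on a few of the rows above with scikit-learn; this is not the J48 or Random Tree algorithm used in the figure, and it is included only to show how such a model could feed the control engine.

```python
# Illustrative sketch: a CART decision tree (scikit-learn) on part of the toy vessel data.
from sklearn.tree import DecisionTreeClassifier, export_text

# (waterlevel, oillevel, sandlevel) -> command, taken from the data set above.
X = [[0.52, 0.45, 0.03], [0.49, 0.49, 0.02], [0.49, 0.41, 0.10], [0.45, 0.52, 0.03],
     [0.65, 0.33, 0.02], [0.60, 0.20, 0.20], [0.30, 0.60, 0.10], [0.25, 0.25, 0.50]]
y = ["purge-water", "nil", "purge-sand", "purge-oil",
     "purge-water", "purge-sand", "purge-oil", "critical"]

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(export_text(tree, feature_names=["waterlevel", "oillevel", "sandlevel"]))
print(tree.predict([[0.55, 0.40, 0.05]]))   # suggested command for an unseen state
```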
In order to evaluate the applicability of the two decision trees, a longer and more accurate data set would be needed.

Transfer Learning
These techniques can be "transferred" to other vessels in order to control and optimize the process in other devices and zones. In case of such a transfer, it is important to check whether the data (the facts or the historical data) are consistent with the actual status and possible values of the new device. For instance, if the sand ratio is higher at the new site, a careful analysis of the decision tree (and a more frequent check on the level of residual material in the vessel) may be needed; the percentages may also be changed in order to reflect the "quality" of the inlet flows. However, a great deal of insight, as well as the basic mechanisms, can be tuned for the new site, leading to satisfactory results without losing too much time on the re-training or re-organization of the data and rules.
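A minimal, hedged sketch of this kind of reuse is shown below: a network trained on abundant data from one separator is fine-tuned with a few passes over a small data set from the new site. Both data sets are synthetic and purely illustrative, as is the assumed relationship between levels and purge volume.

```python
# Illustrative sketch: reusing a model trained on site A and fine-tuning it on site B.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Site A: plenty of historical (water, oil, sand) levels -> required purge volume.
X_a = rng.uniform(0.0, 1.0, size=(2000, 3))
y_a = 100.0 * X_a[:, 0] + 20.0 * X_a[:, 2]            # synthetic relationship

# Site B: only a handful of samples, with a different sand contribution.
X_b = rng.uniform(0.0, 1.0, size=(50, 3))
y_b = 100.0 * X_b[:, 0] + 45.0 * X_b[:, 2]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
model.fit(X_a, y_a)                                    # learned on the original site
for _ in range(20):                                    # a few fine-tuning passes on site B
    model.partial_fit(X_b, y_b)
print(model.predict([[0.55, 0.40, 0.05]]))
```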
6 Some Guidelines for Building AI-Based Digital Twins

The design of Digital Twin systems is not well standardized, but it is based, in many cases, on consolidated industrial, IoT, or agent platforms. A number of middleware solutions are available [26, 33, 38, 45, 47, 57] and standards are emerging [20]. Consequently, several "methodologies" or proposals for using these platforms in a fruitful manner are emerging [8, 59]. Some of these also focus on the need to carefully model the data supporting the effective definition of a Digital Twin (e.g., [13, 25, 52]). There is also a specific focus on the issues related to the usage of AI technologies for supporting the goals of Digital Twin systems [2, 42, 43]. The approach applied in this section is to identify a few of the functions that can be included and used in a Digital Twin platform to exploit the possibilities offered by AI, as well as some major checkpoints that should be satisfied during the design phase of a Digital Twin system. Figure 13 depicts the functionalities envisaged for a fully-fledged AI-enabled Digital Twin system.
Fig. 13 Partitioning of "intelligent" AI functions for a Digital Twin system: knowledge representation, learning functions, reasoning functions, explainability functions, visualization functions, and security functions, built on top of a data lake
The reader can recognize many of the functionalities discussed so far. These functionalities should be data-driven, in the sense that the data received from the physical object are to be framed, processed, and "understood"; i.e., they are the basis for building the entire system. Models and learning functions can be built on top of that system in order to predict its possible future states and to understand its behavior and that of its components. The interrelations between the different parts of a DT, as well as the influence of the environment, can be represented and designed. Reasoning functions are additional capabilities that can be used when the "logic" of a Digital Twin must cope with uncertainty, and/or when it needs to be highly adaptive and autonomic. Depending on the type of system and the supported/expected applications, these reasoning functions may be optional. A new issue in the AI realm is related to the possibility of explaining the decisions and results of complex AI systems [29]. Explainability functions are needed so that the findings of the AI engine can be presented to the users and, in the case of reasoning functions, so that the reasoning that led to the policies and strategies adopted by the system can be shown. From this perspective, the DT approach can be useful because the "explanation" of decisions and results can leverage the presence of a formalized model of the object and its environment. Visualization will help users understand the results. Security functions are needed to help the system cope with external or internal security and trust menaces. Designers defining a smart Digital Twin system may want to follow a simple set of guidelines to understand how to proceed. Several checks on the modelling aspects and their compatibility with the data are needed to ensure that the real-world physical object data are driving the entire system. These steps could be the following:
1. What type of DT?
• The type of DT (passive, predictive, reactive, or autonomic) needs to be defined early, to understand whether the data and technologies can support the requirements.
2. Data Availability
• Are the data sufficient to describe the behavior of the object(s)?
• What is the data format? How can the quality of the data be guaranteed?
• Are there missing data? If yes, can they be inferred from the environment? From elsewhere?
3. Data Entanglement
• Are the data available on time?
• Are the data available at the right time?
4. Model
• Do the data fit the available models and ontologies?
• Are extensions needed?
5. What type of Learning?
• Map the chosen algorithms to the data availability.
6. What types of Reasoning?
• For an optimization, "simple" reasoning; or
• For situation awareness, more complex technologies for coping with uncertainty.
7. How complex are the environment and the DT system?
• Which security measures to adopt; and
• How to govern the internal and external interactions.
The first four steps (represented in Fig. 14) are general for a "simple" DT system (i.e., one reflecting the behavior of a physical object). It should be noted that if data are not available at the right time and in the right format, the basic properties of a DT may not be supported (e.g., entanglement and reflection), and so the system will not represent the real status and behavior of the physical object. A second phase of the process is related to understanding what kind of "AI functions" are needed for a Digital Twin system. The distinct AI technologies are not specified here because they may depend upon the specific application domain at hand. Figure 15 represents some choices that designers should consider. The choice depends on the problem domain, as well as on the envisaged future "reasoning" needs of the platform. The designer can, for instance, frame the current version of the system into a more complete version of the platform because he/she foresees the future need to make use of additional, more powerful AI functionalities. The choice of technologies also depends on the type of DT and, as in the previous phase, on the available data that will validate the deliberations of the DT system.
7 Operational View on the Digital Twin
The adoption of a Digital Twin solution has a relevant impact on an enterprise's processes and on the way a company works on its products and services. From an operational perspective (i.e., the design, implementation, running and use of a Digital Twin platform), the issues are many, and overcoming them involves a well-structured approach supported by the entire enterprise and its ecosystem. Among these issues, we can list the ability to master technologies, the availability of data, the definition and adoption of well-defined processes, and the ability to constantly consider the "complexity" behind the Digital Twin representation and how to act to better describe the actual behavior of the physical object. Data and the application of Artificial Intelligence techniques are central to the successful adoption of Digital Twin approaches, but their usage requires a considerable effort in terms of processing and organization. As highlighted in [31], a successful strategy for the adoption of AI
Fig. 14 Defining a simple DT system (flowchart: choose a type of DT; check that data are available, on time, and consistent, inferring additional data and checking requirements where needed; model the DT; then decide whether AI is needed)
(and DT) must involve the entire ecosystem and use the DT system to fulfil the needs and requirements of all the stakeholders. A few critical success factors have been identified for the health sector [31], and they can be generalized to other problem domains. They include the following:
• Top executive support (i.e., the direct involvement and support of the management hierarchy of the enterprise).
• Current sector demand (the AI tools should meet the real needs of the people operating in the field).
• Department/Enterprise consensus (i.e., a large organization within the enterprise should support the approach and find it useful).
• Dedicated AI specialists (i.e., a team of specialists dedicated to capturing and representing the requirements and the skills of people operating in the field).
Fig. 15 The AI functions that may be required (decision flow: if AI is needed, choose between learning, hybrid AI, or learning and reasoning, with algorithms fitted to the problem; if the model must cope with complexity, choose model-based AI or AI for coping with uncertainty; the branches map to predictive, reactive, and autonomic DTs)
• IT department support (i.e., how to use these tools should be easy and obvious for non-IT people).
• Tangible benefits (i.e., each department should profit from the use of AI and DT).
• Clarity of results (i.e., AI tools should offer clear explanations of their results); and
• Continuous optimization of tools and solutions (i.e., self-configuring, training and other operational activities should be made transparent to users).
These critical factors have technical counterparts to consider in the definition, construction and usage of Digital Twins encompassing AI technologies. Some of the major problems are briefly listed here:
• Problem domain size. It is important to know how much data is needed to adequately represent the problem domain and the different "cases" that are probable and interesting. The more complex the problem domain, and the more entities are interrelated, the greater the amount of data needed. In the literature, there is a trend to consider that "big data"-based results could be biased, and under these conditions their usage without a proper analysis may lead to inaccurate results (e.g., [11, 12, 14, 23]). In addition, when the amount of data is so huge, processing it may be impractical [37]. The Digital Twin approach is, most likely, exacerbating this problem for two major reasons: the multiple facets
to be represented by a single digital twin, and the number of relationships and cross-references between cooperating/aggregated Digital Twins. Correlating many DTs and their data could generate an exponential growth in both the data and the processing time. The problem domain data size is also strongly related to the storage and processing power required to fully process large quantities of data. Choosing AI algorithms that are O(n) or even worse can quickly lead to unmanageable situations with large values of n (a minimal numerical illustration of this growth is sketched after this list).
• Availability and quality of data. This issue refers to the need to have a large quantity of well-formed and possibly labelled (i.e., marked-up) data that can be readily used by AI algorithms [48]. This problem has been addressed by defining the needs for data that are curated and well-formed so that they can be used (and reused) by different systems [62]. The lack of data is one of the issues for the successful usage of AI applications.1 There are techniques to mitigate this issue2: e.g., supervised and semi-supervised learning [53, 58], transfer learning [63], synthetic/artificial data generation [44], ontology definitions [18], and reinforcement learning [21] help to create well-curated data. The quality of the data being processed is another relevant issue, as clearly stated by Google researchers [50]. It should be mentioned that a number of well-curated data sets are available as open source. These data sets are well-formed and are used by researchers and practitioners for studies on algorithms and models. In industry, data has an intrinsic value and must be protected and preserved from competition. Hence, data must be prepared and treated/processed internally. This requires a huge effort in terms of personnel and processes as well as processing and storage.3 In addition, the policies and mechanisms to segment data for privacy and security reasons will introduce additional effort and costs.
• Infrastructure (between cloud and fog) – Hardware issues. Large AI-based projects require a large computing infrastructure to process large amounts of data as quickly as possible. The conjunction of Artificial Intelligence and the Digital Twin approach will most likely add performance requirements to a large, distributed infrastructure. The AI project (depending on where the largest part of the data is located) can reside in-house or in the cloud. In an AI-DT application, the distribution of functions close to the physical objects is a likely requirement for the collection of data and for the run-time execution of some functions. Creating a large edge-cloud infrastructure requires a major effort in terms of hardware and operational costs. Connectivity between the physical objects and their logical counterparts should be guaranteed wherever the DT is deployed and replicated (edge or cloud). The ability to process general AI functions in large clusters of computers as well as specialized AI functions in smaller edge machines can be envisaged and will require a substantial investment in hardware infrastructure and communication capabilities. Several strategies can be adopted for cost savings. A pay-per-use solution could be convenient if the AI-DT approach is not "pervasive" in the enterprise, while the construction of a large and capable in-house infrastructure could be considered if an AI-DT approach is fully adopted and pervades the processes of an enterprise.4 Consequently, infrastructure updates and renovation should be considered, as well as enough personnel to manage and maintain a complex distributed system. The strategy and the approach to build and maintain an AI-DT platform would thus be a major effort for a company.5 In the long run, the improvement of high-performance computing, the standardization and consolidation of AI tools and the increasing availability of stable quantum computing platforms could constitute a powerful basis for AI infrastructure [5] that could benefit the operation of AI-DT solutions.
Footnotes:
1. In https://blog.bitext.com/how-to-solve-data-scarcity-for-ai, the scarcity of data and some techniques to cope with this issue are briefly discussed.
2. This blog entry https://www.tahaluf.ae/blog/data-scarcity-in-artificial-intelligence-and-how-to-mitigate-them/ addresses some of the techniques useful to create, curate and label the data.
3. The blog entry https://azati.ai/how-much-does-it-cost-to-utilize-machine-learning-artificial-intelligence/ provides a high-level analysis of AI project phases and related costs.
4. The blog entry https://www.datacenterknowledge.com/machine-learning/what-s-best-computinginfrastructure-ai provides interesting considerations about the creation of a performing AI supporting infrastructure according to the major requirements of an enterprise.
5. The blog entry https://www.intel.com/content/www/us/en/developer/articles/technical/hands-on-ai-part-6-select-an-ai-computing-infrastructure.html provides a set of relevant questions or issues that could be considered in the design and operation phases of an AI-based large infrastructure.
• Data-driven attitude. Many enterprises have already taken steps towards the use of data analytics for improving their processes and their business approaches. Sometimes this attitude does not pervade the entire organization. The combination of AI and Digital Twin is particularly important for covering the entire life cycle of products and activities, allowing every organizational unit to be involved and proactive. This requires the measurement of activities and the input of different facets of the model behind the digital twin in terms of data. AI algorithms can improve productivity by determining the most appropriate patterns and behaviors for an entire organization. If this approach is undertaken by only a fraction of a company, many advantages could still arise, but with a minor global impact on the entire enterprise. Personnel should be involved and contribute to this change of attitude to take full advantage of the AI-DT combination.
• Consistency of Usage. Combining AI with the Digital Twin platform requires notable investments and changes in the attitude and possibly in the organization of an enterprise. The approach cannot be confined to a single experiment, a few attempts, or a limited period of time. It should be supported by best practices and by a continuous push to extend the usage and the change/adaptation of processes, procedures, and organizational structure to favor the large-scale adoption of the AI-DT approach. Applying the AI-DT approach intermittently or discontinuously will lead to a waste of effort and resources and a disruption of newer and older practices.
• Updates and Adaptation. The products or the physical artefacts may change over time due to a new design, the introduction of new features or simply a different usage by users. The combination of AI and DT should operate in such a way as to support "situation awareness". The situation here refers to changes in time due to
new conditions in which the DT is operating (these may be contingent and almost in real time) or because of constraints imposed by new regulations (these changes may be temporary, e.g., for a limited period of time, or permanent, i.e., until the next change of laws/regulations). The AI part should support the working of the DT and guide it towards a progressive adaptation (training and learning) to the new conditions. If major changes are needed due to a different enterprise strategy, the DT should be updated and adapted (and sometimes retrained) to pursue the new strategic goals. This means that the entire enterprise will act to update, maintain, and curate the data, the procedures, the logic, and the global strategy associated with the DT approach. In this case, the AI part should facilitate the identification of situation changes, the adaptation to such changes and the retraining towards the new goals.
• Coping with exceptions. A DT is a general model of an aggregated physical system. One of the desired properties is the generality of the model, and analyzing and implementing too many exceptions should be avoided. Having too many exceptions is a sign that the modelling is not particularly good, hence the need to introduce exceptions in order to cope with "events" that are likely to occur. The more complex a system is, the higher the possibility that something unexpected will occur. Situation awareness and symbolic reasoning can be useful to limit the bad consequences of unexpected events, as can the identification of policies that help to alleviate the problem. New strategies and new goals can introduce unexpected behavior into a consolidated DT system, hence the business policy for governing exceptions and fostering the smooth introduction of new goals should be constantly applied to improve and increase the resiliency of a DT.
• Adding complexity. As mentioned earlier, an AI-DT system should be planned as an "on-going" activity that will try to align the model to the evolution of the actual physical system it represents. Grasping the complexity of the physical system and the environment in which it is operating requires a continuous adjustment and improvement of the modelling according to the introduction of new elements in the model, the changed conditions of usage in the environment and/or the extension/change of elements and constituents of the physical system itself. These changes can be "adjustments", i.e., changes to better represent the physical system in the environment, or improvements, i.e., changes to increase the ability to represent the complexity of a system and its environment or the addition of new aggregated elements to the system itself. Over time, the complexity of the representation will increase due to the exceptions, errors, and changes needed to reflect new strategies. DT systems are rarely intended to be permanent and static. They require continuous improvement to be better aligned with "reality". This will require a constant effort by an enterprise to understand how to improve the system, or when to change it, in order to add new means to cope with complexity. AI is of great help in this effort, as Machine Learning can help in improving the reaction to the environment, while some forms of more complex reasoning can be implemented to allow a DT to react to complexity and its unexpected effects.
As seen, the applicability of AI within the context of DT is vast and profound. At an operational level, AI has a dual role with respect to the Digital Twin (see Fig. 16).
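As a minimal, hedged illustration of the scaling point raised under "Problem domain size" above (our own back-of-the-envelope sketch, not an analysis from the chapter), the number of pairwise correlations between N cooperating digital twins already grows quadratically, before any per-pair processing cost is considered:

def pairwise_checks(n_twins: int) -> int:
    # number of unordered twin pairs that a naive cross-correlation must visit
    return n_twins * (n_twins - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} twins -> {pairwise_checks(n):>12,} pairwise correlations")
# 10 -> 45; 100 -> 4,950; 1,000 -> 499,500; 10,000 -> 49,995,000

Any algorithm whose per-pair cost is itself super-linear in the data volume compounds this growth, which is why keeping the problem domain size under control matters.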
Fig. 16 The possible applications of AI technologies within the Digital Twin context (the Digital Twin model and functionalities are supported both by AI for the product lifecycle management of the Digital Twin (PLM support) and by AI for the DT's problem domain (direct usage))
Fig. 17 AI technologies for supporting the PLM of a generic Digital Twin (phases: ideation, design and implementation, usage, adaptation and extension; with AI for ideation and requirement capturing, AI helping the design and implementation phases, and AI for product monitoring, usage improvement, extension, and multi-domain process decision support)
AI is an enabling technology (direct usage) that helps in tackling the complex issues of modelling, describing, representing, and understanding a physical object's behavior within its environment. In this case, these AI technologies are strongly integrated within the DT platform, geared to meet the goals of the digital twin and to solve the specific issues of the problem domain. AI is also applicable to support the processes around the definition, improvement, and assessment of the Digital Twin approach [60] (product lifecycle management, PLM, support). It is important, in the application of AI technologies, to understand whether the usage is for a direct definition of Digital Twin functions or for the support of the lifecycle of the DT. With respect to the PLM of a digital twin, AI technologies or approaches can be applied, as indicated in Fig. 17, at different stages of the life cycle. In fact, AI can be used to support the ideation of the DT and the definition of the organizational processes behind it [15]. AI can prove useful in the design and implementation phases [6], especially for improving the quality of the software and for supporting the entire software engineering process. AI can also be used to cope with issues related to the usage and the extension of the Digital Twin by reasoning on the optimization of its performance [22] as well as on increasing the sustainability of the entire solution [24, 51].
Even if products and supporting tools are emerging (e.g., Amazon's AWS IoT TwinMaker6 or the Microsoft Digital Twin platform7), there is not yet a consolidated methodology or set of processes and related tools that can guarantee a consistent and error-free path to the development and use of digital twins. There is still the need to adopt a "trial and error" approach to align the available technologies to the actual needs of an enterprise and its problem domain. Figure 18 shows some activities to perform during the Digital Twin lifetime process in relation to the described list of issues and needs for each step of a simplified PLM. The adoption of the Digital Twin approach requires flexibility and a quick response to the required changes. Fine-tuning and updating of the platform will be a continuous effort during the entire lifetime of the Digital Twin approach. A first cycle of flexibility reveals the need to adjust the platform to the changing requirements of the enterprise with respect to how its products and solutions should be designed and implemented for the intended market. The adopted data representations, algorithms and methods require continuous upgrades to match the actual conditions of the market and their alignment to the product solutions represented as Digital Twins. This continuous improvement will occur throughout the usage phase, highlighting the need to mold the processes and the organizational models of the enterprise to benefit from the Digital Twin approach, as well as to adjust the data, representations, algorithms and methods to fit the enterprise's requirements and organization. This alignment will generate the need for changes and
Fig. 18 Steps for the exploitation of AI in a DT lifetime (the phases ideation, design and implementation, usage, and adaptation and extension are supported by data and representations, processes, requirements and objectives, models, algorithms and methods, platform requirements and objectives, complexity and adaptation to actual data, and the DT/AI technological platform)
6. Description available at https://aws.amazon.com/iot-twinmaker/
7. Description available at https://azure.microsoft.com/en-us/services/digital-twins/
improvements in the technological platform as well as in the "requirement capture" and understanding. Enterprises operate in a changing environment. Crises or changes in the market are always possible, and there is a constant need for adaptation and change due to new constraints or phenomena that could not have been considered in the design and implementation phases. The adaptation to the complexity of the environment for which the Digital Twin approach has been designed and implemented is a key feature that can have a significant impact on the technological platform as well as on its design and implementation. The ability of an enterprise to flexibly manage the changes to its technological platform as well as the data capture and analysis, and the robustness of the enterprise's processes supporting the usage of the Digital Twin platform, are essential for the successful use of this approach. A good starting solution, accompanied by a flexible strategy for improving it, is a key element for its effective exploitation. Starting out with a good technological platform without planning the processes and the efforts needed to improve it is not likely to be a winning strategy. The Digital Twin approach should encompass both the technological efforts for developing and maintaining a state-of-the-art platform and the continuous need to align and mold the internal processes to support accurate modeling of the products, their use and their evolution in the real world. The ability to react quickly and efficiently to unexpected changes in the market or in the environment is key for maintaining an effective Digital Twin approach.
8 Conclusion
The relationships between Artificial Intelligence and the Digital Twin are important, and their combination is an essential success factor for enterprises willing to exploit the power of these technologies. The approach to Artificial Intelligence can be integrated or native. In the first case, AI is integrated as an additional set of functions into the existing framework of the Digital Twin. In the latter case, the Digital Twin is built in combination with AI techniques and solutions. The difference is not trivial: in a DT with integrated AI, the choice is about what new required functionalities to integrate into an existing system, and how. The possibilities may be limited by the existing structure of the DT framework, so the usable AI technologies, or their full capabilities, may turn out to be limited. In the combined AI and DT approach, the choice of which AI technology to use drives the design and implementation of the DT framework, in order to fully exploit the AI capabilities with respect to the requirements of the project and the expected functionalities. From a high-level perspective, AI is an intrinsic part of a Digital Twin's definition, design and implementation. This pushes for a combined approach. As seen, many Artificial Intelligence technologies can be applicable, and they relate to the "type" of Digital Twin that needs to be created. Even for a reduced-functionality Digital Twin (e.g., a Passive DT), the suggestion is to design
and develop the Digital Twin architecture with APIs and componentization of its elements, in such a way as to easily extend the range of AI functionalities. This will make it easier to "elevate" from simple DTs to more complete ones that need more AI functions (e.g., an Autonomic DT). A consequence of this perspective is that the enterprise should consider it a strategic target to build processes and an internal attitude oriented towards Artificial Intelligence and Digital Twin representations. The changes required to the organization (as well as the costs) may be significant, and they should be carefully considered. In addition, as said, the DT approach is rarely confined to a small part of the enterprise. It is more a strategic, enterprise-wide solution with an impact on all the structures of an organization. In spite of the complexity of combining the technologies, the adoption of AI within a Digital Twin framework is the right choice because of the increasing difficulty of dealing with large representations of complicated (and complex) systems made of several thousands of components and parts. AI largely contributes to managing this complexity, allowing humans to focus on relevant decisions. AI is thus an essential set of instruments for the working of any Digital Twin.
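To make the recommendation about APIs and componentization concrete, here is a minimal sketch, entirely our own and not the chapter's reference design, of a DT whose AI functions are pluggable behind a small interface, so that a simple (e.g., Passive) DT can later be "elevated" by registering additional learning or reasoning components. All names (AIFunction, DigitalTwin, AnomalyDetector, register_ai) are hypothetical.

from abc import ABC, abstractmethod
from typing import Any, Dict, List

class AIFunction(ABC):
    @abstractmethod
    def process(self, state: Dict[str, Any]) -> Dict[str, Any]:
        """Consume the twin's current state and return derived information."""

class AnomalyDetector(AIFunction):          # example learning-style plug-in
    def process(self, state):
        temp = state.get("temperature", 0.0)
        return {"anomaly": temp > 80.0}     # placeholder threshold logic

class DigitalTwin:
    def __init__(self, twin_id: str):
        self.twin_id = twin_id
        self.state: Dict[str, Any] = {}
        self._ai_functions: List[AIFunction] = []

    def register_ai(self, fn: AIFunction) -> None:    # extension point (API)
        self._ai_functions.append(fn)

    def update(self, observation: Dict[str, Any]) -> Dict[str, Any]:
        self.state.update(observation)                 # reflection of the physical object
        derived: Dict[str, Any] = {}
        for fn in self._ai_functions:                  # optional, pluggable AI layer
            derived.update(fn.process(self.state))
        return derived

twin = DigitalTwin("pump-42")
twin.register_ai(AnomalyDetector())
print(twin.update({"temperature": 85.0}))   # {'anomaly': True}

The design choice illustrated here is simply that the twin's reflection of the physical object (the update of its state) is kept separate from the optional AI layer, which can grow from no functions at all to a full set of learning and reasoning components without changing the twin's core API.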
References 1. Abdiansah, A., & Wardoyo, R. (2015). Time complexity analysis of support vector machines (SVM) in LibSVM. International Journal Computer and Application., 128(3), 28–34. 2. Agostinelli, S., Cumo, F., Guidi, G., & Tomazzoli, C. (2020, June 9). The potential of digital twin model integrated with artificial intelligence systems. In 2020 IEEE international conference on environment and electrical engineering and 2020 IEEE industrial and commercial power systems Europe (EEEIC/I&CPS Europe) (pp. 1–6). IEEE. 3. Ali, J., Khan, R., Ahmad, N., & Maqsood, I. (2012). Random forests, and decision trees. International Journal of Computer Science Issues (IJCSI), 9(5), 272. 4. Allahloh, A.S., Sondkar, S.Y., & Mohammad, S. (2018, September). Implementation of online fuzzy controller for crude oil separator industry based on internet of things using LabVIEW and PIC microcontroller. In 2018 international conference on computing, power, and communication technologies (GUCON) (pp. 341–346). IEEE. 5. Banerjee, S., Foster, I., & Gropp, W. (2020). Infrastructure for artificial intelligence, quantum, and high performance computing. Available at https://cra.org/ccc/resources/ ccc-led-whitepapers/#2020-quadrennial-papers 6. Barenkamp, M., Rebstadt, J., & Thomas, O. (2020). Applications of AI in classical software engineering. AI Perspectives., 2(1), 1–5. 7. Bauer, P., Dueben, P. D., Hoefler, T., Quintino, T., Schulthess, T. C., & Wedi, N. P. (2021). The digital revolution of Earth-system science. Nature Computational Science., (2), 104–113. 8. Brosinsky, C., Westermann, D., & Krebs, R.. (2018, Jun 3). Recent and prospective developments in power system control centers: Adapting the digital twin technology for application in power system control centers. In 2018 IEEE international energy conference (ENERGYCON) (pp. 1–6). IEEE. 9. Chattaraman, V., Kwon, W. S., Gilbert, J. E., & Ross, K. (2019). Should AI-Based, conversational digital assistants employ social-or task-oriented interaction style? A task-competency and reciprocity perspective for older adults. Computers in Human Behavior, (90), 315–330.
10. Chin, R. (2015). The savvy separator series: Part 4. The ghosts of separators past, present, and future. Oil and Gas Facilities, 4(06), 18–23. 11. Chiolero, A. (2013). Big data in epidemiology: Too big to fail? Epidemiology, 24(6), 938–939. 12. Clarke, R. (2016). Big data, big risks. Information Systems Journal, 26(1), 77–90. 13. Conde, J., Munoz-Arcentales, A., Alonso, A., Lopez-Pernas, S., & Salvachua, J. (2021). Modeling digital twin data, and architecture: A building guide with FIWARE as enabling technology. IEEE Internet Computing, 26(3), 7–14. 14. Corbett, C. J. (2018). How sustainable is big data? Production and Operations Management, 27(9), 1685–1695. 15. Debowski, N., Tavanapour, N., & Bittner, E. A. (2022, January). Prototyping a Conversational Agent for AI-Supported Ideation in Organizational Creativity Processes. In HICSS (pp. 1–10). 16. Fonseca, Í. A., & Gaspar, H. M. (2021). Challenges when creating a cohesive digital twin ship: A data modelling perspective. Ship Technology Research, 68(2), 70–83. 17. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press. 18. Guizzardi, G. (2020). Ontology, ontologies, and the “I” of FAIR. Data Intelligence, 2(1–2), 181–191. 19. Hao, K. (2020, April 2). Tiny AI. MIT Technology Review [Online]. Available: https://www. technologyreview.com/technology/tiny-ai/ 20. Harper, K. E., Ganz, C., & Malakuti, S. (2019). Digital twin architecture and standards. IIC Journal of Innovation, 12, 72–83. 21. Hernandez-Leal, P., Kartal, B., & Taylor, M. E. (2019). A survey and critique of multiagent deep reinforcement learning. Autonomous Agents and Multi-Agent Systems, 33(6), 750–797. 22. Horyń W, Bielewicz M, Joks A. (2021). AI-Supported Decision-Making Process in Multidomain Military Operations. In Artificial intelligence and its contexts 2021 (pp. 93–107). Springer. 23. Hussein, A. A. (2020). How many old and new big data V’s: Characteristics, processing technology, and applications (BD1). International Journal of Application or Innovation in Engineering & Management, 9, 15–27. 24. Kadar, T., & Kadar, M. (2020 June 15). Sustainability is not enough: Towards AI supported regenerative design. In 2020 IEEE International Conference on Engineering, Technology, and Innovation (ICE/ITMC) (pp. 1–6). IEEE. 25. Kong, T., Hu, T., Zhou, T., & Ye, Y. (2021). Data construction method for the applications of workshop digital twin system. Journal of Manufacturing Systems, (58), 323–328. 26. Kritzinger, W., Karner, M., Traar, G., Henjes, J., & Sihn, W. (2018). Digital twin in manufacturing: A categorical literature review and classification. IFAC-PapersOnLine, 51(11), 1016–1022. 27. Kuehn, W. (2018). Digital twins for decision making in complex production and logistic enterprises. International Journal of Design & Nature and Ecodynamics, 13(3), 260–271. 28. Liao, R. F., Chan, C. W., Hromek, J., Huang, G. H., & He, L. (2008). Fuzzy logic control for a petroleum separation process. Engineering Applications of Artificial Intelligence., 21(6), 835–845. 29. Linardatos, P., Papastefanopoulos, V., & Kotsiantis, S. (2020). Explainable AI: A review of machine learning interpretability methods. Entropy, 23(1), 18. 30. Liu, Z., Meyendorf, N., & Mrad, N. (2018, Apr 20). The role of data fusion in predictive maintenance using digital twin. In AIP conference proceedings (Vol. 1949, No. 1, p. 020023). AIP Publishing LLC. 31. Liu, C. F., Huang, C. C., Wang, J. J., Kuo, K. M., & Chen, C. J. (2021, June). 
The Critical Factors Affecting the Deployment and Scaling of Healthcare AI: Viewpoint from an Experienced Medical Center. In Healthcare (Vol. 9, No. 6, p. 685). Multidisciplinary Digital Publishing Institute. 32. Louppe, G. (2014, July 28). Understanding random forests: From theory to practice. arXiv preprint arXiv:1407.7502.
33. Lu, Y., Liu, C., Kevin, I., Wang, K., Huang, H., & Xu, X. (2020). Digital twin-driven smart manufacturing: Connotation, reference model, applications, and research issues. Robotics and Computer-Integrated Manufacturing., 61, 101837. 34. Lucci, S., & Kopec, D. (2015). Artificial intelligence in the 21st century. Stylus Publishing, LLC. 35. Luo, W., Hu, T., Zhang, C., & Wei, Y. (2019). Digital twin for CNC machine tool: Modeling and using strategy. Journal of Ambient Intelligence and Humanized Computing, 10(3), 1129–1140. 36. Marcus G, & Davis E. (2019, September 10). Rebooting AI: Building artificial intelligence we can trust. Vintage. 37. Millimetric.ai. (2020, August). What to do when there’s too much data. Available at https://www. millimetric.ai/2020/08/10/data-driven-to-madness-what-to-do-when-theres-too-much-data/ 38. Minerva, R., Lee, G. M., & Crespi, N. (2020). Digital twin in the IoT context: A survey on technical features, scenarios, and architectural models. Proceedings of the IEEE, 108(10), 1785–1824. 39. Minerva, R., Awan, F. M., & Crespi, N. (2021). Exploiting digital twin as enablers for synthetic sensing. IEEE Internet Computing, 26(5), 61–67. 40. Murphy K. P. (2012, September 7). Machine learning: a probabilistic perspective. MIT press. 41. Myles, A. J., Feudale, R. N., Liu, Y., Woody, N. A., & Brown, S. D. (2004). An introduction to decision tree modeling. Journal of Chemometrics: A Journal of the Chemometrics Society., (6), 275–285. 42. Niggemann, O., Diedrich, A., Kuehnert, C., Pfannstiel, E., & Schraven, J. (2020, October 27). The DigitalTwin from an Artificial Intelligence Perspective. arXiv preprint arXiv:2010.14376. 43. Niggemann, O., Diedrich, A., Kühnert, C., Pfannstiel, E., & Schraven, J. (2021 May 10). A Generic DigitalTwin Model for Artificial Intelligence Applications. In 2021 4th IEEE International Conference on Industrial Cyber-Physical Systems (ICPS) (pp. 55–62). IEEE. 44. Nikolenko, S. I. (2021, January). Synthetic data for deep learning. Springer. 45. Park, K. T., Nam, Y. W., Lee, H. S., Im, S. J., Noh, S. D., Son, J. Y., & Kim, H. (2019). Design, and implementation of a digital twin application for a connected micro smart factory. International Journal of Computer Integrated Manufacturing., 32(6), 596–614. 46. Rasheed, A., San, O., & Kvamsdal, T. (2020). Digital twin: Values, challenges, and enablers from a modeling perspective. Ieee Access, 8, 21980–22012. 47. Redelinghuys, A. J., Basson, A. H., & Kruger, K. (2019). A six-layer architecture for the digital twin: A manufacturing case study implementation. Journal of Intelligent Manufacturing, 1–20. 48. Roh, Y., Heo, G., & Whang, S. E. (2019). A survey on data collection for machine learning: A big data-AI integration perspective. IEEE Transactions on Knowledge and Data Engineering, 33(4), 1328–1347. 49. Russell, S., & Norvig, P. (2002). Artificial intelligence: a modern approach. Pearson Education, Inc. 50. Sambasivan N, Kapania S, Highfill H, Akrong D, Paritosh P, & Aroyo LM. (2021, May 6). Everyone wants to do the model work, not the data work: Data Cascades in High-Stakes AI. In proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1–15). 51. Schebesch, K. B. (2019, September 20). The Interdependence of AI and Sustainability: Can AI Show a Path Toward Sustainability? In Griffiths School of Management and IT Annual Conference on Business, Entrepreneurship and Ethics (pp. 383–400). Springer. 52. Schroeder, G. N., Steinmetz, C., Pereira, C. E., & Espindola, D. B. (2016a). 
Digital twin data modeling with automationml and a communication methodology for data exchange. IFAC- PapersOnLine, 49(30), 12–17. 53. Sen, P. C., Hajra, M., & Ghosh, M. (2020). Supervised classification algorithms in machine learning: A survey and review. In Emerging technology in modelling and graphics 2020 (pp. 99–111). Springer. 54. Serpen, G., & Gao, Z. (2014). Complexity analysis of multilayer perceptron neural network embedded into a wireless sensor network. Procedia Computer Science, (36), 192–197. 55. Singh, S., Shehab, E., Higgins, N., Fowler, K., Reynolds, D., Erkoyuncu, J.A. & Gadd, P. (2020). Data management for developing digital twin ontology model. In Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, p. 0954405420978117.
56. Steinmetz, C., Rettberg, A., Ribeiro, F.G.C., Schroeder, G., & Pereira, C.E. (2018, November). Internet of things ontology for digital twin in cyber physical systems. In 2018 VIII Brazilian Symposium on Computing Systems Engineering (SBESC) (pp. 154–159). IEEE. 57. Tao, F., Sui, F., Liu, A., Qi, Q., Zhang, M., Song, B., Guo, Z., Lu, S. C., & Nee, A. Y. (2019). Digital twin-driven product design framework. International Journal of Production Research, 57(12), 3935–3953. 58. Van Engelen, J. E., & Hoos, H. H. (2020). A survey on semi-supervised learning. Machine Learning, 109(2), 373–440. 59. Villanueva, F. J., Aceña, O., Dorado, J., Cantarero, R., Bermejo, J. F., Rubio, A. (2020, September 7). On building support of digital twin concept for smart spaces. In 2020 IEEE International Conference on Human-Machine Systems (ICHMS) (pp. 1–4). IEEE. 60. Wang, L., Liu, Z., Liu, A., & Tao, F. (2021). Artificial intelligence in product lifecycle management. The International Journal of Advanced Manufacturing Technology, 114(3), 771–796. 61. Weiss, K., Khoshgoftaar, T. M., & Wang, D. (2016). A survey of transfer learning. Journal of Big Data, (1), 1–40. 62. Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., Blomberg, N., Boiten, J. W., da Silva Santos, L. B., Bourne, P. E., & Bouwman, J. (2016). The FAIR guiding principles for scientific data management and stewardship. Scientific Data, 3(1), 1–9. 63. Zhuang, F., Qi, Z., Duan, K., Xi, D., Zhu, Y., Zhu, H., Xiong, H., & He, Q. (2020). A comprehensive survey on transfer learning. Proceedings of the IEEE, 109(1), 43–76. Roberto Minerva, associate professor at Telecom SudParis, Institut Polytechnique de Paris, holds a Ph.D in Computer Science and Telecommunications, and a Master Degree in Computer Science. He currently conducts research in the area of Edge Computing, Digital Twin, Internet of Things and Artificial Intelligence applications. During 2014–2016, he was the Chairman of the IEEE IoT Initiative. Roberto has been for several years in TIMLab as research manager for Advanced Software Architectures. He is authors of several papers published in international journals, conferences, and books.
Prof. Noel Crespi holds Master's degrees from the Universities of Orsay (Paris 11) and Kent (UK), a diplome d'ingénieur from Telecom Paris, and a Ph.D. and an Habilitation from Sorbonne University. From 1993 he worked at CLIP, Bouygues Telecom and then at Orange Labs in 1995. In 1999, he joined Nortel Networks as telephony program manager, architecting core network products for the EMEA region. He joined Institut Mines-Telecom, Telecom SudParis in 2002 and is currently Professor and Program Director at Institut Polytechnique de Paris, leading the Data Intelligence and Communication Engineering Lab. He coordinates the standardization activities for Institut Mines-Telecom at ITU-T and ETSI. He is also an adjunct professor at KAIST (South Korea), a guest researcher at the University of Goettingen (Germany) and an affiliate professor at Concordia University (Canada). His current research interests are in Softwarisation, Artificial Intelligence and the Internet of Things. http://noelcrespi.wp.tem-tsp.eu/.
Reza Farahbakhsh (Member, IEEE) received the Ph.D. degree from Paris VI (UPMC) jointly with Institut Mines-Telecom, Telecom SudParis (CNRS Lab UMR5157), in 2015. He is currently Adjunct Assistant Professor at Institut Mines-Telecom, Telecom SudParis, and Lead Data Scientist at TotalEnergies SE. His research interests include AI and data science at scale, online social networks and user behavior analysis.
Dr. Faraz Malik Awan was born in Islamabad, Pakistan. He is currently working as a Data Scientist at the Urban Big Data Centre (UBDC), UK. He received his Ph.D. from Institut Polytechnique de Paris. He completed his master's degree, majoring in System Software, at Chung-Ang University (CAU), South Korea, and his bachelor's degree in Computer Science at COMSATS University Islamabad, Pakistan. He is the author/coauthor of five publications. His areas of interest include Smart Cities, Intelligent Transportation Systems, Natural Language Processing, and the application of Artificial Intelligence in different fields, including sociology & demography, and manufacturing engineering.
A Graph-Based Cross-Vertical Digital Twin Platform for Complex Cyber-Physical Systems
Thierry Coupaye, Sébastien Bolle, Sylvie Derrien, Pauline Folz, Pierre Meye, Gilles Privat, and Philippe Raïpin-Parvedy
Abstract The intent of this chapter is to demonstrate the value of a transversal (i.e., cross-vertical and multi-actor) digital twin platform for Internet of Things (IoT) applications and complex cyber-physical systems at large (e.g., large-scale infrastructures such as telecommunication or electricity distribution networks), around the Thing'in the Future experimental digital twin platform developed at Orange. Several real-life illustrative use cases in various domains (smart building, smart factory, smart city, and telecommunication infrastructures), developed by Orange and partners, are introduced. Main design, architectural and technological choices, which sustain this ambition, are discussed: graph-based structural and semantic modelling of systems of systems, large-scale graph storage, platform distribution and federation.
Keywords Digital Twin · Internet of Things · IoT · Systems of systems · Graph modelling · Graph database · Semantics · Ontology · Multisided platform
1 Introduction
This chapter reports on experiments with the Orange "Thing'in the Future" [1] experimental digital twin platform (where we use Thing'in as shorthand in the following text). The intent is to discuss, and hopefully demonstrate, the value of a universal, multi- and even cross-vertical, multi-actor digital twin platform for Internet of Things (IoT) applications (e.g., in Smart Building, City, Industry, Transports and Logistics, Agriculture …) and complex cyber-physical systems at large (e.g., telecommunication infrastructures).
T. Coupaye (*) · S. Bolle · P. Folz · G. Privat, Orange Innovation, Grenoble, France; e-mail: [email protected]
S. Derrien · P. Meye · P. Raïpin-Parvedy, Orange Innovation, Rennes, France
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_13
When looking at the historical emergence of the concept of digital twin, essentially in an industrial context associated with computer-aided design and manufacturing (CAD/CAM) technologies, and at its more recent explosion (cf. other chapters in this book), largely associated with the advent of the IoT, it is noticeable that most digital twins today are constructed in an ad-hoc fashion, or thanks to specialized products on vertical markets such as manufacturing (e.g. airplanes, automotive, shipbuilding), building construction and technical management (e.g. nuclear reactor design, energy management), city and territories (e.g. transports, gas, water and electrical networks), health and life science. A growing fragmentation of digital twin technologies could hinder their further technological development and preclude the kind of multi-actor collaboration that is required when addressing complex multilevel systems of systems such as factories, buildings or cities at large. Interoperability and the support of various business interactions, materialized by the exchange of technical (digital twin) data between actors inside a vertical domain and between different vertical domains within a single platform, are capabilities that specialized vertical digital twin platforms cannot offer; they are at the core of the experimentations with the Thing'in platform, and the core subject of this chapter. This chapter is organized as follows. Section II motivates the need and value of a graph-based, cross-vertical, and multi-actor digital twin platform. Section III illustrates this value thanks to several real-life use cases experimented by Orange and partners in different vertical domains: smart building, smart factory, smart city, and telecommunication infrastructures. Sections IV and V respectively discuss the main technical elements in terms of graph-modelling of digital twins of systems of systems, and then the main Thing'in platform design and implementation choices which sustain this ambition of a cross-vertical and multi-actor digital twin platform.
2 Rationale for a Cross-Domain Multi-sided Digital Twin Platform
2.1 An Experimental Natively Multi-Level Graph-Based Digital Twin Platform
Orange's vision of the future of the Internet of Things goes far beyond the mere extension of the current Internet to so-called connected objects (sensors, actuators): it designates a much deeper fusion of the current digital and physical worlds into a brand new world of digital services deployed all around us in the physical world (an "ambient intelligence"), interacting with the physical world and with humans in their daily activities at home, at work, in transports, in the city and the countryside, etc. Digital twins are a cornerstone of this vision of a cyber-physical world, for they represent a bridge between the physical and digital worlds. They allow for a digital description of the physical environment in which sensors and actuators are deployed. The initial development of Thing'in was undertaken to experiment with new technologies, use cases, and possibly business models related to digital twins and the IoT.
Thing'in is an online platform (portal and APIs) that exposes a graph (more details in Section IV) of digital twins of entities (objects and systems) of the physical world, where the graph itself constitutes a higher-level "aggregated" ("multi-level") digital twin. Users can create and manipulate (the graph of) these digital twins and associated information (function, properties, state, location, shape, behavior, etc.), as well as get access to the physical objects they represent through sensors and actuators. Above all, this multi-level graph captures structural relationships between these objects and the systems they make up, at multiple levels (single entities, systems, systems of systems), together with corresponding semantic knowledge. Thing'in provides the "knowledge base" which describes in a homogeneous way the physical world (e.g., buildings, cities) in which sensors and connected objects are deployed. The platform also offers an extensible catalog of tools for service developers (e.g., loading of data from other platforms, 2D and 3D visualization, projection on OpenStreetMap and other cartographic supports, reasoning, learning and inference of new information) to help them build new vertical services on top of the platform that will improve the efficiency of the processes and systems considered (e.g., buildings, factories, cities, logistics chains, urban mobility).
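As a purely illustrative sketch of the kind of multi-level graph described above (plain Python of our own, not the Thing'in data model, ontology, or API), digital twins can be pictured as nodes carrying semantic types and properties, with typed edges such as "isPartOf" or "isIn" capturing the structural relationships from single entities up to systems of systems:

from collections import defaultdict

# Hypothetical node identifiers, types, and relation names for illustration only.
nodes = {
    "building:B1":  {"type": "Building", "city": "Meylan"},
    "floor:B1-2":   {"type": "Floor"},
    "room:B1-2-21": {"type": "MeetingRoom", "capacity": 8},
    "sensor:T-17":  {"type": "TemperatureSensor", "unit": "celsius"},
}
edges = [
    ("floor:B1-2",   "isPartOf", "building:B1"),
    ("room:B1-2-21", "isPartOf", "floor:B1-2"),
    ("sensor:T-17",  "isIn",     "room:B1-2-21"),
]

out = defaultdict(list)
for src, rel, dst in edges:
    out[src].append((rel, dst))

def containers_of(node: str):
    """Walk containment edges upwards: the aggregated (multi-level) view of a twin."""
    chain = []
    while True:
        parents = [dst for rel, dst in out[node] if rel in ("isPartOf", "isIn")]
        if not parents:
            return chain
        node = parents[0]
        chain.append(node)

print(containers_of("sensor:T-17"))
# ['room:B1-2-21', 'floor:B1-2', 'building:B1']

Walking the containment edges upwards is what makes the graph itself behave as a higher-level, aggregated digital twin: the sensor's context (room, floor, building) is recovered from the structure rather than stored redundantly on the sensor.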
2.2 A Multi-actor and Cross-Vertical Platform
An analysis of the market for the ongoing new wave of digital twins devoted to the smart city vertical [17] reveals that the positioning of providers of digital twin technologies is strongly correlated with their original business domain1: digital twins are seen as a natural extension of their activities. Actors coming from industry (manufacturing) are typically centered around 3D models of products, machines, components, and systems coming from CAD/CAM and Product Lifecycle Management (PLM) technologies. Actors coming from the building domain are centered around Building Information Models (BIM), which are basically (digital) plans of buildings. Actors coming from Geographic Information Systems tend naturally to build digital twins above cartographic representations, i.e., maps of the physical world. Actors coming from the utility domain (water, gas, electricity) base their digital twins on infrastructure planning tools, often based again on 2D or 3D modelling. It is worth noticing that the current digital twin market [2] is very different from the more mature IoT market, where the need for generic IoT platforms that can manage connected devices and raw sensor data independently from the types of devices, data, and protocols, and finally from the applicative/vertical domain concerned (industry, building, city, etc.), has become obvious over time, leading to almost only generic (although rather low-level) IoT platforms on the market today.
1. With the exception of IT actors such as Microsoft with its Azure Digital Twins platform. We come back to that hereafter.
When experimenting with use cases of digital twins in multiple vertical domains in parallel with the Thing'in platform,2 the need for sharing a common description format for digital twins, and for sharing and exchanging digital twins themselves between actors (e.g., different professions in a building, or during different phases in the construction, management, and maintenance of a building), emerges as a pivotal requirement in these many apparently different use cases. It is this observation that has guided the development of the platform towards a transversal platform that can target and link different vertical domains (e.g., building and city, building and industry, industry and transport/logistics, etc.), and that can enable multiple actors to interact within, or between, vertical ecosystems to share digital twins (and associated data). This need is illustrated and further analyzed in Section III, while the technical impacts on the design of the platform are discussed in Sections IV and V.
3 Cross-Vertical and Multi-actor Use Cases: Building, Industry, City, Telecom
3.1 Digital Twins for Smart Building
3.1.1 Context
Smart Building covers many aspects. Among them are logistics and maintenance (from statically scheduled to on-demand organization), building user comfort and wellness (from today's buildings based on simple automatisms to benevolent buildings able to consider contextual situations) and energy management (for example, the building domain accounts for 44% of the overall energy consumption in France). Operational building management is currently not as efficient as it could be because each technical team uses its own Building Management System (BMS) independently. Teams work in their own separate "silos", with expertise in one area but with poor knowledge of and little to no interaction with the other areas. These silos need to be opened so that teams can collaborate better and with greater operational efficiency.
3.1.2 Aggregating and Sharing Digital Twins Between Multiple Building Professions and Services
Most recent solutions in the Smart Building domain are based on Building Information Modelling (BIM). BIM is well designed for modelling the infrastructure (walls, rooms, windows, floors, roof, etc.). However, a building is also a
2. And perhaps also having an IT and IoT background, such as Microsoft.
composition of technical systems (energy, HVAC, water, IT, etc.) which need to be included in the building Digital Twin. These different dimensions of the building cannot be easily captured in a single BIM. What is needed is a multi-dimensional representation of the building, including and making explicit the relations between the different dimensions. Additional value can also be brought if the building is not only considered as a single entity but also as integrated with, and interacting with, its environment: other buildings or facilities in the neighborhood (e.g., buildings on an industrial site), city infrastructures the building is connected to, etc. A homogeneous representation of such a global digital twin, composed of several "sub digital twins", is needed. Both the multi-dimensional representation of the building itself and its integration and interaction with its neighborhood require a single platform. As a multi-actor and cross-domain platform, such a platform would enable Digital Twin sharing across the building professions and services.
3.1.3 Experimentations
Thanks to the Internet of Things, Smart Buildings are more and more instrumented with various sensors and actuators related to comfort for building users (temperature, shutter, or light management, etc.), energy efficiency for building managers, and security for all. Such connected environments provide a first level of automation, with simple static and pre-defined scenarios, based for example on rules designed by a building technical manager. Thanks to a building Digital Twin, more advanced automatic features can be provided, benefiting from the contextual information of the Digital Twin. Several experimentations of a "benevolent building" have been implemented in an Orange Labs building in Meylan, France, with the Thing'in platform:
– Managing security and safety of users: this use case demonstrates how a Digital Twin can be the informational base for ambient intelligence. The security and safety services are based on Artificial Intelligence (AI) software components distributed at the Edge in the building. Contextual information from the Digital Twin can feed these AI components, for example which rooms must be secured (e.g., room doors closed) when an event occurs in a given part of the building.
– Monitoring comfort of building users: this use case demonstrates the complementarity between a Digital Twin platform and classical Internet of Things (IoT) platforms dedicated to data collection from sensors. The comfort monitoring service aggregates in the Digital Twin a comfort status for each room. The status is based on the Digital Twin structural graph (e.g., building topology), contextual information, and raw data provided by temperature and humidity sensors through an IoT platform.
– Automatically checking building compliance with norms and regulations: this use case demonstrates how to take advantage of the semantic graph of the Digital Twin. Assessing building conformance can be a time-consuming and
complex process. Moreover, norms evolve, as do the building itself and its usage. The experimentation has focused on security conformance related to the number and capacity of secured waiting areas, including accessibility constraints. In the experimentation, the automatic checking is performed by reasoning on the knowledge graph extracted from the building Digital Twin (a toy illustration of such a rule check is sketched after this list).
– Sharing objects for predictive maintenance: this use case demonstrates the benefit of a multi-actor Digital Twin. Currently, most sensors are deployed in a building for a vertical need. For example, presence sensors can be deployed by the team in charge of building energy for an energy efficiency service based on presence information. This presence information can also be useful for another team, the facility management team, to identify the most used rooms or areas in the building. The facility management team can optimize its planning based on this information coming from another vertical of the building.
These experimentations are examples of the benefit of the multi-level aspect of the Digital Twin graph, whether the structural one (e.g., the topology of the building), the semantic one (e.g., semantic reasoning), or both.
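The following toy sketch, with invented data and an invented rule (it is neither the actual norm nor the Thing'in reasoning engine), illustrates the kind of automatic conformance check mentioned above: each floor of the building twin is verified to declare a secured waiting area whose capacity covers its occupants.

# Illustrative data: floor twins with occupant counts and declared waiting areas.
floors = {
    "floor:B1-1": {"occupants": 40, "waiting_areas": [{"secured": True, "capacity": 45}]},
    "floor:B1-2": {"occupants": 60, "waiting_areas": [{"secured": True, "capacity": 30}]},
    "floor:B1-3": {"occupants": 25, "waiting_areas": []},
}

def check_secured_waiting_areas(floors):
    findings = []
    for fid, f in floors.items():
        capacity = sum(a["capacity"] for a in f["waiting_areas"] if a["secured"])
        if not f["waiting_areas"]:
            findings.append(f"{fid}: no waiting area declared")
        elif capacity < f["occupants"]:
            findings.append(f"{fid}: secured capacity {capacity} < occupants {f['occupants']}")
    return findings or ["compliant"]

print(check_secured_waiting_areas(floors))
# ['floor:B1-2: secured capacity 30 < occupants 60', 'floor:B1-3: no waiting area declared']

In the real experimentation the rule is evaluated by reasoning over the semantic knowledge graph rather than over hand-written dictionaries, but the principle is the same: the check can be re-run whenever the building twin or the norm changes.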
3.2 Digital Twins for Smart Industry
3.2.1 Context
As Industry 4.0 takes shape, human operators must deal with complex daily tasks, new environments, and immersive technologies relying on virtual or augmented reality contents. On the one hand, this can be stressful because it requires agility and flexibility from operators to adapt themselves to this new environment. On the other hand, it can help operators in their daily tasks. They can learn from the digital twin, practice, and simulate actions before doing them in reality… Finally, the digital twin helps operators become better, more efficient, and more confident in their daily work in a complex environment.
3.2.2 Modelling the Smart Factory Buildings, Production Chains and Flows Inside and Outside the Factory
Equipped with multiple sensors and actuators, a production line is today fully manageable and adaptable to demand. The digital twin is not only a model of a complete production line; it also includes the flows of incoming and outgoing raw materials or goods. Nor is the digital twin only a model of a factory containing multiple pieces of (connected) equipment: it is, more generally, a composition of digital twins of buildings, furniture, roads, parking lots… i.e., connected and non-connected objects which are linked to each other by different kinds of relations. Modelling homogeneously the smart factory building, its main production
areas, its furniture, boxes, pallets, and machine tools, as well as the stakeholders (i.e., suppliers, subcontractors), is key to obtaining a fully defined digital twin of a global industrial site made of heterogeneous elements. A digital twin platform such as Thing'in offers all the required features to model, homogeneously in a pivotal graph-based model, the factory digital twin with its different elements: buildings, production chain equipment (including the digital equipment, e.g., for tracking objects, products and tools, during production cycles), roads, parking lots, etc. Digital twin tracking in Thing'in is time- and location-aware. Combining a structural and semantic graph with fine-grained Building Information Model capabilities, Thing'in allows for indoor and outdoor device tracking. It detects when the status of a twin changes and notifies its operators, for instance when a production piece changes location or when its processing deadline has been exceeded. All the locations of the digital twins are historicized to improve the traceability of the produced parts. The temporal graph of Thing'in allows the analysis of past events that could make the production more efficient. All the digital twins are managed by a set of interconnected Thing'in servers that are geo-distributed in the network infrastructure (cf. Section V). The digital twin platform enables cross-enterprise collaboration through the sharing of (parts of) digital twins between users of different Thing'in instances. It allows the digital twin owners to share their private objects with their sub-contractors or their suppliers by extending their visibility.
3.2.3 Experimentation
An experimentation is in progress in a real industrial context within a factory located in Brittany (France). The main goals to achieve with the digital twin during the experimentation are:
• improve productivity, optimize quality and On-Time Delivery (OTD KPI), and improve the availability ratio (predictive maintenance, optimized maintenance cycle durations);
• help the operators and all the people who work in the factory in their daily tasks without adding stress related to the digitization of the factory;
• easily share the digital twin information with many actors (sub-contractors, suppliers of raw materials, suppliers of mechanical parts for machine tools, customers, recyclers).
Before launching a production, the production manager must check that the required materials for manufacturing are available (in stock). Once it is launched, he must be able to always keep an eye on the work orders and locate the produced objects, whether they are being manufactured in the factory or at a subcontractor level. Thanks to the digital twin of the production plant, he can search and locate the work orders on a 2D plan and receive notifications whenever an object stays for too long in an area. As the digital twin is part of a temporal graph, each event is recorded with a timestamp. For example, the beginning and the end time of the cleaning cycle of
For example, the beginning and end times of the cleaning cycle of a production machine are stored in its digital twin so that optimized maintenance cycle durations can be planned. By counting the cumulative machining times, it is possible to organize predictive maintenance and improve the availability ratio. Touch pads have been introduced in the plant to allow operators to easily view, modify or annotate the work orders, consult the plan of the part being machined, check which program to use and keep all the documentation up to date. Alarms generated by the production machines are managed in real time by an IoT platform, and the digital twin status is updated accordingly. A list of reported breakdowns is stored with their causes to improve quality. Operators can use the tablets to read the list of faults that have occurred, and this list of failures can be manually enriched by the operators. The experiment also includes the collection of waste containers by recyclers (e.g., a metal-chips collector) in a multi-actor and cross-domain scenario (manufacturing and recycling), cf. Fig. 2. As soon as the digital twin of a waste container indicates it is 80% full, the information is shared by the production manager with the recyclers, so that they are immediately notified that they have containers to collect, together with all useful data (e.g., the location of the containers on site), thanks to the shared digital twin.
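To make this sharing rule more concrete, the following minimal Python sketch illustrates how a fill-level property on a waste-container twin could trigger a notification to the actors the twin is shared with. All names, the data structure and the notification function are hypothetical illustrations, not part of the Thing'in API.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ContainerTwin:
    """Simplified stand-in for the digital twin of a waste container."""
    twin_id: str
    location: str
    fill_level: float = 0.0                               # 0.0 (empty) .. 1.0 (full)
    shared_with: List[str] = field(default_factory=list)  # e.g. recycler accounts

FILL_THRESHOLD = 0.8  # the "80% full" rule described in the experiment above

def update_fill_level(twin: ContainerTwin, new_level: float, notify) -> None:
    """Update the twin and notify every shared actor once the threshold is crossed."""
    previously_below = twin.fill_level < FILL_THRESHOLD
    twin.fill_level = new_level
    if previously_below and new_level >= FILL_THRESHOLD:
        for actor in twin.shared_with:
            notify(actor, f"Container {twin.twin_id} at {twin.location} "
                          f"is {new_level:.0%} full and ready for collection")

# Example usage with a stand-in notification channel
container = ContainerTwin("urn:example:container:42", "workshop-B",
                          shared_with=["recycler-metal-chips"])
update_fill_level(container, 0.85, notify=lambda actor, msg: print(actor, "<-", msg))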
3.3 Digital Twins for Telecommunication Infrastructures
3.3.1 Context
As a telecom operator, Orange operates various fixed and mobile networks in many countries. Altogether, Orange manages millions of pieces of telecom equipment.
Fig. 1 Objects deployed in a room mapped on a 2D map of a building (left) and the graph representation (right)
Fig. 2 Dashboards (applicative view) presenting information about waste containers as seen by the factory (left) and by recyclers (right) based on shared data within the digital twin
With the evolution of technologies, networks are becoming more and more complex, and the complexity of managing them is increasing accordingly. The evolution of the ecosystem adds further complexity, with several telecom operators per country, new actors or competitors, and local partnerships between telecom operators and public organizations to provide broadband connectivity to everyone in the territories they oversee. Managing networks is no longer the exclusive concern of one national operator; it requires collaboration between various actors, either internally within an operator or in an ecosystem gathering telecom operators, public organizations and territories, and private service providers or contractors. New approaches and systems are needed to face this increasing complexity.
3.3.2 Aggregating and Sharing Digital Twins in a Telecom Infrastructure Ecosystem
An example of sharing network information between different actors comes with the access network, which connects customer premises to network services. Orange oversees the management of a subset of the French access network. It is composed of multiple pieces of equipment such as telephone poles, network cabinets, and trapdoors for underground chambers. About 100 telephone poles fall every day. In cities, fallen poles are quickly reported, but this may not be the case in the countryside. Moreover, depending on where the fallen telephone pole is located, it can have severe consequences. For example, a fallen telephone pole on a hairpin curve in the road can cause a car accident. In the same way, a telephone pole connecting a hospital is more critical than one connecting a private residential house. Orange shares information about network cabinets with other Internet Service Providers (ISPs) or subcontractors. Currently, cabinets have mechanical locks which can be opened with a key. When Orange installs a fiber-optic street cabinet, it gives a copy of the key to the other ISPs so that they or their subcontractors can access the cabinet. However, given the multiplicity of interventions by different companies, it is difficult to know who performed the last operation. Not counting incivilities or malicious acts, around 200 such cases were reported in a single French department in 2020. With trapdoors, Orange faces the same kind of issue: the difficulty of knowing whether a trapdoor is open or not. Through trapdoors, technicians have underground access to copper and fiber cables; they can access them to perform maintenance operations or to add new cables. However, once a trapdoor is open, anyone can access it, and some thieves take this opportunity to steal copper cables. These robberies generate millions of euros of damage, and clients lose access to the service. The issues presented above with telecom equipment can first be addressed with connected objects that monitor equipment status: tilt sensors for telephone pole inclination, connected locks for network cabinets to notify when a door is open and keep a log of opening actions, and door sensors for trapdoors to know when a trapdoor is opened. But this information also needs to be shared between Orange and its partners or contractors. This can be done thanks to a shared, multi-actor telecom digital twin storing the description of the network equipment and its status. Moreover, if a city shares its own digital twin with Orange (cf. Sect. 3.4), the combination of those two data sources can enrich the context of the telecom equipment. For example, it can help in evaluating the impact of an issue on a telephone pole near a hospital or a school. The approach can of course be extended to other telecom operators. Orange was previously France Telecom, the historical national telecom operator, but today telecom networks can be shared between several telecom operators. Therefore, Orange is no longer in charge of all the telecom equipment in the French network; the responsibility is shared between the different telecom operators. However, for historical reasons, an incident on a piece of telecom equipment is often reported to Orange, even if that equipment is not under Orange's responsibility.
Being able to share information between telecom operators through digital twins managed by the Thing'in platform could help solve this issue. In addition, sharing digital twins of telecom equipment can benefit both other telecom operators and city stakeholders; cf. the next section (Sect. 3.4) on shared digital twins for incident handling in a city.
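The following minimal Python sketch illustrates the combination just described: a tilt-sensor reading updates a telephone-pole twin, and context shared through the city twin (e.g., the pole serving a hospital) raises the priority of the resulting incident. The class, field names and thresholds are illustrative assumptions, not the actual Thing'in data model.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PoleTwin:
    """Illustrative twin of a telephone pole; field names are hypothetical."""
    twin_id: str
    tilt_degrees: float = 0.0
    serves: List[str] = field(default_factory=list)  # context drawn from the shared city twin

CRITICAL_SITES = {"hospital", "school"}
FALLEN_TILT = 45.0  # degrees; threshold chosen purely for illustration

def assess_incident(pole: PoleTwin) -> str:
    """Derive an incident priority by combining sensor state with shared city context."""
    if pole.tilt_degrees < FALLEN_TILT:
        return "none"
    return "high" if CRITICAL_SITES.intersection(pole.serves) else "normal"

pole = PoleTwin("urn:example:pole:d38:0417", tilt_degrees=62.0, serves=["hospital"])
print(assess_incident(pole))  # -> "high": a fallen pole serving a hospital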
3.4 Digital Twins for Smart City and Territory
3.4.1 Context
Equipment in the public space of French cities, such as roads, trees, benches, telephone poles, electricity poles, etc., falls under numerous actors/operators: public sector entities at different administrative layers (city council, metropolis, county, region) and private sector providers such as telecom, electricity, water, and gas operators. Usually in cities the roads are under the competency of the metropolis or the county. Depending on their location, trees fall under the responsibility of the city, the metropolis, or public or private parks, whereas telephone poles and electricity poles are maintained by telecom operators and electricity operators. Because of this multiplicity of actors, when someone wants to report an issue on a piece of equipment in the public space, it is difficult for citizens, or even for some providers, to know who oversees what and to whom to address their requests.
3.4.2 Aggregating and Sharing Digital Twins Between Multiple Sectorial Actors
A generic and multi-sided platform such as the Thing'in platform can help to solve this issue of multi-actor responsibilities. It can help to break silos between actors operating on the same territory. Indeed, each actor can share information about the equipment it oversees with other actors on the same territory, and even more broadly (with any user to whom access rights on the Thing'in platform are granted). Equipment from the public space is represented within a city digital twin, cf. Fig. 3. The digital twin of a piece of equipment describes its characteristics: for example, a bench will have properties such as composition (wood, stone, metal…) and state of use (good, medium, bad…).
Fig. 3 Telephone pole (left), network cabinet (middle) and underground chamber and its trapdoor (right)
Fig. 4 Equipment data in an area of the city of Meylan (streetlamps in light blue and telephone poles in orange), coming from the information systems of multiple actors and aggregated in a shared digital twin of the city
Each equipment's digital twin also has a property identifying its manager. Beyond that, this shared digital representation of the city allows third parties to develop services for a territory, rather than having one service per provider or per usage.
3.4.3 Experimentation
An experiment is underway in the city of Meylan in France. The city of Meylan provides the Thing'in digital twin platform with data from its Geographic Information System (GIS) describing equipment in the city such as streetlights, trees, and benches. A prototype was developed to make it easier for the city's citizens to report incidents. Digital twins of the provided equipment were created in Thing'in, each recording the actor in charge of the equipment. Data about the same territory coming from our own GIS describing telephone poles (described in the previous section) was also injected into Thing'in. Therefore, citizens living or working in Meylan have a unique entry point to report any incident on equipment in the public space; as shown in Fig. 4, the user can directly see all the equipment around him. Thanks to the data stored in Thing'in, reported incidents are directly routed to the right actor.
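The routing mechanism can be sketched as follows: each twin records the actor in charge of the equipment, so a single entry point can dispatch a citizen's report to the right back office. The registry layout, identifiers and actor names below are illustrative only, not the Thing'in data model.

# Minimal routing sketch over a toy equipment registry (all values hypothetical).
EQUIPMENT_REGISTRY = {
    "urn:example:meylan:streetlight:105": {"kind": "streetlight", "manager": "city-of-meylan"},
    "urn:example:meylan:pole:2214":       {"kind": "telephone pole", "manager": "orange"},
    "urn:example:meylan:bench:17":        {"kind": "bench", "manager": "metropolis"},
}

def route_incident(equipment_id: str, description: str) -> str:
    """Return the actor to whom the incident report should be forwarded."""
    twin = EQUIPMENT_REGISTRY.get(equipment_id)
    if twin is None:
        return "unknown-equipment-desk"
    print(f"Forwarding '{description}' on {twin['kind']} to {twin['manager']}")
    return twin["manager"]

route_incident("urn:example:meylan:pole:2214", "pole leaning dangerously")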
4 Advanced Graph Modelling for Systems of Systems Digital Twins
4.1 NGSI-LD as the Basis for Digital Twin Graphs
With classical (3D-inspired) digital twins in mind, graphs might not appear an obvious modelling choice. Yet they have a storied and multifarious record, extending over almost three centuries, as structural models for all kinds of systems, across many branches of science and engineering.
A more recent development, of truly revolutionary import, has been their use, in both the natural and social sciences, as models for complex systems, spawning the new transdisciplinary field of network science [3]. Their use for capturing multi-level structural models of systems of systems [4], viewed as the engineering counterparts of complex systems, is a key element of the approach proposed here. Taking this broad view of graph models, they are the common-denominator model of choice for digital twins, given the emphasis on open systems and cross-silo information sharing outlined in the previous sections of this chapter. For this we need to draw upon a wide-ranging and long-established lore of graph-modeling know-how, to represent and directly match the structure of physical systems as multi-level digital twins. These cyber-physical graphs [5] capturing system structure should make up the core of the graph and platform, yet they should still be just a kind of skeleton for holding and giving access to more classical pieces of data, metadata, and attached or referenced information that "flesh out the skeleton". These requirements set out the framework for choosing the best graph model for capturing digital twins in the sense we envision [7, 8].
4.1.1 Knowledge Graphs and Entity-Attribute-Value: Not Expressive Enough for Digital Twins
In view of the huge spectrum of applications addressed by graph models, it is somewhat baffling that their most recent and narrow derivatives, used for knowledge representation, have come to acquire such prominence across information technology. RDF graphs have emerged as the de facto common-denominator model for this, especially through their recommended use for linked open data datasets, the "web of data". Yet it should be clear that RDF graphs do not model physical systems, as a digital twin should: instead, they just store weakly structured information about these systems, as sets of logical predicates. RDF graphs cannot as such make up the core of a digital twin graph, yet they may and should be associated with DT graphs as a "semantic overlay", in a way that we describe in Sect. 4.3 below, which deals with semantics. Coming either from the basic IoT side or from the side of low-level physical models, data semanticization is a "low-hanging fruit" for information interoperability, which should already be taken for granted. This means that, at the very least, all devices, things and systems being represented must be categorized by reference to classes formally defined in shared ontologies, not with ad hoc types or labels. This two-level overlay amounts, as presented in Sect. 4.3, to the association of generic, non-contingent knowledge (about classes/concepts as defined in a "Terminology box"/Tbox) with contingent information (about the association of individuals/instances with these classes, as laid out in an "Assertion box"/Abox). Starting again from the IoT side, the Entity-Attribute-Value (EAV) meta-model was an early step towards a basic structuration of IoT information, as captured by, e.g., the first versions of the OMA NGSI information model of FIWARE [4]. Lowly and weak as these models may appear, they are still a qualitative improvement over the binary payloads used in some IoT devices and networks.
An evolutionary path can be traced from this kind of object-like modeling to the properties attached to property graphs, which are just a different way to capture attributes in key-value style. It may appear paradoxical to call these "Property Graphs", because their properties, akin to owl:datatypeProperties in the RDF model, are not represented as arcs of the graph proper: they are embedded as inner structuration within both vertices and relationships of Property Graphs, which would thus better deserve to be called "Propertied Graphs". The values of these properties may be defined in structured types, such as arrays, and are thus richer than mere RDF literals. The skeleton of PGs corresponds to relationships between vertices which stand for all kinds of entities. There is another evolutionary path from the relations used in the long-established Entity-Relationship (ER) model, where entities correspond to entire tables in relational data models, whereas PG entities would correspond to individual rows in these tables. PG relationships have some similarity to owl:ObjectProperties in the RDF model, but a crucial difference is that PG relationships are instantiated and identified individually on a per-instance basis, contrary to RDF properties, which are un-instantiated and identified only as generic logical predicates. They are, just like vertices/entities, "first-class citizens" of the PG data model. PGs allow properties for both vertices (nodes that stand for entities) and arcs that stand for physically-based relationships: this corresponds to the minimal "fleshing out of the skeleton" supported by the PG model.
4.1.2 NGSI-LD Graphs: The Best Choice for Digital Twins
The NGSI-LD information model [9, 10], as standardized by the ETSI CIM (Context Information Management) group, provides a formalized basis for Property Graphs (PG), with a few extensions to the customary use of the PG model by most existing graph databases. As such, though defined on the basis of RDF/RDFS/OWL, the NGSI-LD graph model has a higher expressivity than Description Logics (DL) or First-Order Logic (FOL), bringing it closer to second-order logic. The counterpart is that it may lead to undecidability, and it should not be used with formal reasoning tools geared to DL or FOL. From our viewpoint, NGSI-LD brings the best of three worlds: a "structural skeleton" inherited from entity-relationship models, key-value properties attached to both entities and relationships and, crucially, a semantic web grounding that makes it possible to overlay an NGSI-LD graph with an RDF knowledge graph, as explained in Sect. 4.3. As laid out in [5, 6], an NGSI-LD graph supports properties of relationships and properties of properties, which do not exist in RDF. It may be converted into an RDF graph, serialized, and exported as a linked-data dataset, after applying reification. NGSI-LD graphs are the best common denominator for DT graphs as envisioned here; we explain in the following how they are used for this purpose.
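To give a feel for the structure just described, the following sketch renders a simplified NGSI-LD entity as a Python dictionary: an entity carrying a Property, a Relationship to another entity, and a property attached to that relationship. Attribute names, values and the @context URL are illustrative simplifications; only the overall Entity/Property/Relationship pattern follows the NGSI-LD information model.

# A simplified NGSI-LD entity rendered as a Python dict (JSON-like).
room_r1 = {
    "id": "urn:ngsi-ld:Room:R1",
    "type": "Room",
    "temperature": {                      # a Property of the entity
        "type": "Property",
        "value": 21.5,
        "observedAt": "2022-05-04T09:00:00Z",
    },
    "isPartOf": {                         # a Relationship to another entity
        "type": "Relationship",
        "object": "urn:ngsi-ld:Apartment:A",
        "since": {                        # a property of the relationship
            "type": "Property",
            "value": "2021-01-01",
        },
    },
    "@context": ["https://example.org/ngsi-ld/context.jsonld"],  # placeholder context
}

Such a dictionary could be serialized as JSON-LD and exchanged through an NGSI-LD API; the relationship-level property ("since") is exactly the kind of construct that plain RDF can only express after reification.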
4.2 Multi-level Structural Twinning of Cyber-Physical Systems
4.2.1 Thing-Twins (TTs): Atomic Devices & Physical Entities as Graph Vertices
Most IoT platforms already maintain a minimal "digital twin", understood as a mere proxy, for the devices they take charge of. These "thing-twins" make sense as the lowest rung of a ladder, nested within the higher-level and larger-scale graph-based twins of the environments they fit into. These "atomic" systems/entities/devices are represented directly through graph vertices because they need not, or cannot, be decomposed further internally into sub-systems. They will have an assortment of fixed and variable attributes of their own, possibly capturing part of their state in the sense of dynamical system theory. For the connected devices most IoT platforms deal with, these proxies may provide a network interface supporting direct interaction of applications with the device. Now, the Internet of Things should not be limited to connected devices, sensors and actuators: it may, viewed as a graph, extend to the non-connected, passive or legacy "things" that are sensed by sensors or acted upon by actuators [11, 12]. These non-connected things are also represented by vertices in the Thing'in graph, with corresponding "phenotropic" or "stigmergic" [12] links to the sensors and actuators, respectively, which may be used as network intermediaries to them. These atomic devices and things are deemed to be well characterized by their external relationships with other entities at the same level, or with the larger systems of which they are part. In a revealing analogy with social networks, just as a person called John Smith can be made unique by a tiny number of relationships of his own, a few external graph links may be enough to make a standard-issue entity (like a home appliance or a piece of furniture) unique among many identical copies, without the need to characterize it internally.
4.2.2 System-Twins (STs): Self-Contained Systems as Rooted Subgraphs
Well-defined subgraphs of the overall graph will capture the key structural links that make up the scaffolding of a "classical" self-contained system, in the intuitive sense of a physically-enclosed contraption tying together a set of parts/subsystems which are its direct constituents. These constituent subsystems may either be captured as "thing-twin" atomic vertices as described before, or decomposed further, recursively, into subsystems which may themselves be described by the same kind of rooted subgraphs. Figure 5 gives an example of this for an apartment captured as a subsystem of a building, decomposed further into rooms, which are themselves composed of entities considered here as atomic and captured as "thing-twins". These subgraphs have a "root" node, with type NGSI-LD:system, standing for the subsystem being described by the subgraph (building, apartment, room).
Fig. 5 Description of a building as a self-contained system, with two-level nesting of subsystems
The overall connection pattern, however, is not limited to a directed rooted tree (a.k.a. arborescence). Typically, these subgraphs will "look like" rooted trees, with added transversal links, mostly undirected edges, between the "vertical" branches connected to the root, as shown in Fig. 5 (with NGSI-LD relationships drawn as diamonds, à la ER diagram, and entities as rectangles). The relationships between the nodes that make up self-contained systems like these are physically local; they may correspond to:
• vertical top-down links between the root system and its constituent parts (with type NGSI-LD:hasPart/hasDirectPart), when these parts are designed to be included in the overall system and the system cannot work if these parts are removed,
• vertical bottom-up links (with type NGSI-LD:isContainedIn) between parts and systems that would not always be captured as such, like the set of all things (furniture, appliances, etc.) contained inside a room; this type of relationship may also be used to capture an even more informal and purely contingent set-based location [13], without implying a "systemic" relationship,
• transversal (possibly undirected) links to capture the way through which one may go from one room to another (with type NGSI-LD:ConnectsTo), or the fact that a room is near another (with type NGSI-LD:AdjacentTo),
• transversal directed links between a sensor and what it observes (with type sosa:isObservedBy), or an actuator and what it acts upon (with type sosa:isActedOnBy).
A minimal code sketch of how such a decomposition could be traversed is given below.
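The sketch below traverses a toy fragment of the Fig. 5 decomposition, distinguishing designed-in parts (hasDirectPart) from contingently located contents (isContainedIn). The edge list and entity names are hypothetical illustrations, not an actual Thing'in export.

# Minimal traversal sketch over a building decomposition graph.
EDGES = [
    ("Building:B", "hasDirectPart", "Apartment:A"),
    ("Apartment:A", "hasDirectPart", "Room:R1"),
    ("Apartment:A", "hasDirectPart", "Room:R2"),
    ("Bed:B1", "isContainedIn", "Room:R1"),
    ("TableLamp:TL1", "isContainedIn", "Room:R1"),
    ("Room:R1", "connectsTo", "Corridor:C1"),
]

def direct_parts(system: str) -> list:
    """Entities designed into the system (top-down hasDirectPart links)."""
    return [tgt for src, rel, tgt in EDGES if src == system and rel == "hasDirectPart"]

def contents(container: str) -> list:
    """Entities contingently located in the container (bottom-up isContainedIn links)."""
    return [src for src, rel, tgt in EDGES if tgt == container and rel == "isContainedIn"]

print(direct_parts("Apartment:A"))   # ['Room:R1', 'Room:R2']
print(contents("Room:R1"))           # ['Bed:B1', 'TableLamp:TL1']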
4.2.3 Systems-of-Systems Twins (SoSTs): Capturing Distributed and Complex Systems
This level of representation also captures systems as subgraphs of an overall reference graph. It is used for a less obvious, but critically important, type of system that we label "Systems of Systems" (SoS) for short, even if not all of them, by far, are SoS in the strict sense [4]. For these more complex systems, it would make no sense, or be impractical, to have a regular direct relationship between a "root" node and the constituent parts of the system, as proposed for the simple self-contained systems described before. The reasons why this kind of system must be captured in this special way may be one or several (but not all) of the following:
• the system is physically distributed, potentially on a very large scale, with many direct constituents,
• the system belongs to the broad category of physical infrastructure networks, whose connections correspond to actual physical links: road/street networks, electrical grids, water distribution networks, gas distribution and transport networks, telecom networks (at the physical infrastructure level), etc.,
• the system is a "system of systems" in the strict sense of the customary definition [4]: a bottom-up assemblage of subsystems which are operationally independent and have not been designed to work together, but which do happen to work together to provide a functionality that is more than the sum of those provided by the individual subsystems separately, like the Internet at large, or a city,
• the system exists mostly as an informational abstraction, grouping subsystems that are physically independent or only loosely connected, such as a logistical network, a waste collection network, or, in a different vein, a sharing system that federates a set of physical assets for rental or lending,
• the relationships between the parent system and its constituent subsystems correspond neither to the "NGSI-LD:hasPart" relationship characteristic of a classically-engineered top-down system, nor to the more informal "NGSI-LD:isContainedIn" relationship of a bottom-up, "informal" system.
These subgraphs are clusters of nodes, together with the relationships that bind them and the properties that characterize them. They are embodied by higher-level "hypernodes" with type "NGSI-LD:graph". The relationships between constituent nodes and their parent hypernode are captured by a special "NGSI-LD:isNodeOfGraph" relationship. The graph "hypernodes" may themselves be matched to nodes within this "hyper-structural" graph, as subgraphs of this graph, with a relationship of type "NGSI-LD:isSubGraphOf". An example of this type of systems-of-systems grouping is illustrated in Fig. 6 below for the infrastructure of a smart city, with the hyper-structural graph shown in red and the structural graph in black, showing, obviously, only a tiny sample of the kinds of nodes that each of these systems would comprise.
Fig. 6 Subsystems of city infrastructure, captured as separate subgraphs & represented by graph “hypernodes”
4.3 Semantics for Digital-Twin Graphs
As stated in Sect. 4.1, knowledge graphs are fundamentally distinct from the kind of physically-matched DT graph models presented before. Yet, even if semantics as expressed by RDF graphs is not at the core of the proposed vision, RDF graphs and semantics have a role to play:
• in supporting the interoperation of these models with third-party data sources,
• in supporting the actual use of the information maintained in these graphs by applications which do not natively understand PG/NGSI-LD models.
In both cases, the RDF metamodel may be used as a lowest common denominator on which to fall back if the higher expressivity of PG/NGSI-LD graphs cannot be directly addressed. This is used by:
• the import of data in RDF serialization formats (Turtle) by our Thing'in platform,
• the NGSI-LD API (itself used by FIWARE), based on JSON-LD, another RDF serialization format, with semantic matching captured and compressed in context files.
Even if DT graphs have a semantics of their own, which is global to the whole graph and not reducible to the "per resource" semantics used by RDF, RDF-style semantics may still be applied in a very rich and detailed way to individual NGSI-LD entities and relationships (as already exemplified above), because their formal definition is itself grounded in the RDF/RDFS/OWL metamodel.
Besides their physical matching, another distinction between DT graphs as proposed here and generic knowledge graphs should be clear: knowledge graphs, since they capture ontologies or taxonomies in a description-logics "Tbox", are meant to capture generic knowledge as relationships between the concepts/classes of these ontologies, whereas DT graphs capture a more contingent type of information at the individual/instance level. The kind of information captured by an "Abox" (the link between an instance and its concept/category/class) is closer to what DT graphs capture, and would correspond to an rdf:type property arc between a vertex or relationship of an NGSI-LD graph and a class-level resource drawn from a relevant ontology. As we have asserted previously [5], RDF graphs may and should be used to complement and overlay the Property Graphs capturing multi-level digital twins with such an Abox. In doing this, we propose to maintain the distinction between instance-level information (the NGSI-LD graph) and class/concept-level knowledge (the Abox links together with the relevant parts of the Tbox). This distinction may have been obscured by the use of RDF graphs and triple stores to capture both, jointly and indistinctly. Figure 7 shows how this pans out.
Fig. 7 Semantic graph overlaid upon NGSI-LD graph (Abox with dotted arcs, Tbox with dashed arcs)
Classes drawn from relevant ontologies (pictured as green, rounded rectangles) are overlaid on the NGSI-LD graph, with rdf:type typing links pictured as dotted green lines connecting the two. These Abox arcs of the green RDF graph represent "raw" RDF properties and are of a completely different nature from the relationships and properties of an NGSI-LD graph because, contrary to those, they are not identified as individual instances, but only as generic predicates. Additional Tbox arcs (dashed green lines) describe subclassing (inheritance) relationships between these classes. This thin Tbox overlay, a minuscule subset of the overall graph in terms of number of nodes, is the only part which fits the strict definition of a knowledge graph as capturing conceptual (non-instance-related) information.
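The Abox/Tbox overlay can be sketched in a few lines of Python: a thin Tbox of subclass links between ontology classes, plus Abox typing links from instance-level twins to those classes. The class identifiers below are hypothetical placeholders, not published ontologies.

# Sketch of the semantic overlay: Tbox (subclass links between classes) plus
# Abox typing links (rdf:type) from instance-level twins to those classes.
TBOX_SUBCLASS_OF = {
    "ex:Camera": "ex:Sensor",
    "ex:FireSensor": "ex:Sensor",
}
ABOX_RDF_TYPE = {
    "urn:ngsi-ld:Camera:C1": "ex:Camera",
    "urn:ngsi-ld:FireSensor:FS1": "ex:FireSensor",
}

def is_instance_of(twin_id: str, cls: str) -> bool:
    """Check typing, following subclass links upward through the Tbox."""
    current = ABOX_RDF_TYPE.get(twin_id)
    while current is not None:
        if current == cls:
            return True
        current = TBOX_SUBCLASS_OF.get(current)
    return False

print(is_instance_of("urn:ngsi-ld:Camera:C1", "ex:Sensor"))  # True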
5 Thing'in Digital Twin Platform Design and Structuring Implementation Choices
5.1 Platform Functional Architecture Overview
The platform functional architecture is shown in Fig. 8.
Fig. 8 Functional architecture (Thing’in instance)
The bottom layer is composed of fundamental technical infrastructure services: a graph database, a time-series database, and a search engine. The search engine stores the ontologies defined and used in the platform; its capabilities ease the lookup of the different concepts according to their syntax and semantics. The Ontology Server converts an ontology described in RDF into documents for the search engine. The Ontology API provides the user with an HTTP REST interface to query the Ontology Server. The Digital Twins Core Server handles digital twin storage and management (Create, Read, Update, Delete). It provides advanced search functionalities based on the semantics, structure, context, or geolocation of the digital twins, and it can also manage the historization of some digital twin properties. The core server is the guarantor of the security and privacy of the digital twins: it manages the identities, roles, and rights of the users inside the platform. The Digital Twins Core API provides the user with an HTTP REST interface to query the Digital Twins Core Server. The components of these three bottom layers collectively constitute the Thing'in Core. On top of this core are built enablers facilitating the management and usage of digital twin applications. Examples are components that convert information about physical objects from open data or from the Orange LiveObjects IoT platform into digital twins, graph discovery and graph traversal functionalities, and viewer components such as 2D/3D visualizations and projections onto geographic maps.
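A client interacting with such a core API would typically make authenticated HTTP calls. The sketch below uses the requests library to illustrate the pattern; the base URL, resource path, payload schema and authentication scheme are assumptions made for illustration, not the documented Thing'in API.

import requests

BASE_URL = "https://thingin.example.org/api"   # placeholder endpoint, not the real one
TOKEN = "eyJ..."                               # access token obtained beforehand (see Sect. 5.4)

def create_twin(twin: dict) -> dict:
    """POST a digital twin description to a hypothetical twin-management endpoint."""
    response = requests.post(
        f"{BASE_URL}/twins",                   # assumed resource path
        json=twin,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()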
5.2 Graph Storage: Mapping NGSI-LD Graphs onto Graph Databases
The Thing'in core property graph introduced in the previous section is stored and managed using a graph database management system (DBMS), currently the ArangoDB open-source multi-model DBMS. As the targeted physical objects/systems are very heterogeneous (Thing'in seeks to be agnostic towards vertical application domains), the data model needs to be very flexible and extensible. Thing'in relies on the property graph flexibility provided by a schema-less DBMS to avoid being locked into a data model. Thing'in supports semantic description of digital twins, in addition to structural description, using ontologies. Thing'in stores RDF entities inside the property graph (these entities can be exported back from Thing'in in an RDF format). As explained in the previous section, the property graph of Thing'in is more than just an RDF graph; therefore, Thing'in does not use an RDF store. In Thing'in, the properties of nodes and edges can store any type of information, not just ontology-related concepts as in RDF stores. For instance, all the information about ownership, access control and security, and attached contents (such as icons, plans, files…) can be part of the graph. Furthermore, the use of a graph DBMS allows for graph traversal and graph pattern matching functionalities.
Fig. 9 Example of graph pattern
Graph traversal allows a query to reach any vertex connected to a starting vertex by taking successive steps; this is well suited to discovering a digital twin and its context. Graph pattern matching allows finding all the parts of the graph that match a given pattern. This can be combined with the semantic description, i.e., it allows for semantic graph pattern matching. For instance, in the global graph, one can make a query such as "find all the connected objects owned by Alice located in a room having a window". Figure 9 illustrates this pattern.
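The following self-contained Python sketch evaluates that example pattern over a toy in-memory edge list. In the platform itself this would be delegated to the graph DBMS's traversal and pattern-matching engine; the entity names and relation labels below are purely illustrative.

# Conceptual evaluation of "connected objects owned by Alice, located in a
# room having a window" over a toy in-memory graph.
EDGES = [
    ("alice", "owns", "speaker1"),
    ("alice", "owns", "lamp3"),
    ("speaker1", "isContainedIn", "roomA"),
    ("lamp3", "isContainedIn", "roomB"),
    ("window1", "isContainedIn", "roomA"),
]
TYPES = {"speaker1": "ConnectedObject", "lamp3": "ConnectedObject", "window1": "Window"}

def match(rel, src=None, tgt=None):
    """Return (source, target) pairs of edges with the given relation and optional endpoints."""
    return [(s, t) for s, r, t in EDGES
            if r == rel and (src is None or s == src) and (tgt is None or t == tgt)]

def connected_objects_of_alice_in_room_with_window():
    results = []
    for _, obj in match("owns", src="alice"):
        if TYPES.get(obj) != "ConnectedObject":
            continue
        for _, room in match("isContainedIn", src=obj):
            windows = [s for s, _ in match("isContainedIn", tgt=room)
                       if TYPES.get(s) == "Window"]
            if windows:
                results.append(obj)
    return results

print(connected_objects_of_alice_in_room_with_window())  # ['speaker1']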
5.3 Scalability and Federation of Multiple Platform Instances
Given the complexity of the target systems (buildings, cities, industry, telecommunications infrastructures…) and the multi-actor and cross-vertical ambitions, the implementation of a platform such as Thing'in must face two major challenges: scalability and governance/control of (possibly shared) digital twins. In the long run, Thing'in may have to manage billions of digital twins provided and used by millions of object owners and service providers, so the platform must be designed to be highly scalable, allowing massive storage and intensive I/O to support massive query loads. Also, some digital twin owners would prefer to host their graph of digital twins themselves ("on-premise"), be it to ensure more security and privacy, to improve latency by bringing the digital twin platform closer to their physical objects, or to have better control over the hosting infrastructure (and its associated costs). To fulfill these requirements, Thing'in offers two levels of scalability and governance thanks to a federated technical architecture:
1. multiple instances (called "Tipods") can be deployed in the cloud or on premises,
2. these instances are linked and coordinated by a federator.
At the level of a single Thing'in instance, all the functional components (cf. Fig. 8) can be replicated to handle more digital twins or more requests, ensuring local scalability. At the global level, scalability is ensured by the federation architecture itself, i.e., by the number of deployed instances.
To ensure that the platform maintains one single (logical) graph in which relationships may be established between any digital twins, the platform enforces some constraints on the deployment of instances (Tipods), e.g., restrictions on the naming of digital twins to avoid conflicts. Before installing a Tipod, a user must obtain a certificate produced by the federator (the entity that controls the identities of the Tipods). Associated with this certificate, the user obtains the right to name his digital twins with a prefix chosen by the user and accepted by the federator. This rule creates a trusted graph in which only authorized Tipods collaborate to build the unique global graph.
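The naming rule can be pictured as a simple prefix check that a Tipod applies before accepting a new twin identifier. The prefix values and Tipod names below are hypothetical; only the principle (identifiers must start with the prefix granted by the federator) reflects the rule described above.

# Sketch of the federated naming rule (all values illustrative).
AUTHORIZED_PREFIXES = {
    "tipod-factory-brittany": "urn:example:factory-brittany:",
    "tipod-meylan":           "urn:example:meylan:",
}

def may_create(tipod_id: str, twin_id: str) -> bool:
    """Accept a twin identifier only if it starts with the prefix granted to this Tipod."""
    prefix = AUTHORIZED_PREFIXES.get(tipod_id)
    return prefix is not None and twin_id.startswith(prefix)

assert may_create("tipod-meylan", "urn:example:meylan:bench:17")
assert not may_create("tipod-meylan", "urn:example:factory-brittany:machine:3")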
5.4 Multi-level Security
Sharing (parts of) digital twins between different actors implies a sophisticated security model. The Thing'in platform implements a multi-level security model, with security mechanisms at different levels for different purposes. First, the security architecture rests on the identification of users. Each user who wants to use the Thing'in API must have an access token resulting from his authentication. This identification and authentication are compliant with the OAuth protocol [RFC 6749], and some identity providers such as Orange, Live Objects or Google are trusted (this list can be extended). The access token forged by a Tipod is a JSON Web Token containing some information about the user and the platform: the user id, user role, token expiration time, the id of the Tipod that forged the token, and a signature computed with the private key of that Tipod. When a user is enrolled within the platform, the administrator gives him dedicated role(s) (among basic user, provider, service manager, supervisor, and administrator) used to control access to the API according to Role-Based Access Control (RBAC). At the digital twin level, Thing'in uses Access Control Lists (ACLs) to specify the grants of the users. These ACLs are concretely based on an Attribute-Based Access Control (ABAC) implementation allowing rights to be defined at the level of individual digital twin properties. For instance, with this access control mechanism, a user can restrict the reading of the geolocation attributes of his digital twins to the users who belong to a specific group, or can allow an attribute to be updated only by users who provide a special key in the request. With this kind of control, users are free to define whatever security policy their services require. Users' identities are also federated. A user registered in a Tipod A can send requests to a Tipod B, since he uses a signed access token to request the Tipods. The signature can be verified anywhere, provided the verifier can retrieve the certificate of the Tipod that emitted the token. The role of a user registered in Tipod A is not valid in another Tipod B; there, he can only make requests as a guest.
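The property-level (ABAC-style) check described above can be sketched as follows: the owner restricts reading of the geolocation attribute to members of a given group, while other properties remain readable by everyone. The rule format, group names and identifiers are illustrative assumptions, not the actual Thing'in ACL representation.

# Property-level access check in the spirit of the ABAC mechanism above
# (all rule contents hypothetical).
TWIN_ACL = {
    "urn:example:pole:2214": {
        "location": {"read_groups": {"maintenance-team"}},   # restricted property
        "*":        {"read_groups": {"*"}},                   # default: readable by all
    }
}

def can_read(user_groups: set, twin_id: str, prop: str) -> bool:
    """Grant read access if the user belongs to a group allowed for this property."""
    rules = TWIN_ACL.get(twin_id, {})
    rule = rules.get(prop, rules.get("*", {"read_groups": set()}))
    allowed = rule["read_groups"]
    return "*" in allowed or bool(allowed & user_groups)

print(can_read({"maintenance-team"}, "urn:example:pole:2214", "location"))  # True
print(can_read({"guest"}, "urn:example:pole:2214", "location"))             # False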
6 Conclusion, Ongoing and Future Work
The digital twin is quickly emerging as a very important topic in many economic sectors, together with Cloud Computing, the Internet of Things (IoT) and Artificial Intelligence (AI). The field of application is immense, from buildings, cities, and territories to industry, transport, logistics, energy, telecommunications and other large-scale infrastructures, as well as medicine, biology, and science in general. The development of digital twins is currently fragmented and siloed in terms of technology and usages. The work presented in this chapter advocates a more transversal and open vision of digital twins, underpinned by the development of the Orange Thing'in platform, which seeks to experiment with the value, and the actual implementation, of cross-vertical and multi-actor usages of digital twins. Starting from these objectives and illustrative use cases, this chapter has outlined the main structuring platform design choices: advanced multi-level graph modeling and security, a graph database core implementation, and a federated and distributed architecture. The intent of this chapter was to focus on the use cases rather than on the technology. However, there is a large body of technical work behind the development of the Thing'in Digital Twins platform, on the subjects mentioned here and on others, which cannot be detailed here for lack of space. Some examples are the historicization of the digital twin graph and the synchronization of the digital twins with their physical counterparts, which are major issues tackled by ongoing and future work.
Acknowledgments The authors wish to thank the whole Thing'in team inside Orange research for contributions to the design and development of the platform and of the use case prototypes and demos, especially Fabrice Blache, Benjamin Chevallier, Fano Ramparany, Thomas Hassan, David Crosson and Alain Dechorgnat, Cyprien Gosttein and Maria Massri, Michel Giordani, and Sylvie Tournoud, Yvand Picaud and Regis Esnault. Many thanks also for support and fruitful discussions to Nicolas Demassieux, Roxane Adle, Guillaume Tardiveau, Adam Ouorou, Lyse Brillouet, François-Gaël Ottogalli, Julien Riera, Bruno Gallier, Laurent Marchou, Jean-Marc Lafond, Jean-Michel Ortholland, Alain Chenavier, Emmanuel Routier, and many others. The authors acknowledge the contributions of the partners of the ETSI CIM (Context Information Management) Industry Specification Group with whom we have specified the NGSI-LD property graph information model presented in Sect. 4.
References
1. Orange Thing'in Digital Twin platform [Online]. Available: http://thinginthefuture.com/
2. IDATE. (2020, January). DigiWorld. Digital Twin cities.
3. Brandes, U., Robins, G., McCranie, A., & Wasserman, S. (2013). What is network science? Network Science, 1(1), 1–15.
4. Privat, G. (2018, September). The "Systems of Systems" viewpoint in telecommunications. In Orange research blog.
5. Wu, Z., et al. (2020). A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1), 4–24.
6. Angles, R. (n.d.). The property graph database model. CEUR-WS.org, vol. 2100.
7. Privat, G., & Abbas, A. (2019, March). Cyber-physical graphs vs. RDF graphs. In W3C Workshop on web standardization for graph data.
8. Li, W., Privat, G., Cantera, J. M., Bauer, M., & Le Gall, F. (2018, June). Graph-based semantic evolution for context information management platforms. In 2018 Global Internet of Things Summit (GIoTS) (pp. 1–6). https://doi.org/10.1109/GIOTS.2018.8534538
9. ETSI GS CIM 006 V1.1.1 (2019, July). Context Information Management (CIM); Information Model (MOD0).
10. Abbas, A., & Privat, G. (2018, October). Bridging property graphs and RDF for IoT information management. In International Semantic Web Conference (ISWC 2018), Workshop on Scalable Semantic Web Knowledge Base Systems.
11. Privat, G. (2012a). Extending the Internet of Things. Communications & Strategies, Digiworld Economic Journal, 87, 101–119.
12. Privat, G. (2012b, August). Phenotropic and stigmergic webs: The new reach of networks. Universal Access in the Information Society, 11(3), 1–13. https://doi.org/10.1007/s10209-011-0240-1
13. Flury, T., Privat, G., & Ramparany, F. (2004, September). OWL-based location ontology for context-aware services. In AIMS 2004, Artificial Intelligence in Mobile Systems.
Thierry Coupaye is head of research on the Internet of Things (IoT) inside Orange, and an Orange Expert on Future Networks. He completed his PhD in Computer Science in 1996 and his habilitation (HDR) in 2013 at the University of Grenoble (France). He held several research and teaching positions at Grenoble University, the European Bioinformatics Institute, Lion Biosciences (Cambridge, U.K.) and Dassault Systems. He joined Orange in 2000, where he has held several research expert, project manager, and project and program director positions in the areas of distributed systems architecture, autonomics, and cloud/fog/edge computing and networking. He has been an initiator of Orange activities in autonomic computing, server and network virtualization, cloud/fog/edge computing and networking, and digital twins. He is the author of more than 75 refereed articles and has participated in multiple program and organization committees of international conferences in these areas. He has been involved in several collaborative projects and is a regular expert for French and European research agencies. Thierry is a member of the steering committee of the Inria and Orange joint laboratory IOLab, a member of the board of the Edge Intelligence professorship inside the Multidisciplinary Institute in Artificial Intelligence (MIAI), and currently contributes to the Digital Twin Working Group of the French Alliance for the Industry of the Future (AIF). Thierry is one of the creators of the Fractal software component model and, more recently, of the Thing'in digital twin platform. His current research interests include Digital Twins and Edge Intelligence (AI@Edge) for Cyber-Physical Systems.
Sébastien Bolle is a researcher in the area of the Internet of Things. A graduate of IMT Atlantique (1992), he joined Orange Innovation to work first on the management of wide area networks and related standards, then on information systems and software architecture projects until 2002. From 2003 to 2009, he held various responsibilities in Orange subsidiaries: Euralba (IS and IT services for the B2B market) and Almerys (R&D in e-health services). He returned to Orange Innovation in 2010 and has since been involved in research on the Internet of Things. His work and activities relate to open architectures, semantic interoperability, device management and digital twins for various use cases: Smart Building, Logistics, Smart Home and Network Management. He is also involved in collaborative projects (BIM2TWIN, ASAP), open source (Eclipse Foundation, FIWARE) and standardization (oneM2M, IOWN Global Forum).
Sylvie Derrien holds an engineering degree in computer science from the ENSSAT School in Lannion. She worked for several years on the qualification and performance measurement of various services marketed by Orange. She has strong expertise in networks and virtual infrastructures, as well as in cloud storage and cloud hosting. In 2015, she joined the Orange Labs division as a senior project manager. She now leads a research team in the Internet of Things domain. Her team works on Thing in the Future, an innovative universal and cross-domain platform designed and developed by Orange to foster new usages and new services based on digital twins for Industry 4.0, smart buildings, smart cities and many other ecosystems. Alongside this work, Sylvie contributes to the BIM2TWIN European project and to collaborative projects funded by EMC2, specialized in manufacturing technologies. She is deeply involved in the promotion of the digital twin platform (Orange research exhibitions, IoT World 2020, MWC 2022).
Dr. Pauline Folz received her PhD from the University of Nantes (France) in 2017 under a Cifre contract with Nantes Métropole. Her domains of interest are the Semantic Web, databases and collaborative systems. Since 2018, she has been a research scientist at Orange Innovation, where she works on the design of a Digital Twins platform and contributes to projects using the Digital Twins platform in various use cases: Smart Building, Smart Cities and Logistics.
Dr. Pierre Meye received his PhD from the University of Rennes 1 (France) in 2016 under a Cifre contract with Orange Labs. His domains of interest are storage systems, distributed systems, NoSQL databases, and Semantic Web technologies. From 2015 to 2017 he worked as a Software Engineer at Seagate Technologies on remote access technologies for various storage devices. Since 2017, he has been an R&D Engineer at Orange Labs Rennes, where he works on the design and development of Digital Twins platforms.
Gilles Privat received engineering and doctoral degrees in systems theory from Télécom Paris/Institut Polytechnique de Paris. He has been a research associate, research engineer and head of a research group with CNRS and later with CNET, a public telecom research institute. He now works as a senior scientist for Orange Innovation, IT & Services, with a current focus on graph-based models for Digital Twins and Cyber-Physical Systems. He has authored or co-authored about a hundred peer-reviewed and invited publications and holds 15 patents.
Dr. Philippe Raïpin Parvedy is a research program manager at Orange Labs. He received his PhD from the University of Rennes (France). He focuses his research on distributed systems and security, investigating theoretical problems such as agreement protocols as well as technical solutions such as data storage, and has published his work in international journals and conferences. He has worked at Orange since 2007, where he has held several positions, from 3GPP and TCG standardization representative to cloud architect-developer and research project leader. Today he is a research program manager in the field of the Internet of Things. He contributes to collaborative national and European research projects that build digital twins (ASAP, BIM2TWIN). His team has worked on the design and development of Digital Twin platforms. One of the most important is Thing in the Future, a platform that helps service designers to model and reference their digital twins and that enables sharing between services.
Cybersecurity and Dependability for Digital Twins and the Internet of Things Vartan Piroumian
Abstract Digital twin technology is poised to become an ubiquitous addition to the global technology landscape. As with every other technology or computing capability, platform, environment or ecosystem, risk always accompanies benefit. In order to mitigate risk and effect robust, safe, secure computing environments and capabilities, one must first identify and comprehend the implications of unmodulated risk. Digital twin technology is no exception—in fact quite the contrary. As a consequence of digital twins being intrinsically associated with physical objects, the potential for negative outcomes is greater than for many other computing applications. Because digital twins will be employed in applications that interact with and control real-world, physical objects, they will also affect human beings who use or rely on those very real objects that are ubiquitous in our everyday physical world. This chapter discusses the specific areas of cybersecurity and dependability risk in digital twin environments and applications. Dependable systems must also be secure and safe. There is both interplay and interdependency between elements of dependable systems and elements of secure systems. The challenge of making systems dependable and secure is exacerbated in situations where components are physically more exposed and, therefore, potentially vulnerable to attack by external agents or entities. Such is the case for systems that employ digital twins—engendering a serious imperative to address cybersecurity and dependability. Neglecting to do so invites more serious consequences—not the least of which is harm to humans—as these systems involve physical, real-world objects with which human beings will interact. Keywords Availability · Confidentiality · Cybersecurity · Dependability · Deployment environments · Integrity · Maintainability · Monitoring · Safety · Static and dynamic models · Trust
V. Piroumian (*) Global Technology Consultant and Enterprise Architect, Los Angeles, CA, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_14
1 Introduction
This chapter discusses cybersecurity and dependability in the context of systems, environments and ecosystems that plan to employ digital twin (DT) and Internet of Things (IoT) technology. Cybersecurity (herein also referred to as just security) and dependability are always crucially important considerations for any system and, as this chapter will show, are perhaps even more critically important for any environment which employs DT and IoT technologies. Cybersecurity and dependability are two of the areas that comprise the set of concerns called non-functional aspects of system and software architecture [1]. Non-functional concerns represent the gamut of aspects related to a system's performance: the measure of how well a system does what it is intended to do [2]. Other aspects of system performance are availability, integrity, maintainability, safety, confidentiality, throughput, and latency [3, 4]. This chapter focuses only on the two aforementioned aspects of system engineering—security and dependability. Engineering best practices dictate that all non-functional aspects related to system design and engineering should be included in the consideration and planning of any system from the very beginning, a practice that is called quality-focused design [2]. Nevertheless, security and dependability might very well be the two most important non-functional aspects of system design and planning. A failure in security or dependability can lead to potentially disastrous outcomes. Although security and dependability are distinct areas of systems architecture and engineering, at some level they are intrinsically related. Security vulnerabilities can negatively affect system dependability, and shortcomings in dependability can result in less secure systems. Ultimately, lack of robustness in either area can deleteriously impact the performance and effectiveness of any system. This chapter explores the relationship of these two aspects in more detail later. Digital twins and IoT are two areas of technology that have an inherent vulnerability to the risks associated with security and dependability—two aspects of the area called trust. This chapter aims to give the reader a basic understanding from the strategic level of why one must consider security and dependability from the very beginning of any program or initiative that involves DT and IoT technology.
2 Overview of Digital Twin Technology Before one can discuss cybersecurity and trust related to digital twins and IoT, one must have a basic understanding of digital twins. Although chapters preceding this one have given various explanations of digital twins, this chapter presents a working definition and overview of digital twins that will give the reader an accessible understanding germane to security and dependability in particular.
In industry and academia, descriptions abound that attempt to characterize digital twins. But, as of this writing, there is no industry-wide definition of digital twins. It would be difficult to have such a definition given the absence of a comprehensive, consistent vision for what digital twins are or how they will be used. This chapter uses the following definition for digital twins: A digital twin is the electronic representation—the digital representation—of a single instance of a real-world entity, concept, or notion, either real or abstract, physical or perceived [5].
The author of this chapter believes that the following definition will prove to be a reasonable, realistic and accurate characterization of the entire digital twin ecosystem well into the future, as the technology and ecosystem of products and services develop and mature. It is a rather practical, tangible and relevant definition that encompasses all manner and type of digital twin entities and applications. The above definition accurately captures the original vision and motivation for digital twin technology. It also characterizes today's anticipated usage categories of digital twins across industry, academia, science, research and product and service development. Nevertheless, history has shown repeatedly that industry in particular has a penchant for misapplying technology in ways for which it was never envisioned or intended to be used. Such unanticipated and unintended applications of technology create greater risk, particularly in the areas of cybersecurity and trust. Thus it is imperative that anyone thinking about embarking on a journey to use digital twin and IoT technology first understand the concepts and motivation underpinning digital twins, and also the original vision for digital twin applications. Only then can one acquire an understanding of the very tangible risks and vulnerabilities involved, particularly those related to cybersecurity.
3 Entities Versus Objects The main idea behind digital twin technology is that a digital twin represents—or mirrors—a real world entity. In essence, the digital twin and its real world counterpart are each other’s ‘twin.’ The digital twin is an electronic representation—a digital representation that exists in a computer application—of its corresponding real world twin. The digital twin is a virtual version or representation of the real world entity. The reader undoubtedly noticed the use of the word ‘entity’ above. This usage is intentional. Notice that the above definition describes a digital twin as a representation of a real world entity, not an object. The dictionary definition of entity is: something that exists objectively or in the mind; an actual or conceivable being [6].
The important notion here is that an entity is anything that a human can comprehend regardless of whether it has a tangible or physical form.
In contrast, the dictionary definition of the word object is: a tangible or visible thing [6].
Another standard definition, congruent with this one, is: something perceptible, especially to the sense of vision or touch [8].
One last definition, consistent with the others, is: something that is or is capable of being seen, touched or otherwise sensed. [H]
The definition of perceptible, seen in the above definitions, is: that which can be discerned by the senses. [I]
The definition of tangible is: discernible by the touch; capable of being touched; palpable [8].
Thus you can see that the word ‘object’ refers to something that we perceive or sense—something that we recognize in our world as a thing with a physical form. Thus, while digital twins can represent any entity according to the definition above, including those without a physical existence or manifestation, for the purposes of this chapter, think of a digital twin as the twin of a real world, physical object [5]. The reason this chapter focuses on digital twins that are associated with physical objects is that the conversation around trust and trustworthiness is more relevant in this context. One important reason for this is that physical objects will rely heavily on sensors and IoT technology in conjunction with applications that employ digital twin technology. The reader shall see that these IoT devices are intrinsically vulnerable to cybersecurity attacks which in turn affect trustworthiness and dependability.
4 Digital Twins for Modeling and Simulation
Science and engineering use models throughout the gamut of their activities. A digital twin is really a model of its real world counterpart—its twin. A model is: an object, usually in miniature and built according to scale, that represents something to be made or something already existing [6].
The key notion is that a model represents something. Models capture the characteristics and attributes of the corresponding objects that they represent, enabling one to understand the behavior of the real thing. Think of the plastic scale model airplanes that kids build, the physical scale models that architects build, or the scale model of an aircraft—or of an aircraft wing—used for testing in a wind tunnel, all of which are suggested by the above definition of ‘model.’ A digital twin is a virtual model—a virtual representation—of its twin. The embodiment of the model, whether virtual or physical, does not preclude its effectiveness. In fact, the virtual nature of digital twins is precisely the impetus behind digital twin
technology and its promise to create highly sophisticated models that support the following capabilities:
• Highly detailed digital representations of real world objects
• Sophisticated 2D and 3D visual presentations for the benefit of human users
• Dynamic modeling and simulation
• Integration with, and use of, virtual reality and augmented reality technology [7]
One cornerstone of the inspiration behind digital twins is the notion that highly detailed, sophisticated virtual models that display a high degree of fidelity to the real world version of something will reduce risk—the risk that accompanies the investment in building physical models or full-scale prototypes. The grand vision for digital twins, and the potential of digital twin modeling to ameliorate risk, are each subject areas that deserve a dedicated publication; a full treatment of either is beyond the scope of this chapter. Rather, this chapter focuses on the security and dependability challenges in the digital twin and IoT ecosystems. But before getting into the actual discussion of cybersecurity and dependability issues, this chapter presents some background on models and the data that informs them.
5 Static and Dynamic Models
A model can be static, dynamic or both. A static model describes the object’s static nature. A suitable definition that aligns with the context of this discussion is: at rest; dormant; not active, moving or changing; opposed to ‘dynamic.’ [6]
Think of the static view of a building or a ship. It would include at least the dimensions of the building or the ship, its shape, weight, construction materials and so forth. It could present 2D or 3D views, or both. In contrast to a static model, a dynamic model is one that models the dynamic behavior of objects. A good definition of dynamic for our purposes is: of or pertaining to forces not in equilibrium, or to motion as a result of force; opposed to ‘static.’ Producing or involving change or action [6].
Dynamic behavior refers to the characterization of the changes over time of an object as a result of forces being applied to it. For example, a dynamic model of the aforementioned building might model movement during an earthquake; that model could include a visual simulation in 3D of the building’s motion during a seismic event. Or the model could simply be something suitable for consumption by a computer program, say, a table of numbers, each representing the change over time in some quantity that characterizes the building’s dynamic response. Similarly, a dynamic model of a container ship might model how the ship moves when hit by a large wave.
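To make the distinction concrete, the following Python sketch (with hypothetical attribute names and values) separates a static description of a building from a dynamic model that accumulates a time series of state samples; the static part does not change, while the dynamic part records the building’s response over time.

from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class BuildingStaticModel:
    """Static description of a building: attributes that do not change over time."""
    height_m: float
    floors: int
    weight_tonnes: float
    construction_material: str

@dataclass
class BuildingState:
    """One sample of the building's dynamic state, e.g. during a seismic event."""
    timestamp_s: float
    lateral_displacement_mm: float   # sway of the top floor
    acceleration_m_s2: float

@dataclass
class BuildingDynamicModel:
    """Dynamic model: an ordered series of state samples over time."""
    static: BuildingStaticModel
    history: List[BuildingState] = field(default_factory=list)

    def record(self, state: BuildingState) -> None:
        self.history.append(state)

# Example: a small table of numbers describing the building's motion during an earthquake.
tower = BuildingStaticModel(height_m=120.0, floors=30, weight_tonnes=45000.0,
                            construction_material="reinforced concrete")
model = BuildingDynamicModel(static=tower)
model.record(BuildingState(timestamp_s=0.0, lateral_displacement_mm=0.0, acceleration_m_s2=0.0))
model.record(BuildingState(timestamp_s=0.5, lateral_displacement_mm=12.4, acceleration_m_s2=1.8))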
6 Modeling, Data and Sensors
Models require data. Data represents the empirical information gathered by observing the attributes and behavior of the real object. Systems use sophisticated sensors to gather data on the condition or state of a physical object. From the most complex to the seemingly simple, today there are sensors and microprocessors of varying sophistication and power both in and on the objects that we encounter every day—from automobiles to homes to utility infrastructure to medicine to credit cards and beyond. Today’s typical automobile has over 100 microprocessors and hundreds of sensors. Each sensor collects information about some aspect of the state of the object. The information collected is passed to some computing system—perhaps one or more microprocessors, systems or sub-systems in the automobile, or even external to the automobile. Freeways, boulevards, traffic intersections, bridges, tunnels, airports, shopping malls, your home heating and cooling system thermostat, the gas and electric utility meter in your home and everything in between have sensors. Freeways, roads and streets have speed-monitoring sensors along their routes. Street intersections have sensors to detect the presence of vehicles to help determine when to turn the light green. Dams have a complex array of sensors to monitor all kinds of structural conditions. Bridges have sensors to monitor load, stress and deterioration. Satellites perform ground imaging to measure all kinds of parameters about the earth. Your home irrigation system senses moisture levels to determine whether to turn on irrigation. The applications are virtually limitless. Herein lies the intrinsic connection between digital twin and IoT technology. IoT devices, which include sensors, sense the conditions in, on and around their associated physical real-world objects, and they transmit data about those conditions to some computing system. Using that data, scientists and engineers can define the static and dynamic models that describe the object. As more data is accumulated, the models become more accurate and refined in regard to their fidelity to, and representation of, the real object. If done right, a model encompasses whatever set of characteristic attributes and behavior (static and dynamic aspects) is necessary and sufficient for its intended use. The definition of any digital twin is entirely the decision of its creator, but it should be fit for purpose.
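As a rough illustration of how sensor data informs a twin, the sketch below (with hypothetical sensor and entity names) shows IoT-style readings being ingested into a minimal digital twin object that simply keeps the latest known value for each monitored quantity; a real system would of course retain history and derive richer static and dynamic models from it.

from dataclasses import dataclass
from typing import Dict

@dataclass
class SensorReading:
    sensor_id: str      # e.g. "tire-pressure-front-left" (illustrative)
    quantity: str       # what is being measured, e.g. "pressure_kpa"
    value: float
    timestamp_s: float

class DigitalTwin:
    """Minimal twin: holds the latest known value for each monitored quantity."""
    def __init__(self, entity_id: str):
        self.entity_id = entity_id
        self.state: Dict[str, float] = {}
        self.last_updated: Dict[str, float] = {}

    def ingest(self, reading: SensorReading) -> None:
        # Each reading refines the twin's picture of its real-world counterpart.
        self.state[reading.quantity] = reading.value
        self.last_updated[reading.quantity] = reading.timestamp_s

car_twin = DigitalTwin("vehicle-1234")
car_twin.ingest(SensorReading("tire-pressure-front-left", "pressure_kpa", 221.0, 1_700_000_000.0))
car_twin.ingest(SensorReading("coolant-temp", "temperature_c", 88.5, 1_700_000_000.5))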
7 Usage Scenarios for Digital Twins
A proper treatment of the grand vision for all of the anticipated types of applications for digital twins is beyond the scope of this chapter. Rather, this chapter focuses on a subset of the usage categories that are particularly pertinent with respect to
cybersecurity and dependability. The primary usage categories discussed in this chapter are:
• Monitoring of real-world entities in real-time or near real-time
• Command and control of real-world entities in real-time
Both of these usage categories involve monitoring of real world physical objects. This chapter uses the following definition for monitoring: the scrutinizing or systematic checking of an object with a view to collecting certain specified categories of data [8].
The definition implies that there is an actual, physical real world object to be monitored, and that monitoring consists of periodic checking of the state or status of that object. It follows that monitoring captures or observes the object’s dynamic behavior. Dynamic models capture the changes in an object’s state or condition over time. Notice that the above explanation makes no mention of simulation. In computing, the definition of the term simulation is: the imitation of the operation of a real-world process or system over time [9].
Simulation does not require a real-world physical object, and thus we forgo its discussion here. Instead of involving physical objects, simulation uses static and dynamic virtual, computer-based models to imitate a real object’s behavior. Although simulation is an important application for digital twins, it is not discussed in this chapter because it does not involve IoT or the operational environments that are a necessary part of monitoring and command and control systems. An important point is that simulation environments can be much more easily protected from trust vulnerabilities. This chapter discusses the cybersecurity and dependability issues related to monitoring in digital twin technology environments where the digital twins are associated with physical real-world counterparts. Monitoring and command and control are very important areas for the digital twin and IoT communities and technology domains. Effectively, there is a limitless number of scenarios and applications of digital twins and IoT in which systems will monitor or control real physical objects. And the anticipated ubiquity of digital twin technology strongly suggests that cybersecurity and dependability are vitally important to the goal of ensuring trust in a highly connected, digital world where real-world physical objects are involved.
8 Monitoring of Real World Physical Objects
In order to monitor an object, one must collect information about the physical status, state or condition of the object being monitored. It is precisely this category of usage that creates the high degree of cohesion between the digital twin and IoT
technology domains. The natural, intrinsic relationship between these two technology domains is a result of the adoption of digital twin technology in environments that involve the monitoring of actual physical, real-world objects of arbitrary complexity and sophistication. In a typical scenario, the data gathered from the object being monitored is sensed, collected, marshaled, organized and sent to one or more computing systems, which process and analyze the received data using a digital twin of the object being monitored. When objects are monitored in this manner we say that they are instrumented. The sensors and the software that collect and measure the data about the object are called the instrumentation. Applications might simply record the state of the instrumented object. Additionally, they might notify an observer—either a human user or another computing system—of anomalous conditions. The object being monitored could be a commercial airliner in flight. The monitoring could encompass every parameter that represents the condition of the entire aircraft in flight, or it could be any subset of that comprehensive information. One system could do the monitoring, or several specialized systems working in conjunction could address all desired monitored parameters. The conceptualization, vision, architecture, design, inception and implementation of such systems are varied and related to the desired purpose. Today’s commercial airliner has thousands of sensors monitoring every aspect of the aircraft. For decades, airlines have been using on-board systems that transmit telemetry data regarding a wide array of information indicating the status of their planes in flight—the state of all the aspects monitored via sensors. On-board systems transmit the information to the airline’s ground-based flight operations center. Personnel in these flight operations centers monitor each aircraft to ensure safe operation, scheduling, routing and so forth. In the years to come, digital twins will be built to represent just about every real-world tangible object: nuclear reactors, automobiles, factories, manufacturing plants, aircraft, spacecraft, biological creatures or organs, military troops, military weapons—everything.
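A minimal sketch of such instrumentation-driven monitoring appears below; the parameter names and nominal ranges are invented for illustration. Each telemetry sample is checked against its nominal range, and an observer—here just a callback—is notified of anomalous conditions.

from dataclasses import dataclass
from typing import Callable, Dict, Iterable, Tuple

@dataclass
class Telemetry:
    parameter: str     # e.g. "engine_1_exhaust_gas_temp_c" (illustrative)
    value: float
    timestamp_s: float

# Hypothetical nominal operating ranges for a handful of monitored parameters.
NOMINAL_RANGES: Dict[str, Tuple[float, float]] = {
    "engine_1_exhaust_gas_temp_c": (300.0, 650.0),
    "cabin_pressure_altitude_ft": (0.0, 8000.0),
}

def monitor(stream: Iterable[Telemetry], notify: Callable[[str], None]) -> None:
    """Check each sample and notify an observer (human or system) of anomalies."""
    for sample in stream:
        low, high = NOMINAL_RANGES.get(sample.parameter, (float("-inf"), float("inf")))
        if not (low <= sample.value <= high):
            notify(f"{sample.timestamp_s}: {sample.parameter}={sample.value} "
                   f"outside nominal range [{low}, {high}]")

monitor([Telemetry("engine_1_exhaust_gas_temp_c", 702.0, 1.0)], notify=print)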
9 Presentation Is Not Simulation
The very definition of the term ‘monitoring’ [10] implies a periodic capture of the state or status of the real world physical object. That is, the monitoring is intrinsically dynamic in nature. As part of that monitoring, the system might include a presentation of the state of the target object. That presentation could take the form of a visual display suitable for human observation; in fact, this is the typical case. The presentation could be a sophisticated 2D or 3D graphics display representing one or more visual views of the target object being monitored. Or it could be a sophisticated virtual reality (VR) or augmented reality (AR) display.
The visual view could show a static depiction of the object. Or it could be an animated display, updated in real time and corresponding to the real-time frequency of the capture of information pertaining to the state or condition of the monitored object. While this latter scenario presents a dynamic view—an animation—of the real object, its visual representation is not an example of simulation because the display is not simulating the existence of a physical object; there is a physical object.
10 Command, Control and Communications with Real-World Physical Objects
Command, control and communications involve bidirectional communication and information transfer. The airline flight operations center scenario is a good example of applications that involve command, control and communications. In response to telemetry data received from an aircraft and subsequently analyzed, a ground-based flight operations center might ask the pilots or flight engineers to assess some condition of the aircraft. Or, ground-based systems might query the on-board systems directly for specific information that might be used by the airline mechanics or even the aircraft manufacturer to assess some anomalous condition aboard the aircraft. Additionally, ground-based personnel might send information or commands back up to the pilots instructing them to take action of some kind. It is quite possible that commands will be sent from machine to machine, that is, from ground-based systems directly to on-board systems. Military drones come in both autonomous and piloted versions. The autonomous drones fly by themselves, using on-board systems for navigation and mission execution. The piloted drones are commanded by ground-based human personnel. The communication is bidirectional for piloting, navigation, status, reconnaissance, telemetry, recording of mission performance and so forth. These are just a few simple examples that demonstrate the bidirectional flow of information. The adoption of digital twin technology will proliferate in all such usage scenarios. You will see digital twins in just about every scenario imaginable.
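The following sketch (with invented message fields and a toy decision rule) illustrates the bidirectional pattern: telemetry flows up from the monitored object, and a ground system issues commands back down in response.

from dataclasses import dataclass
from typing import List

@dataclass
class UplinkTelemetry:
    source_id: str
    parameter: str
    value: float

@dataclass
class DownlinkCommand:
    target_id: str
    action: str           # e.g. "request_fuel_status" (hypothetical action name)
    argument: float = 0.0

class GroundStation:
    """Receives telemetry, analyzes it, and issues commands back to the object."""
    def __init__(self) -> None:
        self.outbox: List[DownlinkCommand] = []

    def handle(self, t: UplinkTelemetry) -> None:
        # Toy rule: if reported fuel is low, ask the on-board systems for details.
        if t.parameter == "fuel_remaining_kg" and t.value < 2000.0:
            self.outbox.append(DownlinkCommand(t.source_id, "request_fuel_status"))

station = GroundStation()
station.handle(UplinkTelemetry("flight-417", "fuel_remaining_kg", 1500.0))
print(station.outbox)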
11 Real-Time and Near Real-Time Systems, Digital Twins and IoT
The astute reader will undoubtedly recognize that systems such as those controlling aircraft, automobiles, nuclear reactors, medical equipment and so many others need to be responsive within strict time limits. These systems need to be real-time or near real-time. A real-time computing system is defined as: a system which must respond to externally generated input stimuli within a finite period specified in the system’s performance requirements and design. The correctness of real-time
systems depends upon the timeliness of response as well as the logical correctness of the response to stimuli [11].
Systems designed for control of nuclear reactors, traffic control, aircraft, spacecraft, civilian, commercial and naval seagoing vessels, military weapons systems, medical equipment, amusement park rides, and so forth will all use digital twin technology. The systems that use digital twin technology and control such objects must be real-time. In some engineering circles there is an implicit perception that these systems must have relatively rapid response times. But the view of what is considered to be ‘rapid’ is very subjective. The issue is really that the response behavior depends upon the system requirements which, of course, depend upon the intended use of the system. The response time requirements in and of themselves do not dictate whether a system is or is not real-time or near real-time [11]. The air bag deployment systems in automobiles must fully deploy an airbag in less than 100 milliseconds in order to prevent serious injury to occupants [12]. Anti-lock braking systems must have similarly rapid response times [13]. A global positioning system (GPS) receiver in a cruise missile might have to update every few milliseconds, but the GPS-based navigation system for a yacht probably does not need such low-latency response. Hospital medical equipment might have latency requirements (response time requirements) that fall somewhere in between those of an airbag and those of a yacht navigation system. A heart monitor might need only a one-second response time and not millisecond response time. But the 3D visual display of a robotic surgery system might need single-digit millisecond response time.
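The notion that correctness includes timeliness can be illustrated with a short sketch; the 100-millisecond deadline below is borrowed from the airbag example, and the response logic is a placeholder.

import time

DEADLINE_S = 0.100  # hypothetical requirement: respond within 100 ms

def respond_to_stimulus() -> None:
    # Placeholder for the system's actual response logic.
    time.sleep(0.02)

def handle_event() -> bool:
    """Return True only if the response was produced on time; a real-time system
    treats a late-but-logically-correct response as a failure."""
    start = time.monotonic()
    respond_to_stimulus()
    elapsed = time.monotonic() - start
    on_time = elapsed <= DEADLINE_S
    if not on_time:
        print(f"Deadline miss: {elapsed * 1000:.1f} ms > {DEADLINE_S * 1000:.0f} ms")
    return on_time

print(handle_event())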
12 Deployment Environments and Architectures
The very brief introduction to examples of systems and applications that will use digital twin technology gives the reader some insight into the environments in which digital twins will be found. For systems that do monitoring and command and control, the environments will be varied, covering every industry and application. The objects being monitored, as well as those being commanded and controlled, will have sensors that collect data about the state or condition of the object and send it along to some system that hosts the digital twins. With respect to the target objects, these systems could be anywhere from the same physical proximity to thousands of miles away. Regardless of the application and specific configuration, the target object and its digital twin will be connected both logically and physically. The connection could be a hard-wired connection or a wireless connection. There is a plethora of wireless connection technologies and standards including, but certainly not limited to, 802.11 (commonly referred to as Wi-Fi); satellite; 802.16 (commonly
referred to as WiMax); microwave; laser-based communications channels; and many others. Sensors performing data collection on the monitored objects will need some power source to supply the power required to transmit the collected data to the systems hosting the digital twins (the twins of the monitored objects). Those power sources include hard-wired alternating current (AC) power, direct current (DC) power (most likely provided by some kind of battery), solar power, fuel cells and the like. Industry anticipates the use of virtually every kind of power source imaginable as digital twin technology reaches into every environment, ecosystem and application.
13 Cybersecurity
This chapter uses the terms computer security and security interchangeably to refer to cybersecurity. The United States National Institute of Standards and Technology (NIST) Computer Security Resource Center (CSRC) defines cybersecurity as follows: Cybersecurity is the prevention of damage to, protection of, and restoration of computers, electronic communications systems, electronic communications services, wire communication, and electronic communication, including information contained therein, to ensure its availability, integrity, authentication, confidentiality, and nonrepudiation [14].
The goal of cybersecurity is to ensure that computing systems are able to maintain, and operate with,
• Confidentiality
• Integrity
• Safety
• Reliability
• Availability
The security community defines the term confidentiality to mean “activity, procedures, policies and controls related to keeping communication from being seen by unauthorized agents.” [15]. The term integrity means correctness. A system has integrity if there has been no unauthorized—usually malicious—modification of any part of a system such as its data or logic (software). The term safety refers to the lack of corruption or damage to a computing system itself as well as to the absence of damage to users of a system, or entities or objects controlled by a system, or other systems with which the system interacts. For example, if a system controls a train or railway traffic signal, safety would refer to the integrity and lack of harm to the computing system itself as well as to the safety of the train and its cargo or passengers. The term reliability refers to the probability that a computing system will return correct results. A more formal definition of reliability is: the probability that the
system will perform its intended function under specified working conditions for a specified period of time [16]. The term availability refers to the probability that a system is operational and usable at a given time [17]. Of course all of these aspects are related. Failure to protect systems appropriately creates risk, which is the potential for a negative outcome or harm in any of the above areas. Risk is not only the potential for negative outcomes but a measure thereof. A more formalized definition of risk taken from computing and cybersecurity is: a measure of the extent to which an entity is threatened by a potential circumstance or event, and typically a function of: (i) the adverse impacts that would arise if the circumstance or event occurs; and (ii) the likelihood of occurrence. [Note: Information system-related security risks are those risks that arise from the loss of confidentiality, integrity, or availability of information or information systems and reflect the potential adverse impacts to organizational operations (including mission, functions, image, or reputation), organizational assets, individuals, other organizations… [18].
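The quoted definition treats risk as a function of adverse impact and likelihood of occurrence. The fragment below illustrates that idea with a simple ordinal scoring scheme; the scales and the multiplication are illustrative conventions, not part of the cited definition.

# Illustrative only: score risk as impact x likelihood on 1-5 ordinal scales.
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}

def risk_score(impact: str, likelihood: str) -> int:
    return IMPACT[impact] * LIKELIHOOD[likelihood]

# A spoofed IoT sensor on a safety-critical system: severe impact, possible likelihood.
print(risk_score("severe", "possible"))   # 15 out of a maximum of 25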
Cybersecurity involves many aspects, from requirements and policy to the mechanisms that implement those policies and requirements to protect systems according to the aspects outlined above. This section focuses on highlighting and describing the security vulnerabilities of computing systems and environments, especially those involving digital twin and IoT technology; it does not discuss the mechanisms to protect against, or the remedies for, security vulnerabilities. There are several major categories of cybersecurity threats or attacks:
• Backdoor—a secret method of bypassing normal authentication or security controls.
• Denial-of-service attack—a technique whereby a system experiences an extremely high volume of interaction with external systems or agents with the intention of overloading the system in order to make it unusable or inaccessible to authorized users.
• Direct-access attack—the modification of the system being attacked by the attacker, or the installation of rogue software such as key loggers, malware, covert listening devices and so forth.
• Eavesdropping—surreptitious listening to private computer communication.
• Multi-vector, polymorphic attacks—an attack that morphs into a different kind of attack to avoid detection by intrusion detection systems.
• Phishing—an attempt to acquire sensitive information by deceiving users.
• Privilege escalation—an attacker’s escalation of its privileges, without authorization, to gain access to more restricted or broader areas of the compromised system.
• Reverse engineering—the process by which an object—a computer, hardware or software—is deconstructed to understand its architecture, design or software source code in order to be able to defeat it for malicious purposes.
• Social engineering—a ruse by which malicious actors convince users to disclose secrets such as login name and password, generally by exploiting peoples’ trust.
• Spoofing—a technique by which an unauthorized attacker masquerades as a valid entity through falsification of data, the goal being access to information or resources.
• Tampering—the malicious modification or alteration of data in order to defeat the security measures present in a system.
• Malware—malicious software installed on a computer via unauthorized means with the intention of gaining access to information or otherwise compromising the system, such as by deleting data [19].
The following section discusses cybersecurity in digital twin and IoT environments.
14 Cybersecurity Challenges in Digital Twin and IoT Applications
Digital twin and IoT environments and systems face the same cybersecurity challenges as any other computing system or infrastructure; they must address all of the types of cybersecurity attacks listed above, and they must have a robust cybersecurity policy, framework and implementation. A comprehensive cybersecurity policy and plan must address security in all of the areas in the following list, which spans hardware, software, connectivity and communications:
• Application security
• Network security
• Endpoint security
• Data security
• Identity management
• Database and infrastructure security
• Cloud security
• Mobile security
• Disaster recovery and business continuity
The reality is that digital twin and IoT technology and environments might be more exposed and vulnerable to cybersecurity attacks than typical systems. The reason is that systems involving digital twins and IoT will typically exist in environments with the following characteristics:
• Highly networked and distributed
• Remote control and communications
• Edge devices at the boundary of a protected system environment
• Use of open standards
The term IoT simply refers to an environment in which sensors are part of, or interconnected with, other elements comprising a computing system. An IoT device is one which has connectivity to other computing elements as part of a network of interconnected devices that collectively form a computing system and application. IoT devices typically have a transducer in the form of a sensor or actuator for interacting directly with the physical world, and they have at least one network interface [22]. Communications involve both a physical medium and a set of protocols to organize, control and send the information [23]. The physical media used by IoT environments range from hard wiring using some kind of network ‘cabling’ such as copper or fiber optics, to wireless connectivity using a transmission medium ranging from electromagnetic radio frequency (RF) transmission (a wide spectrum of bands from ultra-low frequency to low frequency to Ultra-Wideband (UWB), microwave, satellite communications and many others) to optical systems such as laser-based systems. The various physical communications mechanisms support a multitude of data communications and control protocol stacks including Ethernet, Wi-Fi (IEEE 802.11), WiMax (IEEE 802.16), Long-Term Evolution (LTE), Bluetooth, Zigbee, and others. The connectivity and protocol mechanisms chosen are a function of the usage requirements and environmental parameters; the overall architecture and design of the system must be appropriate for its intended use [24]. None of these communications technologies have cybersecurity controls, making cybersecurity particularly challenging. Additionally, many of the environments which heavily use IoT have the following characteristics:
• IoT devices physically remote from other parts of the system
• Exposure of IoT devices and objects at the ‘edge’ of the system, outside the normal perimeter of security protection mechanisms
• Extremely high volume and numbers of devices being monitored and controlled
• Extremely high volume of sensors and their instrumentation of objects being monitored
• Centralized object measurement and data collection
• Limited access to objects and their sensors for management [20]
These characteristics make digital twin and IoT environments particularly vulnerable from a cybersecurity perspective. The vulnerability is that the IoT sensors, as well as the physical medium that connects them to other physical tiers (parts) of a distributed computing system, are in physically ‘open’ and exposed environments that are difficult to secure: they are not deployed in protected environments behind firewalls or intrusion detection systems like those in a data center. The IoT devices will send information to some other tier of the system via a network connection of some kind such as those discussed above. The ‘back end’ tiers will do the processing of the data sent by the ‘edge’ components or IoT devices. Typically, the digital twin objects will be in the ‘back end’ of the system or in some centralized computing tier, which will most likely be in a protected, secure environment.
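The characterization of an IoT device quoted above—one or more transducers plus at least one network interface—can be sketched directly in code; the device, the media strings and the crude ‘edge exposure’ test below are illustrative assumptions only.

from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class Transducer:
    kind: str                  # "sensor" or "actuator"
    measurand_or_effect: str   # e.g. "soil_moisture_pct" or "valve_position"

@dataclass
class NetworkInterface:
    medium: str                # e.g. "802.11", "802.15.4", "LTE", "fiber"
    address: str

@dataclass
class IoTDevice:
    """An IoT device per the description above: transducer(s) plus network interface(s)."""
    device_id: str
    transducers: List[Transducer]
    interfaces: List[NetworkInterface]

    def is_edge_exposed(self, protected_media: Optional[Set[str]] = None) -> bool:
        # Crude illustration: a device whose links all use media outside the
        # protected perimeter is treated as exposed at the 'edge'.
        protected = protected_media or set()
        return all(i.medium not in protected for i in self.interfaces)

soil_probe = IoTDevice(
    device_id="irrigation-probe-07",
    transducers=[Transducer("sensor", "soil_moisture_pct")],
    interfaces=[NetworkInterface("802.15.4", "00:17:88:01:aa:07")],
)
print(soil_probe.is_edge_exposed())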
Besides the vulnerabilities created by the physical location of the IoT devices and objects, another challenge is that IoT sensors and devices are mostly low-powered, relatively simple devices. These sensors do not have built-in device cybersecurity capabilities. Additionally, they lack non-technical supporting functions such as management capabilities. Currently, the vast majority of these simple IoT devices do not support automatic software updates, the download to the IoT device of virus profiles, anti-virus software, configuration settings or supply chain information. These shortcomings leave IoT devices very vulnerable to cybersecurity attacks compared to traditional computing devices such as desktops, laptops and tablets, mobile phones or automobiles. And they make deploying or installing cybersecurity protection mechanisms very challenging. Yet another challenge to achieving robust cybersecurity in the DT and IoT domains is the sheer number of devices. Aircraft, automobiles, buildings, streets, bridges, tunnels, dams, airports and so forth have thousands of sensors. With an increase in the number of sensors that have limited built-in cybersecurity, the probability of breaching security controls increases. Each such IoT device is a target for attack; each is exposed and easily accessible to rogue agents or malicious actors. They do not sit behind traditional robust security protection mechanisms. The physical environment poses a significant challenge for DT and IoT applications in that security must be maintained for each IoT device or sensor. Consider the modern battlefield. Soldiers will have sensors on their bodies and on their equipment, and most (if not all) equipment will have an array of sensors that includes GPS transceivers. Armored vehicles, trucks, ships and personnel will all have GPS transceivers that will receive GPS signals and communicate their location back to a command and control center. Even if each soldier wears only a few sensors, the number of soldiers—the objects tracked and represented by the digital twins—could be vast. Recent military conflicts have highlighted the rapid adoption of military and weaponized drones. In a large-scale conflict, think of how many suicide or reusable drones might be in use on a large battlefield: potentially thousands. Each one—in fact, each IoT sensor device on each drone—is a potential target for cybersecurity attack. Any IoT sensor that has an actuator can modify the physical world (the device on which the sensor resides) [31]. Think of the risk of a rogue enemy agent taking control of a drone and redirecting its destination, or simply fooling the enemy by transmitting a false position or a false indication of a successful mission, or using spoofed communications to fool the enemy into sending sensitive information to the compromised drone. Commercial aircraft all have numerous GPS receivers. The United States Federal Aviation Administration (US FAA) has launched its NextGen program to modernize aircraft navigation, which relies heavily on GPS technology [25]. The NextGen system will affect many aspects of aircraft operation including navigation, communications, air traffic management and flow, surface operations, and so forth [26]. The overall NextGen ecosystem will have huge numbers of sensors and IoT devices across a wide environment that includes aircraft, satellites,
ground-based systems, data centers, and control centers. One of the expected outcomes of the NextGen system is that there will be narrower flight path corridors, especially near airports. Imagine two aircraft set on a collision course because compromised components trick the planes and ground-based navigation systems into thinking the aircraft are somewhere they are not. Satellites are IoT devices too, even though their power supplies are more capable and have greater capacity than those of most IoT devices. Naturally, satellites are remote objects relative to the ground-based systems with which they communicate; their communication channels to earth-based systems are particularly vulnerable. Despite sophisticated encryption techniques, there is ample opportunity to probe and attack these communication channels on an ongoing basis because the devices are in an open and accessible physical environment. A rogue agent could disrupt, spoof, or intercept the signals. More nefariously, a rogue agent could compromise the satellites themselves. Imagine the risk of compromising a satellite carrying space-based weapons. A rogue agent compromising a GPS satellite could send false position or time information, change radio channels so that only rogue agents could receive the signal, change the encryption salt or pseudo-random number used for message payload encryption, or compromise the system in other ways. If the satellite itself could not be compromised, the communications channel might be. False downlink information could be sent to any GPS receiver aboard an aircraft, ship, automobile, mobile phone, soldier in the battlefield, tank or drone. And false signals could be sent to satellites, for example, instructing them to change their time code synchronization—affecting every receiving device [27]. At the time of this writing, the United States, the European Union and the Russian Federation each have their own constellation of navigation satellites—their own global navigation satellite systems. One major motivation for the independent systems is to make them more private or secret in order to avoid the cybersecurity attacks that could disable them. The GPS satellite and receiver example is a good one for demonstrating the sheer volume of DT and IoT devices across a large ecosystem. In addition to the sheer number of ‘edge’ devices, the communications channels are broadcast radio frequency communications, and are therefore vulnerable to easy physical access, interception and eavesdropping. In every application where IoT devices are remote, each IoT device and each connection to and from each device is a potential vulnerability point that could be exploited by a rogue agent. The absence of built-in management capabilities on IoT devices, coupled with the extremely large number of remote devices that are easily accessible physically, makes it difficult to periodically update or replace these sensor devices with upgraded versions of cybersecurity protection. The vast majority of IoT devices cannot be accessed, managed or monitored in the same way as conventional IT devices. Upgrades of software, hardware or security must often be done manually. In fact, the update might simply require replacing the device [28]. In some applications, the main computer system that collects and coalesces input from the myriad number and type of sensors might sit in close physical proximity to
the sensor arrays. For example, automobiles have actual on-board computers; clearly they are in close proximity—both physically and logically—to the sensors in the automobile. But the computers in the automobile might communicate with other remote systems over an open, exposed medium. This is the general case, in fact. For example, an on-site computer system at a large dam might be connected to another system geographically far away. Each such node in a system, which could be connected via a collection of networks, is a vulnerability point. The reality is that DT and IoT environments will be used for a large variety of such applications where the need is for a small computing device, like an IoT device, that does not have robust built-in cybersecurity defense mechanisms. Approaching the problem from the other direction, consider a breach of a centralized system that manages the IoT devices. Think of the potential danger to all of the remote IoT devices and the objects they monitor if the centralized processing system is compromised. The compromised central processing system could be made to send out harmful information or commands to the remote objects. Consider the consequences if a group of cruise missiles, weaponized drones, reconnaissance drones, soldiers or tanks were sent to the wrong location, fed bogus information so that they target friendly troops, or misdirected or controlled in some other way. A rogue agent could send a cruise missile to attack friendly forces instead of the enemy, target allied troops, crash a commercial airplane into a mountain in bad weather, force it to land short of the runway in a low-visibility approach, steer an automobile into a wall or into another vehicle, or intentionally send two trains on a collision course. There have already been serious attacks on the electric power grids of nations. Clearly, this is a real threat even in today’s world. Moreover, as the number of sensors and connected devices such as IoT modules proliferates, this vulnerability will only increase [29]. The dramatic proliferation of IoT devices and digital twin technology will increase vulnerability if these environments and ecosystems are not properly secured. Finally, consider the role of open standards. Digital twin technology is intended to develop as an open systems standard. While standards are good, the potential benefits come at a price. Open standards are developed by a worldwide community of experts. The benefit is that experts can find flaws and vulnerabilities, making the standards and their implementations robust, relevant and useful for all. And, because it is an open standard, many people can view, review, analyze, assess and comment on its correctness and effectiveness, which is more likely to produce a robust, secure standard. But the negative is that all details of the standard are available for anyone to examine. This openness is potentially dangerous, as it makes it easier for criminals to find ways to defeat a system built around that technology. Thus the standards development process must adopt an architecture that is resilient to its open nature.
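Before turning to recommendations, it is worth noting that many of the spoofing and tampering scenarios above hinge on unauthenticated messages. One basic, widely used control is to authenticate each message with a keyed hash (HMAC); the sketch below uses Python’s standard library and deliberately omits key management and replay protection, which a real deployment would also need.

import hashlib
import hmac
import json

SHARED_KEY = b"example-key-distributed-out-of-band"   # placeholder; real systems need key management

def sign(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": tag}

def verify(message: dict) -> bool:
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

msg = sign({"device": "drone-42", "lat": 48.71, "lon": 2.20, "t": 1_700_000_000})
print(verify(msg))                       # True for an authentic message
msg["payload"]["lat"] = 10.0             # tampering / spoofed position
print(verify(msg))                       # False: the alteration is detected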
15 Recommendations for Building Robust Cybersecurity in Digital Twin and IoT Environments
DT and IoT technology environments, like any other computer environment or system, must have comprehensive end-to-end cybersecurity controls in place [30]. The particular challenge for DT and IoT environments is the objects being monitored and controlled, the myriad sensors and IoT devices that instrument them, and their digital twin counterparts. A full treatment of how to improve cybersecurity in a DT and IoT environment is beyond the scope of this chapter. Nevertheless, the references provide some starting points from which to embark on a comprehensive journey towards understanding the cybersecurity challenges for DT and IoT ecosystems, and how to protect against those vulnerabilities [31]. In particular, the US National Institute of Standards and Technology (NIST) and the United States Cybersecurity and Infrastructure Security Agency (CISA, which operates US-CERT) publish recommendations on cybersecurity and best practices. Both of these organizations publish a plethora of excellent material to guide any organization on its journey to robust cybersecurity. The reader should note that each situation, system, environment or application has specific requirements, challenges and obstacles to achieving robust cybersecurity defenses. Each system is unique, and its architects and designers must fully understand the intended usage scenarios and vulnerabilities [32].
16 Dependable Systems and Trust
Dependability is a part of systems reliability engineering. Both dependability and cybersecurity are elements of trust, or trust engineering, in computing systems. Trust is a conceptual term that encompasses all of the various kinds of trustworthiness in systems. Dependability is a tangible, concrete term which addresses a specific kind of trust, namely trust in the sense of reliability. Likewise, cybersecurity is a tangible area that also addresses a specific kind of trust, namely trust in the sense of security against malicious attack. Both dependability and cybersecurity are part of trust engineering. The standard engineering definition for dependability is: that property of a computer system such that reliance can justifiably be placed on the service it delivers. The service delivered by a system is its behavior as it is perceptible by its user(s); a user is another system (human or physical) which interacts with the former [33].
A standard definition for system reliability is: the probability that a (capable) system (subsystem or component) will function in a desired manner under specified conditions for a given time [34].
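These definitions can be made concrete with two conventional reliability-engineering formulas; the constant-failure-rate (exponential) model and the MTBF/MTTR availability ratio below are common textbook assumptions rather than anything specific to the chapter’s references.

import math

def reliability(failure_rate_per_hour: float, mission_time_hours: float) -> float:
    """R(t) = exp(-lambda * t): probability of operating without failure for the
    mission duration, assuming a constant failure rate (exponential model)."""
    return math.exp(-failure_rate_per_hour * mission_time_hours)

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: fraction of time the system is operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A sensor with one expected failure per 50,000 hours, on a 1,000-hour deployment:
print(round(reliability(1 / 50_000, 1_000), 4))   # ~0.9802
# A monitoring service with MTBF of 2,000 hours and MTTR of 4 hours:
print(round(availability(2_000, 4), 4))           # ~0.998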
Dependability engineering is a distinct, rich and expansive area of computing, and a general treatment of it is well beyond the scope of this chapter. This chapter discusses only some of the more important aspects of dependability as they relate to the use of digital twin and IoT technology in particular. In the 1980s, the concern for, and realization of, the need for reliable and dependable systems increased dramatically as the use of computer systems for safety-critical applications proliferated [35]. Today, the adoption of digital twin and IoT technologies in systems will require a redoubling of effort to ensure that these systems are trustworthy. Dependability is a rubric for non-functional aspects of systems architecture, design and engineering. The following areas are usually classified as different aspects or properties comprising dependability:
• Availability
• Safety
• Integrity
• Manageability
• Maintainability
This chapter discusses only the relationship between cybersecurity and dependability. Both cybersecurity and dependability have many sub-elements. The reader has already seen the aspects comprising cybersecurity. This section discusses some of the aspects of dependability. While the aspects in the above list are distinct, they are all interrelated, as are cybersecurity and dependability. As an example, consider a system that is not easily maintainable. If an organization’s system administrators or IT technicians do not apply security patches in a timely manner, the reason might be that the system is not easy to patch or that operations has not accommodated a suitable maintenance period during which the system can be taken offline and serviced. A system that is not up to date with its security patches is vulnerable to cybersecurity attacks. What happens if an intrusion detection or intrusion prevention sub-system is not up and running reliably? That is an example of a failure in availability. That failure in particular could result in a cybersecurity breach—an intrusion—by a rogue agent. Such failures in maintenance (resulting from difficulty of maintainability) or availability can again leave a system vulnerable to cybersecurity attacks. There are myriad scenarios one could cite to demonstrate the interrelationship between cybersecurity and dependability. The converse is also true: dependability depends upon cybersecurity. A successful cybersecurity attack could render a system unavailable or make it produce incorrect results. Imagine a cyberattack that successfully takes down a system that monitors or controls a nuclear reactor, a train control system or a locomotive en route, rendering those systems unresponsive or not suitably responsive. The train might overrun a traffic signal as a result of an attack that compromises the locomotive’s dead-man’s switch; or a cyberattack might compromise the train track switching mechanism. The nuclear reactor monitoring system might not detect a dangerous
problem in time. The cyberattacks in these examples rendered the systems undependable, potentially leading to further negative outcomes. Systems that employ digital twin and IoT technologies must treat dependability as a high-priority objective, just like cybersecurity. Dependability, like cybersecurity, is really a measure of the level of confidence in the system’s quality [20]. Quality refers to many things, including the accuracy of a system’s results and the accuracy and precision of the results it reports to users (human users or other systems).
17 Trust Considerations in Architecting Systems Using Digital Twins and IoT
Recall that, for the purposes of this chapter, a digital twin is a digital representation of a specific instance of a tangible real world object. The digital twin exists in a software application, and it presents a virtual representation—a virtual version—of its twin or real world counterpart. The main idea is that the digital twin must represent its real world counterpart accurately and with fidelity. You may interpret this to mean that the user—either a human or another system—should be able to obtain from the digital twin the right kind and the right level of detail of information needed for the function that the digital twin and its associated software system and application perform. Another way of saying this is that the digital twin must accurately reflect the state of its counterpart in a meaningful way and at all times. To be able to do so, the DT must be able to acquire the monitoring information from the myriad of sensors that gather the data on the state of the real world physical twin. You can say that the user of the system can trust (in the sense of ‘fidelity’) the system if it accurately represents the real-world physical objects at all times and with the right amount of data. This is the challenge for system architects: to architect and design an application, its digital twin definitions, and its use of IoT devices so that the entire system accurately and faithfully represents the real-world objects being monitored or managed in a manner suitable for the need and intended use of the system. Failure to achieve this fidelity means that the system is not trustworthy. To accomplish that fidelity, the architect of an application or solution must consider the following:
• Do the digital twins that exist in software represent their real world counterparts with the right level of detail and comprehensiveness at all times?
• Do the digital twins encompass all types of data needed to support a meaningful representation of the real-world object in regard to the application?
• Are the digital twins functional equivalents to their real world counterparts?
• Is replication of a digital twin in software a semantically logical operation? Is there a physical counterpart to the digital twin replica?
• How can one verify the accuracy and trustworthiness of a digital twin?
Various considerations follow for the architect or designer of applications who plans to use digital twins and operate in an IoT environment.
Creation of Digital Twin Instances in Software
When a software application executes, it creates digital twin data structures or objects as part of its operation. In the context of applications that monitor or control real, tangible objects, one must consider the life cycle of the digital twins and the real objects they will monitor or control. Several scenarios are possible:
• The main software creates the DT before the physical entity exists or before the DT acquires information about the physical object from its sensors.
• The physical object exists before the DT; the software application instantiates a DT to correspond to the real world physical object after it acquires information about the object.
• The physical object information is created at the same time as its corresponding DT.
The system architect must ensure that the digital twin has fidelity to the physical object regardless of which of the above temporal orderings is used. One must think about how the digital twin will be defined and updated. What happens if the state or condition of the physical object changes in each of the above scenarios? How will the digital twin be updated to maintain fidelity to the state of the physical object?
Temporal Considerations
The temporal considerations of DTs and their physical counterparts are closely related to the creation scenarios described above. Physical objects change over time [20]. Mechanical things age. A bridge or an airplane wing fatigues over time. Metal rusts. Electronic sensors wear out. Parts fail. Physical objects will experience decay over time. The designers of every system must think about how its digital twin objects will maintain fidelity to their physical counterparts, especially in light of the reality of decay and deterioration. How will the changes be sensed or detected? Are the IoT sensors and devices equipped to detect the temporal changes? Are they the right kind of sensor? Can the IoT device communicate those changes to the DT, or can the DT query the IoT device to glean information on changes resulting from decay? What is the application model for maintaining accuracy and fidelity of the digital twin to its physical counterpart? And how does the application record checkpoints—timestamps of snapshots of the state—to communicate when the states of the two objects were synchronized? The ability to ensure synchronization is imperative when digital twins have real-world physical counterparts as their twins.
Accuracy and Functional Equivalence
One of the main visions for applications that use digital twin technology is simulation. Simulation, by definition, describes the scenario in which a software application simulates the dynamic behavior of something—how something operates or behaves. In engineering parlance, this description of dynamic behavior is called functional behavior. As previously
defined, simulation does not involve a physical real-world counterpart to the digital twin; the digital twin only describes an abstract entity. Nevertheless, as the reader already knows, in other popular usage scenarios a digital twin can describe a physical object, and an arbitrary physical object can have dynamic behavior. The object’s digital twin can certainly include a dynamic model of its physical twin. The correct terminology is that the digital twin presents the dynamic behavior of its physical twin, presumably from data acquired in real-time as a result of monitoring the physical object. In both cases, namely the simulation of abstract entities and the presentation of real-world, physical objects, the digital twin must have functional equivalence to its twin. Functional equivalence is the notion that the executable specification of the digital twin must match exactly that of its physical twin. In the same way that any digital twin’s representation of the static condition of its physical twin must meet a suitable threshold of accuracy, so must the digital twin’s functional equivalence meet a suitable minimum threshold of accuracy. That minimum threshold must be specified accordingly by the engineers and designers of both the physical object and its digital twin. In order to achieve functional equivalence, first and foremost the specification and design of a digital twin must anticipate what is needed to make it capable of functional equivalence for the physical object being represented. Secondly, one needs to construct the digital twin appropriately to achieve functional equivalence. Finally, functional equivalence must be maintained for the full life cycle of both the digital twin and the physical object. There are many reasons why there is a risk of compromising functional equivalence. The fundamental reality is that no two physical objects are physically identical. Each application must determine and define what ‘identical’ means for its purposes. For physical attributes and characteristics, there are tolerances for each metric [21]. For example, what difference in length—what tolerance—determines when two physical objects must be considered to be of the same, or different, lengths? There is a virtually unlimited number of aspects to determining sameness, and each has an associated metric and tolerance. The definition of sameness should be tied to the specific application and intended use. These issues relate to the initial construction or manufacturing of the physical objects. Depending on the digital twin application, each physical object might have its own digital twin object. The software application might compare two physical objects via comparison of their digital twins: do the values of the parameters defined in the digital twins show equivalence or sameness? Or do they indicate that the objects are different? What are the metrics—the algorithms or logic—used to determine whether two digital twins are the same or different? Are they the same weight; are they the same diameter; do they behave the same (within tolerance) when their operation is simulated? And do two digital twins deemed to be the same reflect that their two corresponding physical counterparts are the same? Functional equivalence is relevant throughout the entire life cycle of the physical object [7, 20]. External forces through regular use are the main causes of decay and
deterioration. Natural forces on a dam, including pressure from the huge reservoir of water, can cause a dam’s structure to fatigue. Stress caused by forces on an airplane wing causes structural fatigue. Even normal use causes wear and tear on the tires and brakes of automobiles and trucks, or on the steel wheels and brakes of rail cars. The relevant aspects of a physical object need to be tested and measured on a periodic basis. Engineers must periodically test the integrity of a dam or a bridge. The United States Federal Aviation Administration (US FAA) has a very robust airworthiness program for certification, inspection, maintenance, repair and reporting of all aircraft types. Automobiles must have their brakes checked and repaired per the manufacturer’s specification. In sum, the vision is that the digital twin will obtain and process the information about the actual state and condition of the physical object—its twin. In order for the digital twin to be effective, users must be able to trust that it accurately represents the functional state and condition of the physical object per its tolerance specifications. If not, then the digital twin is not trustworthy.
Instrumentation and Monitoring
When dealing with actual, physical, real-world objects, the only way to associate a physical object with its digital twin is through instrumentation. Instrumentation refers to the mechanisms that enable the sensing (collecting), measuring, analyzing and reporting of information about an object—any physical object. In fact, instrumentation is the affinity that binds digital twin technology and applications with IoT technology. IoT technology provides the myriad types and levels of sophistication of sensors and probes that supply information to the larger ecosystem of computing systems that enable all of the applications envisioned for digital twin technology. However, there are real-world challenges to instrumenting physical objects. Understanding where to place probes comes from experience with real-world physical objects [20]. This reality underscores the importance of building digital twin applications that focus on monitoring of real-world entities, whether for real-time applications or simply for laboratory modeling and simulation. The overarching notion is that the information identified as being important for collection should reflect the intended use of the DT application, especially the level of detail and breadth of information needed to construct a meaningful DT that adequately represents its physical twin. DT applications that will be used for monitoring safety-critical objects are the purview of safety-critical systems engineering [20].
Heterogeneity of Standards
The digital twin ecosystem is expanding rapidly. On the surface this momentum in a new technology arena is encouraging, but it belies some challenges. Commercial and industry forces are undermining the primary vision and hope of digital twin technology; namely, standardization and seamless interoperability of models and software systems and applications across vertical markets, industry silos and academia.
Heterogeneity of Standards
The digital twin ecosystem is expanding rapidly. On the surface this momentum in a new technology arena is encouraging, but it conceals some challenges. Commercial and industry forces are undermining the primary vision and hope of digital twin technology; namely, standardization and seamless interoperability of models and software systems and applications across vertical markets, industry silos and academia. Today there is no clear definition of a digital twin. As commercial enterprises rush headlong to build software applications that claim to be “digital twin compliant,” they end up building proprietary solutions by virtue of the fact that there is no standard, or even a clear definition, for digital twins. Digital twins offer nothing new beyond the promise of standards and interoperability across applications, ecosystems and vertical markets. That is, as previously mentioned, modeling and simulation have existed since the first generation of computer software. There have been advances in technology including high-powered microprocessors, sophisticated 3D graphics, VR, AR and so forth. But these technological advances only facilitate more advanced applications; they do not cultivate standardization or interoperability. Without standards, digital twin technology will cause more harm than good despite advances in related technology areas such as sensors, networking, virtual reality, augmented reality, modeling and simulation, and raw computing power. Amidst all this chaos, one must carefully examine the meaning and reality of these claims of “digital twin compliance” in light of the absence of any standard, or even a formal definition, of the term ‘digital twin’ at this time. Even as standards materialize, there will quite possibly be more than one standard, as is frequently the case in a technology area. In that case, anyone thinking about embracing digital twin technology or experimenting with trials, prototypes or third-party commercial software packages purporting to be digital twin compliant must examine interoperability. Vendors might offer predefined digital twin models and definitions of certain objects: an automobile, a house or a smart city. One vendor might sell a model for an automobile engine in the form of a digital definition. But the user must be ready to assess compatibility between that definition—that component—and the digital application that will use it. And one must assess whether that vendor’s digital twin model is suitable for the intended use and whether it is adequately accurate for the particular physical automobiles that will be modeled, monitored, simulated or controlled by the application. Complicating the landscape even more is the notion of component definitions. A digital twin of any object—an automobile engine, a dam, an airplane—might be a composition of digital twin components from different vendors or applications or modules [20]. One vendor might sell you a digital twin representing the automobile’s engine block while others describe, respectively, the navigation system, anti-lock braking system or air bag system. One must carefully evaluate the compatibility and interoperability of such components. A failure in interoperability or compatibility would make the system untrustworthy; perhaps even unusable. Standards are the key to ensuring this level of interoperability. Will standards be truly compatible? Will they be able to interface with one another? Will they be truly interoperable? The most important aspiration underlying the genesis of digital twins is the goal of true interoperability across domains, vertical markets and applications. Anyone considering building a strategy around digital twin and IoT technology must be aware of the risk posed by today’s lack of standards.
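The interoperability concern above can be sketched in code. The descriptor fields, unit names, and signal list below are hypothetical; no digital twin standard defines them. The point is only that an application integrating vendor-supplied twin components needs an explicit, checkable contract before composing them.

```python
# Hypothetical interface contract that an application might require a
# vendor-supplied digital twin component to satisfy before integration.
REQUIRED_SIGNALS = {
    "engine_rpm": "1/min",
    "coolant_temp": "degC",
    "oil_pressure": "kPa",
}

def compatible(vendor_descriptor: dict) -> list[str]:
    """Return a list of incompatibilities between what the application
    expects and what the vendor component actually exposes."""
    problems = []
    exposed = vendor_descriptor.get("signals", {})
    for name, unit in REQUIRED_SIGNALS.items():
        if name not in exposed:
            problems.append(f"missing signal: {name}")
        elif exposed[name] != unit:
            problems.append(f"unit mismatch for {name}: {exposed[name]} != {unit}")
    return problems

vendor_engine_twin = {
    "vendor": "ExampleCo",
    "signals": {"engine_rpm": "1/min", "coolant_temp": "degF"},  # degF, not degC
}
print(compatible(vendor_engine_twin))
# ['unit mismatch for coolant_temp: degF != degC', 'missing signal: oil_pressure']
```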
Certification
The problems and challenges regarding standards, compatibility and interoperability can be mitigated via certification [20]. A competent organization of experts would be well suited to determining interoperability, fidelity to a specification, and the accuracy supported or reflected by a digital twin module or component, or even an entire turnkey system. In the absence of such certification, each organization would have to do extensive testing itself in order to determine the dependability or trustworthiness of a system. Think about the complexity and difficulty of achieving and verifying trust. Each independent testing effort would be useless without the ability to confirm that each test environment was equivalently accurate, compatible and certified. Who would have the knowledge, skills and tools to achieve that result? What test environment and tools would they use? Who would certify the results? A modern commercial aircraft is assembled from components that are designed and manufactured by thousands of third-party companies. One vision for digital twins is that architects, engineers, designers and the like would verify the design specifications, modeling, engineering analysis, simulation, component compatibility, integration, implementation, testing, assembly and all other engineering activities using digital twin models for modeling and simulation prior to building any parts. What level of trust would be necessary to ensure that a safe product could be built in such an ecosystem? This is a critically important consideration as this scenario describes the future vision for digital twin technology. There are two components to certification:
• Certification of the process used to develop a digital twin.
• Certification of the final artifacts produced by the testing process.
With respect to digital twins, the aforementioned would be germane in determining how the digital twin was tested. Additionally, it would also relate to certifying the accuracy of the digital twin. And, as discussed above, digital twin accuracy could mean:
• The accuracy of the digital twin definition in regard to the attributes and functional behavior of the class of real-world physical objects it represents
• The accuracy of a specific digital twin instance with respect to its actual physical twin—the condition, state and functional equivalence of one specific physical object.
Trust involves a clear understanding of all of these aspects. Ultimately, trust—whether something can be trusted or not—will be determined by the end user. Some of the examples offered in this section clearly convey the importance of trust in systems that involve physical objects that will interface with or be used by or affect real people. This universe of applications involving human users is precisely the reason that this chapter focuses on the relationship between digital twins and IoT. The synergy between these two areas is imperative for systems involving physical objects that will affect humans or even other aspects of our world.
Error Propagation
One significant problem is error propagation. Consider the composition of digital twins discussed previously. An application might use digital twins from various vendors, or the digital twins might have been tested and certified by different certifying authority organizations. Digital twin C, which uses digital twin definitions A and B, must trust that A and B accurately represent their associated real-world objects. If not, digital twin C incurs errors. Now if digital twin C is used by digital twin D, it will propagate or cascade the errors to D, and so on [20]. It doesn’t matter how the digital twin loses accuracy. A digital twin might inaccurately represent its physical counterpart if that object’s sensors or probes are not properly calibrated, if their physical placement is off, if they do not properly measure decay as discussed previously, or if there are cybersecurity attacks that successfully and maliciously change and misrepresent the state of the object. The errors would still cascade through the system. At each juncture, additional inaccuracies would amplify the degree of the errors.
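The cascading-error concern above lends itself to a small numerical sketch. The composition structure and the error figures are invented; the sketch simply shows how inaccuracies in component twins A and B compound in the twins (C, then D) that consume them.

```python
# Each component digital twin carries a relative error bound (fraction) on
# the quantity it reports. When one twin consumes another's output, the
# errors compound rather than cancel.
def composed_error(*component_errors: float) -> float:
    """Worst-case relative error of a twin built from other twins,
    assuming errors multiply through (1 + e) factors."""
    bound = 1.0
    for e in component_errors:
        bound *= (1.0 + e)
    return bound - 1.0

err_a = 0.02                                  # twin A: 2% error vs. its physical object
err_b = 0.03                                  # twin B: 3% error
err_c = composed_error(err_a, err_b)          # twin C uses A and B
err_d = composed_error(err_c, 0.01)           # twin D uses C plus its own 1% error

print(f"C: {err_c:.4f}, D: {err_d:.4f}")
# C: 0.0506, D: 0.0611 -- each level of composition amplifies the inaccuracy
```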
Counterfeiting
A cybersecurity attack could be any of a number of attacks (discussed earlier in this chapter) that could render the state of either the physical object or its digital twin inaccurate. A successful attack would remove the digital twin’s fidelity to the physical object. Another approach is to create counterfeit digital twins and pretend that they are the legitimate ones in a system. Counterfeit instances could change state as desired and produce inaccurate results. Or they might simply not have the level of accuracy or precision required for the application.
Testing and Determination of Accuracy
Making a system dependable is very different from determining that a system is, in fact, dependable. For all of the aspects discussed above, the system architect creating specifications, architectures, designs or solutions of digital twin applications and technology must ensure that there are robust mechanisms, methods, or processes to determine the accuracy of the system. Digital twins complicate testing. Firstly, one must ensure that sensors accurately and meaningfully measure the quantities needed for the application. One must test the physical object to determine whether sensors and probes are accurately reporting the state or condition of the object. Additionally, one must test the accuracy of the digital twin. Accuracy includes measuring the fidelity of the digital twin model to the nature of the physical object. But it also involves testing to verify that a particular digital twin instance accurately reflects the state of the unique physical instance that is its counterpart; this involves considering the accurate measurement of decay of the physical object. As this technology area develops, there will be much flux in the effort to specify digital twin definitions: what comprises an accurate, or realistic, or comprehensive digital twin definition. What will be the standard or standards? Will there be different versions of digital twins: some less detailed than others for applications that don’t require the sophistication of the most comprehensive model? For example, will there be multiple variants of digital twins comprising multiple echelons of breadth and detail of a smart city, or an automobile, ship, commercial airliner or the power plant of a commercial jet liner?
18 Conclusions
Digital twins will be used for modeling, simulation, monitoring and command and control applications. The overarching idea behind digital twin technology is that a digital twin represents, and is associated with, a real world counterpart—its twin. This chapter focused on and examined the cases where the real-world counterpart was a physical object. Moreover, this chapter focused on monitoring and command and control usage categories. Any digital twin must represent its physical twin with the level of fidelity and accuracy required by the application at hand. That is, it must be fit for purpose. A system whose purpose is to represent a real world object fails if it cannot convey accurately to the user—either a human or another system—the true state of the real world object(s) it monitors or controls. A system that cannot do that reliably cannot be considered trustworthy. Failure can result from inconsistencies in dependability or from breaches in cybersecurity—either of which results in systems that are not trustworthy. Failures in dependability or security affect each other. Lack of dependability and security leads to risk. Risk is the probability of an unfavorable outcome occurring. One aspect of risk is the calculation of that probability vis-à-vis an unfavorable or deleterious outcome. Reliability engineering, the calculation and treatment of the probability of system failure, is one element of risk. Digital twin and IoT technology will be pervasive within a matter of a few years. Already our world is filled with electronics, sensors, microprocessors, communications, networks and computing systems that touch every aspect of our lives. As these systems and applications proliferate, it will become even more important to ensure their security against malicious attack. It will also become imperative that these systems are dependable and trustworthy. As these systems interact more and more with human activity, failure to make them secure and trustworthy will lead to greater risk and harm to humans and other animals. Anyone thinking of using digital twin and IoT technology must consider cybersecurity and trust from the outset of their efforts to develop an aspirational vision, business model, business plan, or program that intends to use digital twin and IoT technology.
References 1. Chen, L., Ali Babar, M., & Nuseibeh, B. (2013). Characterizing architecturally significant requirements. IEEE Software, 30(2), 38–45. https://doi.org/10.1109/MS.2012.174 2. Tarvainen, P. (2008). Adaptability evaluation at software architecture level. The Open Software Engineering Journal, 2, 1–30. https://doi.org/10.2174/1874107X00802010001 3. Schmidt, D., Stal, M., Rohnert, H., & Buschmann, F. (2000). Pattern-oriented software architecture (Vol. 2). Wiley.
4. Buschmann, F., Meunier, R., Rohnert, H., Sommerlad, P., & Stal, M. (1996). Pattern-oriented software architecture: A system of patterns (Vol. 1). Wiley. 5. Piroumian, V. (2021). Digital twins: Universal interoperability for the digital age. IEEE Computer, 54(1), 61–69. 6. The Reader’s Digest Great Encyclopedic Dictionary. (1966). The Reader’s Digest Association. 7. Minerva, R., Lee, G. M., & Crespi, N. (2020). Digital twin in the IoT context: A survey on technical features, scenarios and architectural models. Proceedings of the IEEE, 108(10). 8. The American Heritage Dictionary. (1982). Houghton Mifflin Company. 9. Wikipedia. Simulation. https://en.wikipedia.org/wiki/Simulation. Accessed 2021. 10. Webster’s Seventh New Collegiate Dictionary. (1967). G. & C. Merriam Company, Publishers. 11. Furht, B., Grostick, D., Gluch, D., Rabbat, G., Parker, J., & McRoberts, M. (1991). Introduction to real-time computing. In Real-time UNIX® systems. The Kluwer international series in engineering and computer science (real-time systems) (Vol. 121). Springer. https://doi.org/10.1007/978-1-4615-3978-0_1 12. Radu, A., Cofaru, C., Tolea, B., & Dima, D. (2018). Study regarding the influence of airbag deployment time on the occupant injury level during a frontal vehicle collision. MATEC Web of Conferences, 184, 01007. https://doi.org/10.1051/matecconf/201818401007 13. Guo, J., Jian, X., & Lin, G. (2014). Performance evaluation of an anti-lock braking system for electric vehicles with a fuzzy sliding mode controller. Energies, 7, 6459–6476. https://doi.org/10.3390/en7106459 14. National Institute of Standards and Technology (NIST). Risk management framework for information systems and organizations. NIST SP 800-37, Revision 2. https://doi.org/10.6028/NIST.SP.800-37r2. Accessed 2021. 15. Kaufman, C., Perlman, R., & Speciner, M. (2002). Network security: Private communication in a public world. Prentice-Hall. 16. Xie, M., Kim-Leng, P., & Dai, Y.-S. (2004). Computing system reliability, models and analysis. Springer. 17. Wikipedia article. Reliability, availability and serviceability. https://en.wikipedia.org/wiki/Reliability,_availability_and_serviceability. Accessed 2021. 18. Ross, R. S., Pillitteri, V. Y., Graubart, R., Bodeau, D., & McQuaid, R. Developing cyber resilient systems: A systems security engineering approach. NIST Special Publication 800-160, Vol. 2. https://doi.org/10.6028/NIST.SP.800-160v2 19. Wikipedia. Computer security. https://en.wikipedia.org/wiki/Computer_security. Accessed 2021. 20. Voas, J., Mell, P., & Piroumian, V. Considerations for digital twin technology and emerging standards. NISTIR 8356. https://doi.org/10.6028/NIST.IR.8356-draft 21. Voas, J., Kuhn, R., Laplante, P., & Applebaum, S. (2018). Internet of things (IoT) trust concerns. National Institute of Standards and Technology (NIST). 22. Fagan, M., Megas, K. N., Scarfone, K., & Smith, M. National Institute of Standards and Technology (NIST). Foundational cybersecurity activities for IoT device manufacturers. https://doi.org/10.6028/NIST.IR.8259 23. Fagan, M., Megas, K. N., Scarfone, K., & Smith, M. National Institute of Standards and Technology (NIST). IoT device cybersecurity capability core baseline. https://doi.org/10.6028/NIST.IR.8259A 24. Padlipsky, M. A. (1985). The elements of networking style. Prentice-Hall. 25. US Department of Transportation Federal Aviation Administration. (2016). Performance based navigation, navigation strategy 2016. https://www.faa.gov/nextgen/media/PBN_NAS_NAV.pdf 26. Dillingham, G. L. (2007). Next generation air transportation system. United States Government Accountability Office, GAO-07-784-T. 27. Hurn, J. (1989). GPS, a guide to the next utility. Trimble Navigation. 28. Paulson, C., & Toth, P. (2021). Small business information security: The fundamentals. NISTIR 7621, Revision 1. https://doi.org/10.6028/NIST.IR.7621r1. Accessed 2021.
29. Koppel, T. (2016). Lights out: A cyberattack, a nation unprepared, surviving the aftermath. Crown Publishing Group. ISBN-10: 0553419986, ISBN-13: 978-0553419986. 30. Federal Aviation Administration. NextGen Implementation Plan, 2018–2019. https://www.faa.gov/nextgen/media/NextGen_Implementation_Plan-2018-19.pdf 31. Boeckl, K., Fagan, M., Fisher, W., Lefkovitz, N., Megas, K. N., Nadeau, E., Gabel O’Rourke, D., Piccarreta, B., & Scarfone, K. Considerations for managing internet of things (IoT) cybersecurity and privacy risks. NISTIR 8228. https://doi.org/10.6028/NIST.IR.8228 32. Crume, J. (2000). Inside internet security. Addison-Wesley. Pearson Education Unlimited. 33. Laprie, J.-C. Dependable computing: Concepts, limits, challenges (pp. 42–54). LAAS-CNRS. 25th International Symposium on Fault-Tolerant Computing. Special Issue. 34. Churchley, A. R. (1984). Safety Availability and Reliability Assessments (SARA) Ltd. 17(6), 223–226. https://doi.org/10.1177/002029408401700602 35. Dewsbury, G., Sommerville, I., Clarke, K., & Rouncefield, M. (2003). A dependability model for domestic systems. In S. Anderson, M. Felici, & B. Littlewood (Eds.), Computer safety, reliability, and security (SAFECOMP 2003. Lecture notes in computer science) (Vol. 2788). Springer. https://doi.org/10.1007/978-3-540-39878-3_9
Vartan Piroumian is a certified enterprise architect and global technology consultant with formal training in computer science and electrical engineering. He came up through the ranks, starting as a software engineer, and presently consults to C-level and executive leadership; his expertise encompasses the full gamut of enterprise architecture and software technologies. He particularly excels in helping his clients achieve enterprise-scale strategic and tactical business value through the intelligent adoption and use of technology. Mr. Piroumian is recognized as a world expert in digital twin technology. The executive co-chair of the Institute of Electrical and Electronics Engineers (IEEE) 2nd Annual International Conference on Digital Twins and Parallel Intelligence, he has served on the conference steering committee and advisory board since 2021. He has penned several articles on various aspects related to digital twins, in addition to having recently been chosen to edit a new magazine column dedicated to that subject. He is also a published author, having independently written two books on Java programming. Mr. Piroumian is a popular invited speaker and presenter at business and technology conferences, where he delivers presentations on strategy and technology futures—including digital twins, reliability engineering, IoT technology, and enterprise architecture.
Infrastructure for Digital Twins: Data, Communications, Computing, and Storage Flavio Bonomi and Adam T. Drobot
Abstract The successful use and adoption of Digital Twins hinges on a general infrastructure comprised of at least four technology areas. The converged Networks for Data, Digital Storage, Computing, and Communications form the necessary fabric to host and operate Digital Twins. This combination promises to deliver both the functionality and intrinsic attributes that make good on the promises of Digital Transformation. They are what makes it possible to conquer and tame the complexity of the barriers to successful management of the lifecycles for manufacturing, products, services, and processes. There are two concepts that are important here: the first is the decoupling between the infrastructure itself and the applications (e.g., Digital Twins) that run over it; and the second is infrastructure resources that are software defined, distributed, composable, and networked, to fit a large range of applications. In this Chapter we motivate the characteristics of the end-to-end data, storage, computing, and communications fabric which will ideally host Digital Twin models as they are built, deployed, operated, and continually refined. We further address the converged infrastructure that connects the physical endpoints, involving both humans and machines, to the digital space existing in Clouds, Edges and High-Performance Computing Centers. Specifically, we focus on the role of the critical boundary between physical systems and cyber space, and between Operational Technologies (OT) and Information Technologies (IT), where a challenging cultural and technological transition needs to fully unfold. The last point is illustrated by examining the application of Digital Twins in two critical domains, modern manufacturing and automotive.
Keywords Cloud computing · Edge computing · Embedded computing · Infrastructure · Data management · Networking · Digital twins · Software-defined · Information technologies · Operational technologies · Virtualization · Time-sensitive networks · 5G · Embedded systems
F. Bonomi (*) Onda, LLC, Palo Alto, CA, USA
A. T. Drobot OpenTechWorks Inc., Wayne, PA, USA
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_15
1 Introduction
Digital Twins, no matter how they are developed, deployed, and operated, have a number of common characteristics, illustrated in Fig. 1. This commonality is true whether we look at a simple stand-alone twin for a single entity or function, or a complex twin applied to a system of systems. A way of identifying these core features is to start with a simple set of questions that identify the purpose and expected value we anticipate from a Digital Twin. The answers determine the context in which the Digital Twin will be used. They further relate the representation, digital models, and dynamics that are captured by the Digital Twin to the entities it represents, the environment in which the entity and the Digital Twin operate, and the capabilities that the Digital Twin must have to deliver value. A larger set of questions and answers addresses the mindset and structure of the organization that manages Digital Twins and the corresponding entities. The organizational human factors are just as important. This includes how the organization relates to its own employees, the ecosystem in which it operates, the expectations of external stakeholders, and finally the relationships with its customers. The emergence of Digital Twins profoundly changes the business landscape of what is possible. That change is driven by advances in technology, the recent buildout of new powerful infrastructures, successful experimentation with new business models, and the ability to solve hard business problems that were previously beyond reach.
1.1 A High-Level View of Digital Twins A closer look at the composition of a Digital Twin and how it is operated exposes the important technologies and infrastructure necessary. In practice there may be up to 40 or 50 discrete technologies in action. In this Chapter, we concentrate on the
Fig. 1 The common features of a Digital Twin showing the interaction between the Digital Twin, the entity it represents, the environment it operates in, and some of the key technical resource and organizational issues
Fig. 2 Some of the important aspects of the composition of a Digital Twin. They expose the need for four building block infrastructure technologies that are addressed here. These include: (1) Data Storage and Curation; (2) Computing; (3) Communications; and (4) Software Platforms and Frameworks. Each aspect implicitly includes the roles and mindsets of human operators, organizations and institutions important to the ecosystem in which the Digital Twin exists, and the central role of end users
core disciplines and capabilities that are almost always an integral part of a Digital Twin application. While not a focus of what we will be describing, we want to acknowledge that an overarching consideration is a deep intuition and experience in the application domain and specialized technologies that are particular to that domain. We will first address the key issues at a high and abstract level and, later in the Chapter, zero in on specific and illustrative examples. In Fig. 2 we show some of the important building block considerations for Digital Twins. In the discussion that follows we explain how each of these uses and prioritizes aspects of Data, Digital Storage, Computing, Communications, and Human Factors. 1.1.1 Data and Data Feeds The starting point for a Digital Twin and the eventual building block for value is the identification, collection, processing, manipulation, maintenance, and use of data needed for the Digital Twin. This emphasizes the technologies for data science, data capture, storage, as well as data management, processing, access, retrieval, and curation. It often means that a solution will require the full use of the hierarchy of storage technologies across timescales and spatial distribution, with implications for the computing and communications infrastructure. Operational considerations may further require that the Data performance and attribute characteristics must be commensurate with the use case. Both computing and communications technologies are very much part of “Data” solutions and determine the architectural tradeoffs of where and when data is processed, transmitted, and consumed. What is important here, as shown in Fig. 3, is that the effort and activities around Data, from its collection to its movement, typically consumes a large share of attention and resources. The consequence is that Data must be thought of very much differently than in the past. It used to be good engineering practice to have each project within an enterprise develop and look after its own data with limited
Fig. 3 Illustrates the steps needed for applying AI techniques for modeling and analytics in a typical Digital Twin application. In constructing and deploying Digital Twins, it is not unusual for activities related to Data to consume the lion’s share of the resources and time, often accounting for as much as 80%
attention to how the data was managed and shared across the enterprise. The premium in the Digital Twin world is the reuse of data and the building of data catalogs. To extract the greatest value, Data, once captured, should be managed and shared to support as many applications as possible. Connecting the dots and extracting meaning from the Data is what eventually allows a modern enterprise to deal with the inherent hard complexities of running a business and supporting products and services. An important consideration is the organizational structure and the attitudes of management. What was considered good engineering practice in the past, silos of data created by divide and conquer decomposition of concerns, becomes an impediment. In many organizations this has been institutionalized by budgeting practices where each function in the decomposition is responsible for its own infrastructure, to minimize the tangles and interfaces with other functions. For Digital Twins to achieve full potential we would argue that it is crucial to consider Data and the Data Network as an Enterprise-wide, or even Ecosystem-wide, infrastructure, centrally funded, but with the mechanisms for direct involvement of specialists in functional areas. This implies universally controlled data access where needed, with the responsibility for curation by experts who understand what the data means.
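As a rough illustration of the data catalog idea above, the sketch below shows the kind of metadata record that makes a data set discoverable and reusable across applications. The field names and values are invented for the example; real catalogs carry far richer lineage and access-control information.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """Minimal metadata record for one data set shared across DT applications."""
    name: str
    owner: str                      # curating expert responsible for meaning
    source: str                     # where the data comes from
    update_cadence: str
    access_policy: str
    consumers: list[str] = field(default_factory=list)

catalog = [
    CatalogEntry(
        name="vehicle_fleet_brake_wear",
        owner="brake-systems-engineering",
        source="on-vehicle wear sensors, per-trip upload",
        update_cadence="daily",
        access_policy="enterprise-internal",
        consumers=["predictive-maintenance-dt", "warranty-analytics"],
    ),
]

# Reuse check: which cataloged data sets already serve a new application's need?
need = "brake"
print([e.name for e in catalog if need in e.name])  # ['vehicle_fleet_brake_wear']
```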
1.1.2 Representation
Digital Twins can be applied to a wide range of applications that include manufacturing, products, services, processes, and even purely abstract entities. Different applications require an emphasis on a range of methods and techniques to represent the Digital Twin. The representations expose the state of the Digital Twin. They also expose how that state relates to its counterpart in the physical world as well as the processes and dynamics that will affect future states or conditions. This includes the representation needed for the models and simulations incorporated in the Digital Twin. For Digital Twins where the operational environment has significant impact, an appropriate representation of the environment and of changing environmental conditions is required. Under ideal conditions Digital Twins may operate autonomously but it is still important to consider the interaction with human operators and the appropriate displays/visualizations that reveal the state of the Digital Twin – that is, a representation for human consumption. The representations can vary significantly. They can be highly visual portrayals of the Digital Twin that capture the geometric description of parts and components and their relationship to each other. This is important in both manufacturing and products. They can be logical representations of procedures and workflows that are important for services and processes. Whatever the techniques used, the representation of Digital Twins requires all of the infrastructures we mentioned above for their design, development, and operation. Specific examples of this appear in several of the application chapters within this book. This aspect of Digital Twins consumes significant capital and operating resources at all stages of the DT lifecycle and over the lifecycle of the physical entity it represents. A good way of illustrating the importance of representation is to consider a product such as an automobile. The context is a Digital Twin as an integral part of the automobile with the capability to add value over the vehicle’s lifetime. A typical automobile has approximately 30,000 parts. The Digital Twin of such an automobile will contain the data that captures the geometry and material properties of each part as well as the arrangement of the parts. It will also include information about the source of the parts, information about the history of how each part was treated in manufacturing and assembly, the environmental conditions, the stress levels experienced by each part in operation, and the variation in the part’s performance under different driving conditions. Compounding the richness of the information necessary is the time dimension of what happens to the parts and components over the lifetime of the car. This can be on average more than 10–15 years. A modern car contains 150–200 million lines of code; that too becomes a significant part of the Digital Twin but requires a very different set of techniques and methods more focused on logic and event-driven behaviors. The Digital Twin requires a representation of the operators of the vehicle and of the environment in which it is immersed, as well as the supporting facilities for maintaining the vehicle. Finally, it is important to aggregate data and information from fleets of vehicles to understand the performance of the vehicle design and of the individual parts and components.
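A small sketch may help make the per-part representation above concrete. The fields below are a tiny, hypothetical subset of what a real automotive digital twin would track for each of its roughly 30,000 parts; the names and values are invented.

```python
from dataclasses import dataclass, field

@dataclass
class PartRecord:
    """One part's slice of an automobile digital twin: static definition
    plus the history accumulated over the vehicle's 10-15 year lifetime."""
    part_number: str
    supplier: str
    material: str
    geometry_ref: str                 # pointer to CAD/geometry data
    assembly_station: str             # where and how it was installed
    stress_history_kN: list[float] = field(default_factory=list)
    service_events: list[str] = field(default_factory=list)

front_left_brake = PartRecord(
    part_number="BRK-FL-0042",
    supplier="ExampleBrakeCo",
    material="cast iron, grade GG20",
    geometry_ref="cad://brakes/fl/rev7",
    assembly_station="line-3/station-12",
)

# Operational data accumulates against the same record over the car's life.
front_left_brake.stress_history_kN.append(3.2)
front_left_brake.service_events.append("2027-04: pads replaced")
print(front_left_brake.part_number, len(front_left_brake.stress_history_kN))
```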
The challenge and focus here is the establishment of Data Networks that can efficiently address the scale and complexity of the information that is needed for a mature and comprehensive Digital Twin in this domain.
1.1.3 Control, Interfaces, and Displays
The timescales involved in the interaction between Digital Twins and their physical counterparts vary tremendously. The integration of Digital Twins with a physical or operating entity can be practiced at varying levels of impact and can involve actions on a range characterized by fast, intermediate, slow, and archival timescales. We can think of the scales and what they mean, for simplicity, as follows:

Timescale      Range                           Uses
Fast           0.00001 sec – 10's of sec       Real-time control
Intermediate   10's of sec – a few hours       Operations, tactical decisions
Slow           A few hours – a month           Management, business decisions, supply chain
Archival       A month – many years            Strategy, planning, and lifecycle support
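A minimal sketch of how a digital twin platform might route its functions according to these tiers follows; the tier boundaries mirror the table above, while the function names and deadlines are invented examples.

```python
# Route digital twin functions to processing tiers by their response deadline,
# mirroring the fast / intermediate / slow / archival split in the table above.
def tier_for(deadline_seconds: float) -> str:
    if deadline_seconds <= 30:              # real-time control
        return "fast"
    if deadline_seconds <= 3600 * 3:        # operations, tactical decisions
        return "intermediate"
    if deadline_seconds <= 86400 * 30:      # management, supply chain
        return "slow"
    return "archival"                       # strategy, planning, lifecycle

functions = {
    "collision_avoidance": 0.05,
    "route_reoptimization": 600,
    "fleet_maintenance_planning": 86400 * 7,
    "design_feedback_analysis": 86400 * 180,
}
for name, deadline in functions.items():
    print(f"{name}: {tier_for(deadline)}")
# collision_avoidance: fast, route_reoptimization: intermediate,
# fleet_maintenance_planning: slow, design_feedback_analysis: archival
```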
The most stressful cases are the direct inclusion of the Digital Twin in the control structure of the physical entity where the Digital Twin acts on a fast timescale and is responsible for: understanding the situation and state of the entity; understanding the surrounding environment; automatically issuing control commands; diagnosing and anticipating problems; and autonomously generating corrective actions. An example would be a Digital Twin for the control system of a Level 5 autonomous vehicle with full driving automation as defined by SAE [23]. The criticality of such a use drives the need for the supporting infrastructure to satisfy the requirements for high availability and reliability, redundancy of critical capabilities where needed, resilience to operate in less than ideal conditions, and a high bar for testing and certification to meet safety and performance metrics. One of the challenging aspects in this example is the extreme capability of the computing, data processing and storage, and communication systems needed to deliver predictable low-latency performance for key functions within the constraints on power, space, and weight of the vehicle’s design. Without going into much detail, the Digital Twin must also interact with the immediate entity operator, the supervisory system, the surrounding environment, and other sources of information important for the control system. For the example we have chosen, there has to be a tight integration with sensors on the vehicle, external information, as well as a display system that allows both the immediate and supervisory operators to express intent and to interact with the control system when needed. This means the capture of all interfaces and significant allocation of resources to the in-vehicle displays, as well as any off-board supervisory displays. While the vehicle may be autonomous, the need for connectivity and upstream functions such as fueling/charging, maintenance, software updates, performance feedback and subsequent modifications, and fleet analysis is undiminished and
considerable. This also includes the flow of data to upstream functions and the storage of that data. Many of these functions are performed on slower timescales but again consume considerable resources. It can be argued that much of the eventual value of the Digital Twin comes from the learnings and insights that these functions provide. Architecturally one can think of the Digital Twin as an aggregation of Digital Twins where the “Control” Digital Twin is based on reduced fast-running models compatible with the control timescales and is bonded to the much slower “Product” Digital Twin that is responsible for detailed granular analysis and the lessons learned that lead to product improvement, optimization, and greater efficiency in operations and performance. The important aspect of the scenario is that the Digital Twin, when mature, must be capable of predictable real-time operation and orchestration of disparate processes, and must be supported by a distributed infrastructure.
1.1.4 Execution and Management
It is perhaps easiest to start off by examining how an entity the Digital Twin represents would be operated and managed. In the case of an automobile the operation can range from vehicles that are controlled by a driver to fully autonomous cars. What is central here is that “Execution” refers to the governance of the processes and actions that directly affect the core operation of the entity and “Management” refers to the operation and governance of the necessary ancillary processes to support the entity indirectly. We make a distinction between Execution and Management, which are likely to be highly automated, and Decision Making, which is described in a subsequent section that deals with human roles in organizational structures. In the case of manual operation of the vehicle from point A to point B the action is in the driver’s hands. In modern cars it is supported by a body of applications that connect to the environment outside the vehicle. Examples of this include navigation systems for route selection, data feeds that provide information about traffic and alternate routing in congested areas, the identification of hazards, and convenience aids for location of services such as gas stations, parking, food, entertainment, and shopping. In addition, the execution functions performed by the operator and reflected in the Digital Twin must also interact with the physical systems that constitute a car. These include items such as starting the vehicle, steering, acceleration, braking, monitoring the safety systems, logging performance, engaging the vehicle’s emergency response in the case of trouble, and much more. The takeaway here is that even at the lowest level of functionality the Digital Twin represents a complex, heterogeneous system and requires all four infrastructures. The value the Digital Twin brings comes from collecting data, performing analysis, and closing the information flow loop to influence the manufacturer’s product lifecycle decisions for all cars, public administrators in improving the highway system, and the public in adjusting operator behaviors. For autonomous cars the scenario changes significantly. To get from point A to point B the vehicle must be told, either by the passengers through a user interface or by a remotely connected operator, the description of points A and B and governing
parameters for the journey. More complicated is the need to include the scheduling of the vehicle for multiple trips and to optimize the assignments for a fleet of vehicles. This places a premium on the ubiquity of the infrastructure for connectivity, the computing assets for vehicle assignment and trip optimization, and on the data network for what is now time-sensitive distributed information with a high degree of variability and significant uncertainty. As described later in this chapter, the techniques used require access to infrastructure resources that fully account for capabilities stressing the gamut of both time-sensitive and spatially distributed resources. With considerations for safety paramount, the infrastructure must also deliver high levels of reliability and availability to support the execution and management of the Digital Twin.
1.1.5 Models, Simulations, and Emulation
Models, simulations, and emulations (which we collectively refer to as the “models”) are at the heart of Digital Twins. They express the knowledge base for capturing the functional aspects of the Digital Twin, including the entity dynamics under time-varying conditions. It is useful to think about them in three ways: (1) the type of representations and methodologies used for the models; (2) where they fall into the hierarchy of a composite/aggregate Digital Twin; and (3) the aspect of the lifecycle that they support. These factors determine the processes and resources necessary to capture the results in the Digital Twin. The resources consist of the four infrastructures as well as the staffing, software, and any instrumented hardware implementations for the models. Typically, the models range from compute-intensive simulations based on first-principle thinking addressing detailed physical phenomena, to instrumented hardware for emulation, to data-driven empirical models derived from specialized testing, and increasingly to AI-based techniques such as Machine Learning. The primary models, which can be very compute intensive, are subsequently reduced or abstracted into fast-running models incorporated explicitly in the Digital Twin and are also an essential aspect of the processes for creating and maintaining the Digital Twin. In the case of AI-based models, the reduction is to an inference engine based on a limited number of weight settings derived from a machine learning cycle. The inference engines often run 4–6 orders of magnitude faster than the machine learning cycle itself. What must accompany the reduced models is a determination of the range of operating parameters where the reduced model can be trusted or can accurately predict the onset of specific conditions. To give a concrete example based on just the infrastructure for computing, the resources needed will span the full hierarchy of High Performance Computing Centers for simulations, Hyperscale Cloud Facilities for machine learning, Data Centers for analyzing historical model information and performance, Edge and Fog Computing facilities or local facilities for management, and embedded computing for execution and controls based on the Digital Twin. Similar hierarchies are necessary for the other three infrastructures. To give a concrete example we can start off with something like a crash model of an automobile where the simulations would
involve detailed physics-based simulations, supported by geometric models and materials specifications, the manufacturing processes used to build the automobile, multiple crash scenarios, test results from crash experiments, and collected data and reports from actual crashes. The Digital Twin becomes the knowledge base for continuous improvement of a car’s design. It also has enduring value in the lifecycle of the automobile – it represents a key element in improving automotive safety. At the same time, it needs to be supported by a distributed infrastructure that consists of all four components and a variety of modalities within each component.
1.1.6 Analytics for Decisions and Action – The Operation of Digital Twins
Under routine circumstances Digital Twins may function automatically, but it is nevertheless crucial to include human operators in the loop. Distinct from Controls (1.1.3) and Execution and Management (1.1.4), Digital Twins play a significant role in supporting human decisions. This is especially true for the decisions made in the most stressful cases, such as anomalies and exceptions not covered by the models built into the Digital Twin. The important factor is to explicitly include in the Digital Twin guard rails that specify when the Digital Twin is operating within prescribed boundaries and can function autonomously, and when it must revert to a safe condition and seek human intervention (a minimal sketch of such a guard-rail check follows the list below). In addition, it is important to identify all situations that require human judgement. As we mentioned previously, a Digital Twin can consist of an aggregation or composition of other Digital Twins, which requires either orchestration or synchronization of how the Digital Twins act collectively – this requires all four principal infrastructures to accomplish. While we sometimes get lost in worrying about the technology and the infrastructure that supports the Digital Twin, there is also a complex and organized human network consisting of operators, vendors, and end users that is essential for the operation of the Digital Twin. The staffing and organization of the human network is what sustains the Digital Twin over its lifecycle. The human network brings its own unique set of requirements. This includes:
• The organization and management of the human network
  – Within an operating enterprise
  – Within the ecosystem that supports the enterprise
  – With interfaces to other stakeholders (ranging from regulators to standards bodies, to educational and research institutions)
  – With the inclusion and role for the end-users
• Skilled expert, experienced, and trained staffing within the enterprise and the ecosystem, as well as end-users who have been familiarized with, qualified for, and comfortable with the roles they play.
• The resources for communication and collaboration across the network
• Access to data and information
• Digital and physical assets and “tools” to support the human network
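The guard-rail idea referenced above can be sketched as a simple operating-envelope check; the variable names, limits, and escalation behavior are illustrative assumptions only, and relate to the validated range of the reduced models discussed in Sect. 1.1.5.

```python
# Guard rails: the twin may act autonomously only while its inputs stay inside
# the envelope in which its reduced models have been validated; otherwise it
# reverts to a safe condition and escalates to a human operator.
VALIDATED_ENVELOPE = {
    "speed_kph": (0.0, 130.0),
    "ambient_temp_c": (-20.0, 45.0),
    "sensor_confidence": (0.85, 1.0),
}

def autonomous_action_allowed(state: dict) -> tuple[bool, list[str]]:
    violations = [
        f"{k}={state[k]} outside {lo}..{hi}"
        for k, (lo, hi) in VALIDATED_ENVELOPE.items()
        if not (lo <= state[k] <= hi)
    ]
    return (not violations, violations)

ok, why = autonomous_action_allowed(
    {"speed_kph": 142.0, "ambient_temp_c": 21.0, "sensor_confidence": 0.97}
)
if not ok:
    print("Revert to safe state; request human intervention:", why)
```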
The difficult aspect, probably much more difficult than the orchestration and synchronization of the “atomic” Digital Twins to act in concert as a composite Digital Twin, is the orchestration and efficient tasking of the human network. The very existence of the Digital Twin, and much of the value that it can bring, stems from the fact that it allows us to tame complicated and complex problems that were too difficult to solve in the past. Much of that has to be supported by the analytical and decision support tools that enable human decisions to be aligned across the network. It is also crucial that the human decisions should be evidence and fact based and able to deal with inherent complexity under time constraints. The issues bring together the worlds of Information Technology (IT) and Operational Technology (OT) and tools from statistical analysis, optimization, and increasingly Artificial Intelligence and Machine Learning. These are the technologies that can dramatically improve decision making and take on routine tasks, perform analysis when large volumes of data are needed, and efficiently reduce complex data to meaningful choices for the human network. All four infrastructures are necessary for support, as well as specific technologies for real-time or time-sensitive operating systems.
1.1.7 Testing, Verification, Validation, and Closing the Feedback Loop
Recognizing that Digital Twins represent complex systems composed of other Digital Twins that are operated in complex settings, under non-deterministic conditions, and are involved in safety-critical functions, testing is essential. Much of what can be said here applies to the basics of systems engineering [11] and so we will not describe it in detail. What is important to know is that in the development of a product the testing, verification, and validation can easily consume over half the budget. The testing must be performed at the component level (corresponding to the atomic Digital Twins), the unit level, and eventually at the systems level, that is, by the composite or aggregate Digital Twin.
1.1.8 Operational Maintenance and Modification
One of the most popular applications of Digital Twins is prognostics and predictive maintenance applied to manufacturing, products, and goods. The value comes from avoiding breakdowns and surprises that are potentially dangerous in a safety sense, otherwise disruptive, and often costly. In the same way that we worry about physical entities we have to worry about the operation, maintenance, and, when needed, modifications to the Digital Twin. There are several chapters in this Book that cover this aspect with the emphasis on the use of analytics and AI/ML techniques used in the context of Operational Technologies. In addition, there is also a need for workflow processes, supply chain technologies, and emerging AR/VR capabilities to support maintenance and modification functions, and the use of digitized product information. The four infrastructures, digitization, distribution, and multiple requirements such as high reliability and availability are needed to support these functions.
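As a loose illustration of the prognostics use case above, the sketch below flags a component for maintenance when a monitored wear indicator is projected to cross its limit before the next scheduled service. The threshold, wear rates, and linear-projection assumption are all invented for the example.

```python
# Predictive maintenance: project a wear indicator forward and schedule
# service before it crosses the limit, instead of reacting to a breakdown.
WEAR_LIMIT_MM = 2.0            # brake pad wear limit (hypothetical)
NEXT_SERVICE_IN_DAYS = 90

def maintenance_needed(current_wear_mm: float, wear_rate_mm_per_day: float) -> bool:
    projected = current_wear_mm + wear_rate_mm_per_day * NEXT_SERVICE_IN_DAYS
    return projected >= WEAR_LIMIT_MM

print(maintenance_needed(current_wear_mm=1.4, wear_rate_mm_per_day=0.004))
# False: projected wear of 1.76 mm stays under the 2.0 mm limit
print(maintenance_needed(current_wear_mm=1.4, wear_rate_mm_per_day=0.008))
# True: projected wear of 2.12 mm would exceed the limit before the next service
```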
1.2 Application Domains for Digital Twins
The starting point is to recognize that Digital Twins can be applied to a wide range of use cases. To simplify, these can be binned into five categories that emphasize different aspects of the Digital Twin:
1. Manufacturing
2. Products
3. Services
4. Processes
5. Abstract concepts and ideas
In each of these categories and combinations of categories, there is an almost infinite variation of use case applications where Digital Twins can play a significant role. While recognizing that almost every Vertical Industry Sector is likely to develop its own ecosystem employing the horizontal capabilities that are characterized by the four infrastructures we have discussed, it is important to focus on the requirements for the most critical aspects impacting safety, economics, and societal value. In order to both illustrate applications of Digital Twins and at the same time bring out the need for an end-to-end enabling infrastructure, we will limit our scope to the still broad domain of Cyber-Physical Systems [19] applications for Digital Twins, including Transportation, Industrial, Energy, Smart Cities, and Health Care, with reference to Fig. 4. One of the main objectives of the Digital Transformation is the application of human intelligence and knowledge to the management, optimization, and control of systems that touch the physical world, such as the ones shown in Fig. 4, to achieve improvements in efficiency, security, safety, cost, etc.
Fig. 4 Requirements at the boundary between Cyber Space and Physical Space often include time-sensitivity, determinism, physical and cyber-security, functional safety, and reliability
This intelligence, derived from human knowledge and experience as well as from physical models (e.g., Digital Twins), can be complemented by artificial intelligence models (e.g., neural networks). This intelligence can be hosted within resources distributed across what is called “cyber space,” supported by the vast computing, networking, storage and data organization and management resources that are progressively deployed through the evolution of Enterprises and the Internet, the Telco fixed and mobile infrastructure, Cloud and Edge Computing, as well as all the resources deployed in the operational environments mentioned above. Enabled by these end-to-end infrastructure resources, intelligence deployed in cyber-space can be continuously enriched through physical system sensing, can be hierarchically aggregated and shared, can collaborate with other distributed intelligence, and is applied to the control of the physical systems through some form of actuation. There is a critical boundary between cyber-space and the physical world, which is called the Cyber-Physical Boundary or Operational Edge. This is where the most meaningful convergence of IT technologies and OT technologies is taking place. Infrastructure and applications deployed at this boundary need to satisfy critical requirements because of the time-sensitive way digital control systems need to interact with the systems they control, and because systems such as those shown in Fig. 4 carry huge physical and financial risk implications in case of failure or unpredictable performance. Thus, computing, networking, and storage at the Operational Edge, together with the digital twins deployed there, often need to satisfy critical requirements such as determinism, time sensitivity, and safety in order to avoid the costly – and possibly life-threatening – implications of failures, unpredictable responses, missed alarms, and security breaches of the production process. In this Chapter, we will limit our attention to two specific and exemplar application areas: Industrial and Automotive. We are already witnessing applications of Digital Twins delivering value at all phases of the Industrial process, starting at the design and validation time, following with the logistics and supply chain phase, manufacturing planning, execution, and quality control, all the way through the product distribution and service phase. Models supporting these complex phases and operations will naturally need to be hosted at different information and decision locations, managed, and maintained by different organizations. Similarly, as discussed in a different Chapter in this book [14], we are seeing an exploding interest in the role of Digital Twins in the Automotive and Transportation domains, motivated by the movement towards electrification and autonomy. In this case, Digital Twins and simulation are adopted during the design process of these complex systems, as well as during the validation process. Intelligent Transportation will also rely on sophisticated traffic models to provide support to future autonomous vehicles from the roadside infrastructure as well as from multiple Cloud locations.
2 An Infrastructure Perspective Towards the Deployment of Digital Twins
In this section, we will describe the role of the distributed computing, networking, data management and storage infrastructure as the foundational underpinning for the complex interaction between Digital Twin intelligence and physical systems. Also, we will motivate the critical importance of the deployment of Digital Twins at the Operational Edge and articulate the challenges that need to be addressed in order to achieve this goal. Figure 5 illustrates at a high level the end-to-end infrastructure supporting distributed applications from a variety of Endpoints to Private or Public Clouds and emphasizes that real-time decisions are naturally made closer to the controlled Endpoints, while slower optimizations and learning naturally happen at higher levels of this infrastructure. Figure 6 illustrates how models residing in Embedded endpoints, at the Edge and in Clouds, interact with physical systems, through the creation of a hierarchy of information distribution and processing loops, characterized by different time scales. Again, learning and training happen deeper up the infrastructure, while inference models for fast decision making are active near the Endpoints. This architecture creates a powerful, adaptive, and collaborative cyber space. Also, thinking for a moment more “horizontally,” models related to different phases of a process, such as a production process, including design, supply chain,
Fig. 5 The end-to-end communications, computing, and storage infrastructure hosting distributed applications, from Endpoints to the Edge “Continuum” to Clouds. Real-time, critical decisions are supported near the Endpoints, while data aggregation, slower optimization, and learning are supported deeper up the infrastructure
Fig. 6 Digital Twin Lifecycles require a continuous process of data extraction, analysis, model building and refinement, inference model reduction, and deployment. This process is supported by the distributed, orchestrated communications, computing, and storage resources of the underlying infrastructure
production, distribution, and support, may motivate a chain of models, one feeding into the other. This virtuous cycle of data collection, analysis, and feedback can occur inside the physical system hosted by embedded computing, or it can take advantage of computing capabilities distributed in the developing “continuum” involving what is known as Fog or Edge Computing [3, 4, 9, 21] resources, all the way to computing resources located in Private or Public Clouds or where High Performance Computing (HPC) is available. Recently, much emphasis has been devoted to the deployment and impact of Digital Twins (as well as AI) supported by Cloud computing resources. This is already bringing important benefits, because Cloud intelligence can rely on vast aggregate data as well as scalable computing and storage resources. With the perspective of the broad distributed objectives of Fig. 6, it is fundamental to bring more attention to the complementary deployment of modern intelligence closer to the boundary with the physical systems. Intelligence available at the Operational Edge can respond quickly to observations and thus be more effective and enrich control, quality, and equipment reliability. This is also where data is extracted, possibly in growing amounts, and may need to be cleaned, normalized, reduced, assigned access rights, and security protected. This is where private data may need to be stored after local analysis and exposed to locally deployed intelligence. Some of the expected benefits gained by deploying Digital Twins and other models at the Edge include, among others:
• the achievement of more sophisticated and intelligent behavior prediction and control of the physical systems.
• more agile and precise provisioning of such systems.
• better information interpretation from sensors associated with the physical processes, for maintenance and quality control purposes.
• improved overall process efficiency through real-time modeling of process and operations flows.
• real-time synthetic data creation from models that may complement physical data for AI application purposes.
At the Operational Edge, however, computing resources are scarcer and usually more distributed, more energy and location sensitive, more exposed to physical and cyber security attacks, and not as easy to manage. Also, this is where computing and networking need to satisfy critical requirements such as determinism, time sensitivity, and safety, since the ultimate objective is in fact to intimately integrate digital twin models and other forms of cyber intelligence within the operational process, where digital machine control is built on the foundation of cyclical communication and computing patterns, and where reliability and determinism are required to deliver Functional Safety (FuSa).
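To make the nested loops of Fig. 6 concrete, the sketch below shows, in simplified Python, a fast edge-resident inference loop acting on every observation and a slower outer loop that aggregates data and receives refreshed model parameters from a cloud-side learning stage. The sensor, actuator, and retraining functions are hypothetical stand-ins; the structure of the two time scales, not the specific logic, is the point.

```python
import random

def read_sensor():
    """Stand-in for a real sensor read at the physical boundary."""
    return random.random()

def actuate(command):
    """Stand-in for a deterministic control interface (e.g., slow down a tool)."""
    pass

def cloud_retrain(batch):
    """Stand-in for cloud/HPC learning: returns a refreshed model parameter."""
    return min(0.95, sum(batch) / len(batch) + 0.2) if batch else 0.8

threshold = 0.8                       # inference parameter used by the fast loop
for _ in range(5):                    # slow loop: seconds/minutes, cloud side
    batch = []
    for _ in range(50):               # fast loop: milliseconds, edge side
        value = read_sensor()
        batch.append(value)
        if value > threshold:         # local inference leads to immediate action
            actuate("slow_down")
    threshold = cloud_retrain(batch)  # model refinement fed back downstream
```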
3 The End-to-End, Software-Defined Infrastructure for Digital Twins
In this section we will describe the envisioned, evolving distributed infrastructure of computing, networking, data management, and storage resources which will host Digital Twins and other distributed, interacting applications as described in the previous section. The distributed infrastructure will be built by leveraging current pools of computing, storage, and networking resources available in Clouds, Data Centers, Edges, and public and private networks, wired and wireless, but will need more technological progress and integration in order to provide a more interconnected, interoperable, orchestrated, and secure distributed system, capable of hosting distributed applications across multiple domains and seamlessly covering the IT and OT domains. This infrastructure, illustrated in Fig. 7, needs to offer a non-homogeneous, distributed, networked computing and storage fabric, hosting software applications that can potentially be deployed in a consistent way anywhere across this fabric. This requires the ubiquitous adoption of containerization and microservice models of software development and lifecycle management. This will enable the development of applications and models in the Cloud and the deployment of these applications and models across the infrastructure. Note that as we come closer to the physical boundary, stricter timing, determinism, security, and safety requirements apply, with the resources becoming scarcer and usually more distributed, more energy and location sensitive, and more exposed to physical and cyber security threats.
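As a minimal illustration of deploying the same containerized workload "anywhere across the fabric" while respecting the stricter requirements near the physical boundary, the sketch below matches an application's declared needs against the capabilities of the infrastructure tiers. The tier names, capability values, and matching rule are illustrative assumptions, not the behavior of any particular orchestration product.

```python
# Illustrative placement rule: pick the most central tier that still satisfies
# the application's latency and safety requirements.
TIERS = [  # ordered from most central to closest to the physical boundary
    {"name": "public_cloud", "max_latency_ms": 100.0, "safety_certified": False},
    {"name": "network_edge", "max_latency_ms": 20.0,  "safety_certified": False},
    {"name": "on_prem_edge", "max_latency_ms": 5.0,   "safety_certified": True},
    {"name": "embedded",     "max_latency_ms": 1.0,   "safety_certified": True},
]

def place(app):
    for tier in TIERS:
        if (tier["max_latency_ms"] <= app["latency_budget_ms"]
                and (not app["needs_safety"] or tier["safety_certified"])):
            return tier["name"]
    raise ValueError("no tier satisfies the requirements")

training_twin = {"latency_budget_ms": 500.0, "needs_safety": False}
control_twin = {"latency_budget_ms": 2.0, "needs_safety": True}

print(place(training_twin))   # -> public_cloud
print(place(control_twin))    # -> embedded
```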
Fig. 7 The vision for an end-to-end distributed infrastructure of data, computing, communications, and storage resources
Software and data will need to be securely and effectively distributed across the fabric, enabling previously unthinkable cooperation and collaboration across humans and machines. A far-reaching implication of what we are describing here is the progressive decoupling of applications and infrastructure. Applications are no longer bound to specialized hardware components but may be moved and hosted wherever sufficient resources are available and where performance guarantees can be met. We are moving towards what is now called a Software Defined paradigm [15, 22, 27], proliferating from networking to storage to the computing infrastructure, and from Clouds, Data Centers, or Enterprise Networks towards the operational domain. This movement is leading to what is referred to as the Software Defined Vehicle or Software Defined Manufacturing, discussed later in this section. This is a deeply ambitious but not too distant vision for this most advanced Distributed System of Systems [2]. Achieving this vision will require further cultural and technological progress; however, its full realization will contribute greatly to the fulfillment of the functional and market promises of the Digital Transformation. More concretely, let us clarify these concepts by illustrating the progress towards the infrastructure discussed here in the Automotive and in the Industrial domains, with reference to Figs. 8 and 9, respectively. Over the past 10 years, in both the Automotive and the Industrial domains, we have been witnessing the deployment of more powerful multicore computing in vehicles, robots, industrial floors, and industrial machines, supporting the fast evolution of more advanced sensors. Endpoints are progressively becoming more connected for monitoring and data extraction, with Clouds playing a growing role in the collection, analysis, and storage of the exploding amount of data collected from the endpoints.
Fig. 8 End-to-end distributed computing, communications, and storage in Automotive
Fig. 9 End-to-end distributed computing, communications, and storage in Industrial System Automation
A continuum of computing resources is now being deployed between endpoints and Clouds, providing the Edge layer of resources in Enterprises and at the periphery of the Telco networks (e.g., Mobile Edge behind cellular towers). The full realization of the Software Defined infrastructure described here will require a deeper convergence between IT and OT technologies and practices. This convergence is dramatically accelerating in the automotive vertical, while it is progressing more slowly, although in the same direction, in the industrial vertical. In the automotive vertical we are witnessing the adoption of recent advances developed in IT, such as virtualization, data distribution, analytics, digital twins [14], and software lifecycle management with remote over-the-air updates.
The progress towards a Software Defined Vehicle architecture is now irreversible. A key enabler of this evolution is the progressive adoption of virtualization from the Cloud all the way to the various manifestations of the Edge, whether in telco Edge deployments behind 5G towers or in cities and roadsides, supporting transportation- and autonomy-relevant services. The internal vehicle architecture, characterized until recently by a proliferation of small and poorly networked ECUs, is quickly consolidating its computing infrastructure around a few larger virtualized controllers, supporting applications of mixed criticality and networked via high-bandwidth, time-sensitive Ethernet (TSN) [10]. This dramatic evolution is illustrated in Fig. 10, where we show distributed services (Cloud, Edge, Vehicle) supported by virtualized computing nodes across the distributed infrastructure. In order to satisfy the critical requirements at the cyber-physical boundary, the virtualization technologies more appropriate for the IT domain need to be complemented or replaced by Mission Critical Virtualization, evolving from embedded, real-time technologies, which will be discussed in more detail in a later section. The implications of this new automotive infrastructure are enormous and will be discussed later as well. A slower but consistent infrastructure evolution is happening in the Industrial sector. Industrial Operations have traditionally been the domain of embedded, often mission critical electronic systems, built on microcontrollers, Programmable Logic Controllers, Numerical Controllers, or other types of deterministic controllers, running real-time Operating Systems, supported by ruggedized Windows Industrial PCs, and often unmanaged and minimally interconnected. This traditional infrastructure is not conducive to the insertion and dynamic evolution of software innovations inspired by recent IT advances in Data Analytics, AI, etc. It is static, and the software functionality is usually rigidly associated with a specific and fixed piece of hardware.
Fig. 10 Virtualization (IT and OT) is a key infrastructure enabler for Software Defined Automotive System Automation
Fig. 11 Virtualization (IT and OT) and Management are key infrastructure enablers for Software Defined Industrial System Automation
Each hardware-software system is usually managed by its operator, locally. For security reasons, the Operational Edge is, in general, strictly isolated from the IT world via Firewalls and Demilitarized Zones. The Industrial floor is currently undergoing a dramatic transition towards a new electronic architecture inheriting some of the technological progress experienced in the evolution of cloud computing, software defined networking, cloud storage and object databases, big data, software deployment and orchestration, as well as security. The new infrastructure needs to adopt other innovations in virtualization, communications and networking, storage, security, and management that specifically address the unique Operational Edge requirements. We will progressively move towards an Edge architecture like the one depicted in Fig. 11: a distributed, interconnected set of non-homogeneous virtualized computing and storage nodes, a "system of systems", a distributed computing fabric over which software applications are deployed, interworked, and orchestrated. These nodes host critical (i.e., time-sensitive, predictable, reliable, safe, etc.) processes close to the interface with physical endpoints. Applications need to satisfy less critical requirements as they operate further away from that interface. Data is distributed, processed, and stored at various locations across this fabric.
4 Digital Twins as Distributed Applications on the Software Defined Infrastructure – Use Cases
In this section we will describe the powerful interplay between a modern end-to-end Software Defined Infrastructure and Digital Twins as distributed applications over such a platform. We will again cover the automotive and industrial cases separately. We will also point to concrete use cases of significant impact.
4.1 Automotive
Figures 12 and 13 highlight the interplay between the key elements of the end-to-end infrastructure supporting future automobiles and a number of Digital Twins, hierarchically deployed within vehicles, at the Edge, and in the Cloud. Essentially, the distributed infrastructure hosts Digital Twins as software applications, requiring specific resources to satisfy the required quality of service (e.g., real-time and safety in the vehicle and at the Edge).
Fig. 12 Digital Twins and Software Defined Automotive
Fig. 13 Digital Twins and Software Defined Automotive (e.g., SOAFEE)
The infrastructure provides processing, storage, and data communication resources, as well as management capabilities that may enable a dynamic update of the twins and configuration of the data communications links across such applications. In the diagrams, we show a number of Digital Twins. The vehicle twins (DT1) may contribute to the real-time vehicle control, while at the Edge, Digital Twin DT2 may contribute to the support of specific Intelligent Transportation activities (e.g., Intersection Management). More aggregate and sophisticated Digital Twins may be developed and deployed in the Cloud (DT3 and DT4) and may not be required to run in real-time. Data is shared across the applications based on specific patterns. Data distribution may also need to satisfy stricter timing and safety requirements towards the Edge. The networking technologies supporting critical exchange may include Ethernet TSN and 5G. The capabilities of the new software defined infrastructure complement well the requirements of Digital Twins, namely:
• The need for continuous update and calibration of models is supported by the Continuous Integration/Continuous Delivery (CI/CD) model, which is now extending towards the edge and the vehicles.
• The flexible placement of models where required, together with configurable data management and routing, enables the construction and execution of hierarchical models.
• The execution of Digital Twins at the edge in sync with the physical systems, or even faster, enables improved and enriched control.
• Digital Twins can be effectively used in the validation of complex functionality (e.g., autonomy) by using Hardware- and Software-in-the-Loop validation approaches.
The capabilities of the new software defined infrastructure are fully leveraged in the architecture presented in Fig. 13, developed in the context of the SOAFEE initiative [24]. The architecture demonstrates the relevance of cloud-native software development, of mixed criticality virtualization, of mixed criticality aware orchestration, and of simulation of models at various levels of the distributed platform, in hardware and software.
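The hierarchical deployment of Fig. 12 can be summarized as a simple declarative structure that an orchestrator could consume. The sketch below is illustrative only; the field names, QoS classes, and latency figures are assumptions rather than values taken from the SOAFEE architecture.

```python
# Hypothetical descriptor for a hierarchy of automotive Digital Twins (cf. Fig. 12).
twins = {
    "DT1": {"site": "vehicle", "qos": "hard-real-time", "role": "vehicle control assistance"},
    "DT2": {"site": "edge",    "qos": "near-real-time", "role": "intersection management"},
    "DT3": {"site": "cloud",   "qos": "best-effort",    "role": "fleet-level aggregation"},
    "DT4": {"site": "cloud",   "qos": "best-effort",    "role": "model training / validation"},
}

# Data links between twins, each with its transport and timing requirement.
links = [
    {"from": "DT1", "to": "DT2", "transport": "5G/TSN", "max_latency_ms": 10},
    {"from": "DT2", "to": "DT3", "transport": "IP",     "max_latency_ms": 200},
    {"from": "DT3", "to": "DT4", "transport": "IP",     "max_latency_ms": 1000},
]

def update_path(source, edges=links):
    """Return the chain of twins reached from `source`, following the links."""
    chain, current = [source], source
    while True:
        nxt = next((link["to"] for link in edges if link["from"] == current), None)
        if nxt is None:
            return chain
        chain.append(nxt)
        current = nxt

print({name: t["site"] for name, t in twins.items()})
print(update_path("DT1"))   # -> ['DT1', 'DT2', 'DT3', 'DT4']
```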
4.2 Industrial
Figures 14 and 15 illustrate the envisioned future industrial operational infrastructure, highlighting its key technology elements and also illustrating how the Edge and Digital Twins interact for the specific but exemplary application of a robotic Industrial Automation floor. Essentially, the Edge infrastructure hosts Digital Twins as software applications requiring specific resources to satisfy the required quality of service. The infrastructure provides processing, storage, and communication resources, as well as management capabilities that may enable a dynamic update of the twins and configuration of the data communications links across such applications.
Fig. 14 Digital Twins and Software Defined Industrial
Fig. 15 Digital Twins and software defined industrial – applications/Digital Twins share data over the network based on multiple exchange patterns
In the diagrams, we show a number of robotic cells along a production line, including robot or tool controllers, which may host simple digital twins or other models to enhance control, maintenance, or quality (this machine/tool-level digital twin is labeled DT1 here). The networking within the cell may evolve towards Ethernet TSN.
At the cell level, a mission critical edge node hosts, among other applications, a cell-level digital twin (DT2), networked to DT1, but also to DT3, a line-level digital twin hosted on a higher-level edge computing node and communicating using Private 5G. Note that these twins may communicate and interwork with other twins at the same level or at a higher level, at the factory level or beyond. In the figures, we also show a number of different applications, hosted in sensors, mobile devices, or workstations. In fact, some of these applications or services may be supporting the 5G user plane or control plane for a private 5G network. In Fig. 15, we illustrate the Data Distribution fabric enabling the interworking of applications distributed on the infrastructure. While some of the data distribution sessions are assumed to follow more traditional patterns, such as client-server or publish-subscribe, we also introduce the vision for more sophisticated data exchange patterns, such as the one shown in green across the hierarchy of digital twins, assuming a capability to share data which may be manipulated by multiple twins in a coherent way, creating a regulated collaboration or conference of models, as discussed in a later section of this chapter. We can conclude from this discussion that a modern end-to-end infrastructure is essential to support Digital Twins which are simultaneously evolving in real-time and collaborating. In this example, we see system models (e.g., a robot twin) active as parts of more aggregate structure models (e.g., a robotic cell), themselves part of even more aggregate structure models (e.g., an assembly line). Next, we will describe an important use case for digital twins at the industrial operational edge in the Industrial Automation for Automotive vertical. This use case is inspired by the significant experience of a successful early deployment involving Audi, Intel, and Nebbiolo Technologies [12]. The goal of this early deployment was to demonstrate the potential for a dramatic improvement in Quality, based on the envisioned operational infrastructure described above, by hosting a hierarchy of Artificial Intelligence models deployed and maintained at the robotic cell and line levels of an Audi A3 body assembly via welding, and in the Cloud. The models were built in the cloud based on a large quantity of data obtained from successful and failed welds observed across multiple factories via sampling. Simplified models were deployed at the cell and line level and were constantly updated via continuous learning, locally and in the cloud. Most importantly, the insights derived from the models are used to impact the ongoing process through the generation of alarms, the modification of process control parameters, and the injection of commands into that process (e.g., stop executing a predicted bad weld). The local predictive quality models ensure quick reactions (say, at 50 Hz), making it possible, for example, to ensure that a failing welding gun is prevented from performing any more work. These advances have the potential to remove the requirement for quality control by sampling from the spot-welding process, thereby offering significant improvements in quality and efficiency and a reduction in both product and labor costs. The proposed solution is illustrated in Fig. 16.
Fig. 16 Audi-Intel-Nebbiolo inline predictive quality solution using AI at multiple levels of the distributed infrastructure
The architecture of this important use case maps directly to a number of similar applications for quality control, but also for Predictive Maintenance or for model-based enhanced control. The AI models in the Audi example would be replaced or supplemented with Digital Twin models, again structured in a hierarchy of progressive levels of aggregation and abstraction. Lighter, real-time models would be deployed near the endpoints, while more complex models would be used in the cloud or offline. Data would again flow across this hierarchy, allowing the closing of nested information loops, but would also flow horizontally, "east-west," to previous and subsequent nodes in the distributed architecture (e.g., to the previous and next robotic cell on a production line).
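A minimal sketch of the edge-resident part of such a predictive quality loop is given below. The feature values, the scoring function, and the controller interface are hypothetical stand-ins and do not reproduce the actual Audi/Intel/Nebbiolo implementation; the sketch only illustrates the pattern of fast local inference, immediate corrective commands, and data forwarded upstream for retraining.

```python
import random
import time

def read_weld_features():
    """Stand-in for the features of the weld currently being executed."""
    return {"current_kA": random.uniform(7.0, 11.0), "force_kN": random.uniform(2.0, 4.0)}

def score(features, model):
    """Lightweight inference model deployed at the cell-level edge node."""
    penalty = abs(features["current_kA"] - model["nominal_current_kA"])
    return max(0.0, 1.0 - model["sensitivity"] * penalty)   # 1.0 means a good weld

def quality_loop(controller, model, cycles=100, rate_hz=50):
    upstream_batch = []                      # forwarded later for cloud retraining
    for _ in range(cycles):
        features = read_weld_features()
        q = score(features, model)
        if q < model["stop_threshold"]:      # predicted bad weld
            controller.append(("alarm", features))
            controller.append(("inhibit_next_weld", None))
        upstream_batch.append((features, q))
        time.sleep(1.0 / rate_hz)            # roughly 50 Hz reaction loop
    return upstream_batch

controller_commands = []
cell_model = {"nominal_current_kA": 9.0, "sensitivity": 0.4, "stop_threshold": 0.5}
batch = quality_loop(controller_commands, cell_model)
print(len(batch), "samples collected,", len(controller_commands), "commands issued")
```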
5 The Software Defined Infrastructure at the Edge: Modern Edge Computing, Data Networking and Storage
The full potential and maturity of the distributed infrastructure described in this chapter require a complex but promising technology convergence, ultimately requiring the integration of key elements, some of which are still maturing, including deterministic networking and data distribution (such as IEEE TSN and 5G); secure, safe, and real-time capable virtualization, with software deployment centered on containers and microservices; time-sensitive data analytics and AI; digital twins and simulation; and distributed, QoS-aware system management and orchestration.
In the following sections, we will discuss achievements and work in progress in some of the principal components of the forthcoming distributed edge infrastructure, along the following dimensions:
(a) Computing: Mission critical computing at the edge, supporting mixed criticality applications, with strong security and safety.
(b) Data Networking: Time-sensitive, reliable networking technologies like TSN and Private 5G/6G.
(c) Data Distribution: Time-sensitive, reliable Data Distribution Middleware, including OPC UA over TSN and DDS over TSN.
(d) Data Management and Storage: Data is the key motivation and driver for the Digital Transformation and requires distributed management, distributed structuring, and storage in file systems, object and relational databases, and streaming databases.
(e) End-to-end Management and Orchestration: A common software deployment model, from Clouds to endpoints, enabling a modern continuous integration and continuous delivery (CI/CD) software lifecycle management.
Once this innovative infrastructure gets deployed, the emphasis should shift towards the application layer and, in particular, to the modeling and description technologies that may enable an expanding community to develop, validate, and deploy distributed applications in the operational domain with greater agility and lower cost. Existing programming languages, toolsets, and platforms have proven to be limited in dealing with the overwhelming technical challenges posed by the description of complex models of cyber-physical system behaviors that should be capable of executing in sync with, or even faster than, the physical system being modeled. Promising technologies in this fundamental area are on the horizon and will help scale the programmer community and reduce the cost of system modeling. Much more progress in this area is needed.
5.1 Edge Computing and Mission Critical Edge Computing
One of the key promises of Edge Computing is precisely to provide hosting of new applications, such as AI and digital twins, near physical systems, closely integrated with and enhancing the typical monitoring, control, and supervisory applications running today's operations. The hope is also for a new edge infrastructure management and application deployment model, evolving from the lessons learned in the IT world and, particularly, in Cloud Computing (e.g., remote management, microservices, orchestration, etc.). There is a parallel imperative to be mindful of several key "mission critical" characteristics of the manufacturing floor when introducing any such innovative Edge Computing functionality. Quality of Service (QoS), availability, determinism, security, and (ultimately) Functional Safety (FuSa) requirements must be satisfied to avoid the costly, and possibly life-threatening, implications of failures, unpredictable responses, missed alarms, and security breaches of the production process.
Mission Critical Edge Computing, based on a class of thin hypervisors such as those delivered by Lynx Software [18], Green Hills [6], and Wind River [28], or from recent Open Source efforts such as ACRN [26], Jailhouse [25], and others, fulfills these demands. These technologies deliver partitioning and monitoring of all system resources on multi-core computing nodes, with optionally immutable allocation of critical resources to more critical subjects. They provide efficient and controllable inter-partition communications with real-time support and strong security separation. They can also support safety-critical applications, while providing the option of dynamically allocating non-critical resources. Some of these hypervisors are portable across a broad range of hardware platforms including x86 and ARM. They can be adopted in industrial systems in applications ranging from embedded controllers to high-performance servers, turning them into Mission Critical Edge nodes. A technology of this kind allows systems architects to subdivide systems into smaller independent partitions and stacks, as illustrated in Fig. 17. This promotes the design of more consolidated, traceable, and efficient architectures, reducing the development overhead associated with integration and enabling security and safety validation. The most appropriate architecture for a mission critical hypervisor is based on the concept of a Separation Kernel [16, 17]. Lynx Software's LynxSecure is one of the most mature implementations of such an architecture. With reference to Fig. 17, the hypervisor supports multiple VMs (or "subjects") which host different types of guest operating systems, including Linux, Windows, traditional fully featured real-time operating systems (RTOSes), minimal scheduler-like RTOSes, safety-certifiable OSs and applications, and true bare-metal applications.
Fig. 17 Mission Critical Hypervisors, evolving from the real-time embedded application towards modern Edge Computing are a foundational element of the infrastructure, within Endpoints, at the Edge, but also at the Network Edge and in Clouds
Figure 17 illustrates some of the options available for internal and external connectivity to I/Os and devices. Efficient shared-memory-based peer-to-peer connections can be configured across VMs, while device sharing across VMs is enabled using bridged connections. These are implemented via a special Linux VM that hosts device drivers, bridging, management, and other functions. The hypervisor has no visibility into what is happening above the guest OSs. Specifically, while containerization can be supported inside a Linux OS, the management of containers is performed by complementary functionality, offered by partners or Cloud Service Providers, as discussed in a later section.
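The static partitioning described above can be summarized in a configuration sketch such as the one below. The structure is purely illustrative and does not reproduce the configuration format of LynxSecure or any other hypervisor; it only shows the kind of information a partition map carries: cores, memory, guest OS, criticality, device assignment, and inter-partition channels.

```python
# Illustrative static partition map for a mission critical edge node (cf. Fig. 17).
partitions = [
    {"name": "robot_control", "guest": "RTOS",  "cores": [0],    "ram_mb": 256,
     "criticality": "safety", "devices": ["fieldbus0"]},          # dedicated I/O
    {"name": "digital_twin",  "guest": "Linux", "cores": [1, 2], "ram_mb": 2048,
     "criticality": "high",   "devices": []},
    {"name": "it_services",   "guest": "Linux", "cores": [3],    "ram_mb": 1024,
     "criticality": "low",    "devices": ["eth0"]},               # bridged, shared
]

# Shared-memory channels configured between partitions (peer-to-peer).
channels = [("robot_control", "digital_twin"), ("digital_twin", "it_services")]

def validate(parts, total_cores=4):
    """Check that core allocations do not overlap and fit the processor."""
    used = [c for p in parts for c in p["cores"]]
    assert len(used) == len(set(used)) and max(used) < total_cores, "core allocation overlaps"
    return True

print(validate(partitions), len(channels), "inter-partition channels configured")
```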
5.2 Data Communications: Networking and Data Distribution
In this section, we will discuss both the evolution of networking at the Operational Edge and the all-important Data Distribution middleware technology, which simplifies the development and deployment of distributed and interworking applications, such as Digital Twins.
5.2.1 Networking: Towards TSN and 5G
Networking at the operational edge needs to satisfy the same type of critical requirements on Quality of Service (QoS), availability, determinism, security, and (ultimately) Functional Safety (FuSa) listed above with respect to edge computing. These requirements, diverging from the typical requirements applying to IT networking, have led to the proliferation of non-standard, often regionalized networking technologies deployed in the Operational domain. In the past 20 years, the adoption of Ethernet, WiFi, Bluetooth, and Cellular, although not yet in a fully standard fashion, has moved IT and OT networking closer. In the past years, we have seen this process converge with the ongoing effort towards the adoption of operational technologies which are also based on IT standards, namely Ethernet with IEEE TSN [10] for wired networking and 5G [1] for wireless networking. The two technologies are deeply complementary and are cross-pollinating in their developing standards. Figure 18 summarizes the functional objectives of these technologies. These developments will lead not only to a more open infrastructure, easier interoperability, and a more seamless transition between the operational and IT networking domains, but also across the layers of the operational domain. This is most important since the foreseen deployment of applications such as digital twins and other forms of models requires a flexible data distribution from endpoints to the computers hosting the various levels in a hierarchy of models. Also, timing requirements may now need to apply beyond the area of field buses close to the endpoints, the domain of machine control (Level 1 in the Industrial Automation pyramid), and extend to higher levels, such as Level 2 (Supervisory Monitoring and Control) and even Level 3.
Fig. 18 Summary of requirements and standardization efforts for IEEE TSN and 5G
Also, based on a solid, standard timing distribution architecture, such as that provided by TSN and 5G, it will be possible to create a distributed fabric of applications that work in synchrony with each other and also with the physical systems they model and control.
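The value of a shared time base can be illustrated with a toy schedulability check: given periodic, time-critical streams and a common network cycle, the sketch below verifies that all reserved transmission windows fit within the cycle, leaving the remainder to best-effort traffic. The numbers and the simple first-fit rule are illustrative assumptions; real TSN gate scheduling (IEEE 802.1Qbv) and 5G scheduling are considerably more involved.

```python
# Toy time-aware schedule for one egress port: fit reserved windows into a cycle.
CYCLE_US = 1000          # 1 ms network cycle, shared via synchronized clocks

streams = [              # periodic, time-critical flows (period equals the cycle here)
    {"name": "motion_control", "tx_window_us": 120},
    {"name": "dt_state_sync",  "tx_window_us": 250},
    {"name": "sensor_batch",   "tx_window_us": 300},
]

def build_schedule(flows, cycle_us=CYCLE_US):
    """First-fit placement of transmission windows; the remainder is best-effort."""
    offset, schedule = 0, []
    for f in flows:
        if offset + f["tx_window_us"] > cycle_us:
            raise ValueError(f"cycle overrun while placing {f['name']}")
        schedule.append((f["name"], offset, offset + f["tx_window_us"]))
        offset += f["tx_window_us"]
    schedule.append(("best_effort", offset, cycle_us))
    return schedule

for name, start, end in build_schedule(streams):
    print(f"{name:15s} {start:4d} to {end:4d} us")
```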
5.2.2 Data Distribution Middleware at the Edge
The next technological element required in the operational infrastructure at the Edge, in order to effectively enable, standardize, scale, and operate the deployment of distributed applications at the operational edge, is the Data Distribution middleware. With reference to multiple figures above, we envision a hierarchical distribution of applications synchronized and collaborating through the distribution of data over the underlying network. Data distribution middleware is designed to abstract away from the application designer the task of distributing data, of making sure the data are received in a timely and reliable way, and of assuring the data received are understandable across the interoperating applications (semantic interoperability). Again, Data Distribution at the operational edge needs to satisfy the same type of critical requirements on Quality of Service (QoS), availability, determinism, security, and (ultimately) Functional Safety (FuSa) listed above with respect to edge computing and networking. Communication middleware has grown over the years, mostly in the IT domain, to address a number of application areas. The choice of appropriate candidates satisfying the requirements of the operational edge is extremely limited, and much work remains to be done in this area. Two key candidates are at the forefront of this evolution: OPC UA [20] and DDS [5]. As indicated in Figs. 19 and 20, they differ in their core architectures, but both are based fundamentally on the Publish-Subscribe data exchange model. In order to address the time sensitivity requirement, they are both in the middle of a standardization effort to integrate over TSN or over 5G. The models of data distribution required across applications at the operational edge need to go well beyond a publish-subscribe model, even when it is made time-sensitive by layering it over TSN. For example, with the current data distribution standards (OPC UA and DDS), there is no way to share the "ownership" of a data object across multiple applications which would continue to modify the shared data in a collaborative and distributed mode, while enforcing a fair sequencing policy of updates that would maintain coherency. This model of data sharing would be needed if we want a collaborative activity across distributed systems, looking like "Machine Conferencing," where multiple systems coherently work on the same data, e.g., the position or status of an object multiple machines work on together. Promising advances in the direction of more functional Data Distribution and management at the edge are starting to surface. Figures 14 and 15 above illustrate a distribution of applications deployed as VMs or containers/microservices, including a hierarchy of interworking models, hosted on mission critical edge nodes across the operational edge of an industrial floor. In those figures we show a data distribution fabric characterized by a number of data exchange modes, including traditional client-server and publish-subscribe, but also alternative modes such as "broadcast," "exchange" (a transactional mode between two entities), and "coordinate" (the coherent, conferencing-like data sharing mentioned above). More innovation is needed in this area of technology.
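The "coordinate" pattern can be sketched as a small coordinator that serializes updates from several twins onto one shared object, applying them in a fair round-robin order so that every participant sees the same coherent sequence. This is a conceptual illustration only, with invented names, and is not an extension of the OPC UA or DDS standards.

```python
from collections import deque

class SharedObject:
    """One logical data object co-owned by several collaborating twins."""
    def __init__(self, state):
        self.state = dict(state)
        self.version = 0
        self.log = []                      # ordered history visible to all peers

class Coordinator:
    """Serializes proposed updates with a round-robin fairness policy."""
    def __init__(self, obj, participants):
        self.obj = obj
        self.queues = {p: deque() for p in participants}

    def propose(self, participant, update):
        self.queues[participant].append(update)

    def run_round(self):
        for participant, queue in self.queues.items():   # one update per peer per round
            if queue:
                update = queue.popleft()
                self.obj.state.update(update)
                self.obj.version += 1
                self.obj.log.append((self.obj.version, participant, update))

workpiece = SharedObject({"x_mm": 0.0, "status": "raw"})
coord = Coordinator(workpiece, ["robot_twin", "cell_twin", "line_twin"])
coord.propose("robot_twin", {"x_mm": 12.5})
coord.propose("cell_twin", {"status": "welded"})
coord.propose("robot_twin", {"x_mm": 13.0})
coord.run_round()
coord.run_round()
print(workpiece.state, workpiece.version)   # coherent, ordered view for all peers
```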
Fig. 19 Industrial Data Middleware today – OPC UA over TSN
Fig. 20 Industrial data middleware today – Data Distribution System (DDS) over TSN
5.3 Edge Storage
One of the key drivers for Edge Computing is the need for storage at the Edge, motivated by the growing volume of traffic generated by the endpoints, by the cost and time penalty involved in moving this data to the Cloud, by privacy concerns, and by the responsiveness advantages coming from storing the data, processing it, and deriving insights and action indications from it close to where the data originates. The requirements on memory bandwidth, latency, determinism, and reliability imposed by streaming analytics, AI, and Digital Twins at the Edge are producing deep innovations in storage technologies.
Fig. 21 Storage at the Edge is supported by embedded storage in endpoints, virtualized, low latency storage in the OT edge, and more typical hyperconverged storage in local Data Centers
In Fig. 21, we show the role of storage in the envisioned Industrial Operational Edge infrastructure. Storage will be deployed with Hyperconverged Infrastructure (HCI) in on-premises Data Centers, or even closer to the endpoints, where Industrial servers may be required. Storage featuring mission critical characteristics will be associated with networking and computing resources deeper in the Operational Edge, and in sensors and actuators on low-level controllers. These modern, fast, dense, reliable storage resources will mostly consist of Solid State Drives (SSDs), well suited to the environmental requirements at the Operational Edge. Memory technologies faster than today's SSDs, such as Intel's Optane [13], will help accelerate analytics, AI, and Digital Twin processing. Large DRAM-based designs will lead to further improvements in edge application performance and help support more deterministic workloads. The envisioned Edge infrastructure, with its distributed, non-homogeneous nature, will lend itself to the application of Software Defined Storage concepts [8], which decouple the software managing distributed non-homogeneous storage elements from the hardware hosting such storage. These innovative storage architectures are stimulating much progress in the areas of distributed streaming databases, SQL and Object databases, and File Systems. With reference to Fig. 22, two key innovations that will provide important performance and virtualization advances highly relevant for Edge Computing are NVMe-capable [7] SSD drives that also provide support for SR-IOV [16]. Together, these technologies enable a higher bandwidth to storage over PCIe compared with legacy SATA drives, provide multiple QoS levels for accessing storage, necessary to achieve predictable access times, and allow slicing a single drive into multiple virtual drives.
Fig. 22 Virtualized Edge storage with NVMe and SRIOV is a natural match for mission critical virtualized computing and networking
Figure 22 illustrates a Mission Critical Edge Computing node complemented by an SSD drive featuring NVMe and SR-IOV. This combination brings high-performance, reliable, high-capacity storage to the Operational Edge, supporting critical virtualized computing. Computing nodes of this kind are ideal hosts for Digital Twins.
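As a rough illustration of what SR-IOV-style slicing provides at the edge, the sketch below divides a drive's nominal bandwidth across virtual functions assigned to partitions of different criticality and checks that the critical slices keep their reservations; the bandwidth figures and the allocation rule are illustrative assumptions.

```python
# Illustrative bandwidth slicing of one NVMe drive across virtual functions (VFs).
DRIVE_BW_MBPS = 3500          # assumed nominal sequential bandwidth of the drive

virtual_functions = [
    {"vf": "digital_twin_vm", "class": "critical",    "reserved_mbps": 800},
    {"vf": "control_vm",      "class": "critical",    "reserved_mbps": 400},
    {"vf": "analytics_vm",    "class": "best_effort", "reserved_mbps": 0},
]

def check_reservations(vfs, total=DRIVE_BW_MBPS):
    """Verify critical reservations fit and report the best-effort remainder."""
    reserved = sum(v["reserved_mbps"] for v in vfs if v["class"] == "critical")
    if reserved > total:
        raise ValueError("critical reservations exceed drive bandwidth")
    return total - reserved

leftover = check_reservations(virtual_functions)
print(f"critical slices hold, {leftover} MB/s available for best-effort I/O")
```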
5.4 Edge Management and Orchestration
As discussed in previous sections, one of the key advances and benefits brought about by the evolving Software Defined infrastructure, particularly at the edge, is its manageability, inherited from and developed on the basis of management functionality deployed in IT and, specifically, in Cloud Computing. Management and orchestration functionality and interfaces at various points in the infrastructure offer remote and aggregate control of resources, from the hardware device level, to the networking and storage configuration, to the virtualization at the VM and Container level, to the performance and security configuration and monitoring levels. The high-level, integrated, end-to-end management architecture discussed here is illustrated in Fig. 21 above.
Cloud-native applications can be deployed, monitored, updated, interconnected, and orchestrated end to end, supporting a Continuous Integration/Continuous Delivery (CI/CD) model across the entire distributed infrastructure. These powerful capabilities will reduce operations costs and improve efficiency, performance, and security, enabling dynamic upgrades for product-by-product customization or subsystem repurposing. From the perspective of the deployment of Digital Twins at the Edge, this modern infrastructure will facilitate the dynamic deployment, interconnection, monitoring, and continuous refinement of the models, which is critical for the successful application of these technologies. The insights from a distribution of physical systems and their twins can be used in real-time to debug process issues, predict and prevent failures, improve product design, and so on. The management layer offered by the new Operational Infrastructure will be an essential enabler towards the implementation of well-instrumented, monitored, and supervised complex cyber-physical and industrial structures, dynamically evolving as natural organisms and providing visibility and intelligent control over critical systems such as energy production and distribution, manufacturing, and transportation systems.
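A simplified sketch of how such a CI/CD pipeline might roll a refined twin model out to a fleet of edge nodes is shown below: a canary deployment on one node, promotion to the rest of the fleet if the monitored error stays within budget, and rollback otherwise. The staging policy and all names are illustrative assumptions rather than a description of a specific orchestrator.

```python
def deploy(node, model_version):
    """Stand-in for pushing a container image / model artifact to an edge node."""
    node["model"] = model_version

def observed_error(node):
    """Stand-in for the monitoring feedback collected after an update."""
    return 0.02 if node["model"] == "v2" else 0.05

def canary_rollout(nodes, new_version, error_budget=0.03):
    canary, rest = nodes[0], nodes[1:]
    deploy(canary, new_version)
    if observed_error(canary) > error_budget:      # roll back on regression
        deploy(canary, "v1")
        return "rolled back"
    for node in rest:                               # promote to the remaining fleet
        deploy(node, new_version)
    return "rolled out"

edge_nodes = [{"name": f"cell-{i}", "model": "v1"} for i in range(4)]
print(canary_rollout(edge_nodes, "v2"))
print([n["model"] for n in edge_nodes])
```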
6 Summary and Conclusions
In this chapter, we have presented a contribution to the understanding of Digital Twins from an infrastructure perspective. We highlighted the importance of decoupling applications (in this case, Digital Twins viewed as applications) from the infrastructure hosting, interconnecting, executing, storing, and managing them. This decoupling is consistent with the powerful Software Defined paradigm. We have also motivated the importance of the component of the infrastructure positioned between physical endpoints and the more traditional cyber space (IT) infrastructure. This is the Edge, where IT and OT technologies deeply intertwine in their requirements and functionality. This is the component of the infrastructure that will enable Digital Twins to impact the physical systems and contribute to their efficient control. The successful evolution of the technologies empowering this Edge component of the infrastructure will significantly influence the future of the Digital Transformation.
References
1. ACIA 5G. (2021). Integration of 5G with time-sensitive networking for industrial communications [Online]. Available at: https://5g-acia.org/whitepapers/integration-of-5g-with-time-sensitive-networking-for-industrial-communications/
2. Bleakley, G., & Hause, M. (2016). Object management group: Systems of systems engineering and the internet of things [Online]. Available at: https://www.omg.org/news/meetings/tc/il-16/special-events/iiot-presentations/Hause_Bleakley.pdf. Accessed May 2022.
3. Bonomi, F., Milito, R., Zhu, J., & Addepalli, S. (2012). Fog computing and its role in the Internet of Things (pp. 13–15). Sigcomm.org.
4. Bonomi, F., Milito, R., Natarajan, P., & Zhu, J. (2014). Fog computing: A platform for internet of things and analytics. In Big data and internet of things: A roadmap for smart environments (pp. 169–186). Springer.
5. DDS Foundation. (2021). What is DDS [Online]. Available at: https://www.dds-foundation.org/what-is-dds-3/. Accessed May 2022.
6. Green Hills Software. (2022). Green Hills Software [Online]. Available at: https://www.ghs.com. Accessed May 2022.
7. Gupta, R. (2020). Western Digital blog: What is NVMe and why is it important? A technical guide [Online]. Available at: https://blog.westerndigital.com/nvme-important-data-driven-businesses/. Accessed May 2022.
8. IBM. (2022). Software-defined storage [Online]. Available at: https://www.ibm.com/storage/software-defined-storage. Accessed May 2022.
9. IBM. (n.d.). What is edge computing? [Online]. Available at: https://www.ibm.com/cloud/what-is-edge-computing. Accessed May 2022.
10. IEEE. (2020). IEEE standard for local and metropolitan area networks—Timing and synchronization for time-sensitive applications [Online]. Available at: https://standards.ieee.org/ieee/8802-1AS/10767/802.1AS/7121/
11. INCOSE Systems Engineering Handbook. (2015). A guide for system life cycle processes and activities (4th ed.). Wiley. ISBN-13: 978-1118999400.
12. Intel. (2020). Case study: Audi's automated factory moves closer to industry 4.0 [Online]. Available at: https://www.intel.com/content/dam/www/public/us/en/documents/case-studies/audis-automated-factory-closer-to-industry-case-study.pdf
13. Intel. (2022). Intel Optane technology: Revolutionizing memory and storage [Online]. Available at: https://www.intel.com/content/www/us/en/architecture-and-technology/intel-optane-technology.html. Accessed May 2022.
14. Kochar, N. (n.d.). Digital twins in automotive. In The Digital Twin. Springer (in press).
15. Kreutz, D., et al. (2014). Software-defined networking: A comprehensive survey [Online]. Available at: https://arxiv.org/pdf/1406.0440.pdf. Accessed May 2022.
16. Loveless, T. (2019). Lynx Software Technologies: What is SR-IOV and why is it important for embedded devices [Online]. Available at: https://www.lynx.com/embedded-systems-learning-center/what-is-sr-iov-and-why-is-it-important-for-embedded-devices. Accessed May 2022.
17. Lynx Software Technologies. (2020). Lynx Software Technologies: What is a separation kernel? [Online]. Available at: https://www.lynx.com/embedded-systems-learning-center/what-is-a-separation-kernel. Accessed May 2022.
18. Lynx Software Technologies. (2022). Lynx Software Technologies [Online]. Available at: https://www.lynx.com/. Accessed May 2022.
19. National Science Foundation. (2022). Cyber-physical systems: Enabling a smart and connected world [Online]. Available at: https://www.nsf.gov/news/special_reports/cyber-physical/. Accessed May 2022.
20. OPC Foundation. (n.d.). Unified architecture [Online]. Available at: https://opcfoundation.org/about/opc-technologies/opc-ua/
21. Open Fog Consortium. (2017). Industry IoT Consortium [Online]. Available at: https://www.iiconsortium.org/pdf/OpenFog_Reference_Architecture_2_09_17.pdf. Accessed May 2022.
22. Raza, M. (2018). What is SDI? How software defined infrastructure works [Online]. Available at: https://arxiv.org/pdf/1406.0440.pdf. Accessed May 2022.
23. SAE Mobilus. (2021). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles [Online]. Available at: https://www.sae.org/standards/content/j3016_202104/. Accessed 2022.
24. SOAFEE. (2022). SOAFEE [Online]. Available at: https://soafee.io. Accessed May 2022.
25. Texas Instruments. (2020). Jailhouse hypervisor [Online]. Available at: https://softwaredl.ti.com/processor-sdk-linux/esd/docs/06_03_00_106/linux/Foundational_Components/Virtualization/Jailhouse.html. Accessed 2022.
26. The Linux Foundation Projects. (2022). ACRN – A big little hypervisor for IoT development [Online]. Available at: https://projectacrn.org. Accessed 2022.
27. Wikipedia. (2022). Software-defined storage [Online]. Available at: https://arxiv.org/pdf/1406.0440.pdf. Accessed May 2022.
28. Windriver. (2022). Windriver [Online]. Available at: https://windriver.com. Accessed May 2022.
Flavio Bonomi is a visionary entrepreneur and an innovator with expertise spanning from low-level silicon to the broad domain of distributed and Cyber-Physical Systems. Over the past 10 years he has been passionately committed to the realization of the potential of the Digital Transformation, both as an Edge Computing pioneer and as a driver of transformative products, use cases, and business models in verticals such as Industrial Automation, Robotics, Automotive, Intelligent Transportation, and Energy. Over those years, Flavio has been focusing on the foundational challenges in the application of modern IT technologies to the optimization and control of physical systems in the verticals listed above, characterized by real-time, determinism, and safety requirements not typical of the IT domain. In particular, Flavio has developed expertise, Intellectual Property, and products in the areas of deterministic networking (IEEE TSN, 5G), mission critical computing and virtualization (Separation Kernel Hypervisors), and real-time, distributed Operating Systems. Over the past two years, Flavio Bonomi was closely associated with Lynx Software Technologies, a Silicon Valley company that provides safety, security, and real-time capable virtualization platforms for embedded and distributed systems. Between 2015 and 2020, Flavio Bonomi was the Founder and CEO at Nebbiolo Technologies, a startup delivering the first complete distributed Edge Computing software platform for the Industrial Automation market. This platform vision is now being broadly adopted as the future of Industry 4.0 for the transformation of the Industrial Floor infrastructure. Flavio spent 14 years at Cisco Systems and, as a Cisco Fellow and VP, led the vision and technology adoption for Cisco's forward-looking initiatives, at the inception of SDN, Cloud, Edge, and Industrial Internet. He identified key technologies needed for Cisco products and actively drove the acceptance and productization of many of these technologies. As a technology leader, Flavio drove Cisco to become a focal catalyst of the Industrial Internet/Digital Transformation movement and identified the need for the deployment of modern computing resources closer to the endpoints/edge by adopting recent innovations developed for the Cloud Computing domain.
Thus, "Fog Computing," now known as Edge Computing, was born. At Cisco, in collaboration with BMW, GM, and other OEMs, Flavio initiated a number of developments and investments in the areas of Connected Vehicle, Automotive Architecture, and Intelligent Transportation. Prior to Cisco, Flavio spent 4 years in Bay Area startups and 10 years at AT&T Bell Labs. During his career Flavio directly contributed to fundamental technology inflections in the fields of Networking, Computing, and Industrial IoT. Thriving at the boundary between applied research and advanced technology commercialization, and with broad hardware and software architecture and performance expertise, Flavio has published 100+ papers in technical journals and conference proceedings and multiple book chapters, and is a co-inventor on 70+ US and international patents. Flavio holds PhD and MS degrees in Electrical Engineering from Cornell University and a Laurea Summa Cum Laude in Electrical Engineering from the University of Pavia, Italy.
Dr. Adam Drobot is an experienced technologist and manager. His activities are strategic consulting, start-ups, and industry associations. He is the Chairman of the Board of OpenTechWorks, Inc. and serves on the boards of multiple companies and non-profit organizations. These include Avlino Inc., Stealth Software Technologies Inc., Advanced Green Computing Machines Ltd., Fames USA, and the University of Texas Department of Physics Advisory Council. In the past he was the Managing Director and CTO of 2M Companies, the President of Applied Technology Solutions, and the CTO of Telcordia Technologies (Bellcore). Prior to that, he managed the Advanced Technology Group at Science Applications International (SAIC/Leidos) and was the SAIC Senior Vice President for Science and Technology. Adam is a member of the FCC Technological Advisory Council, where he recently co-chaired the Working Group on Artificial Intelligence. In the past he was on the Board of the Telecommunications Industry Association (TIA), where he chaired the Technology Committee; the Association for Telecommunications Industry Solutions (ATIS); the US Department of Transportation Intelligent Transportation Systems Program Advisory Committee; and the University of Michigan Transportation Research Institute (UMTRI) External Advisory Board. He has served in multiple capacities within IEEE, including as Chair of the IEEE Employee Benefits and Compensation Committee, as a member of the IEEE Awards Board, and on the IEEE Industry Engagement Committee. In 2017 and 2018 he chaired the IEEE Internet of Things Initiative Activities Board, and he has been a General Co-Chair for the IEEE World Forum on the Internet of Things since 2018. He has published over 150 journal articles and holds 27 patents.
In his professional career he was responsible for the development of several major multi-disciplinary scientific modeling codes and also specialized in developing tools and techniques for the design, management, and operation of complex scientific facilities, discrete manufacturing systems, and large-scale industrial platforms, for both government and industry. His degrees include a BA in Engineering Physics from Cornell University and a PhD in Plasma Physics from the University of Texas at Austin.
Digital Twin for 5G Networks
Marius Corici and Thomas Magedanz
Abstract The current 5th Generation Mobile Networks (5G) standardization is aiming to significantly raise the applicability of communication networks for a wide variety of use cases spanning industrial networks, automotive, content acquisition, multimedia broadcasters, and eHealth (NGMN Alliance. 5G white paper. Next generation mobile networks, white paper 1, 2015). At the same time, this presumes that a smaller-size, dedicated 5G network must be integrated into an existing complex communication infrastructure, specific to the use case. This becomes particularly challenging with a 5G network, as it is a highly complex system by itself, with complex network management requirements in terms of fault, performance, and security. To address this issue, existing work suggests the use of Digital Twins (DT), or Asset Administration Shells (AAS) within the industrial domain, to model information about the 5G network and to use this data to plan, evaluate, and make decisions on how to optimize the behavior of the system. However, the DT-based modelling of 5G systems remains a relatively new topic. Within this chapter, we provide a comprehensive overview of how the existing 5G network management uses a sort of Digital Twin (DT) approach and how a full DT paradigm would optimize 5G networks. First, the 5G network will be described as a complex system, with its specific automation and optimization capabilities, while underlining its limitations. The additional opportunities for a more flexible DT of the 5G network, due to its softwarization, will be further analyzed, concentrating especially on the extension of the DT model towards even more complexity, as well as towards the new opportunities of dynamic resource scheduling, as representative elements of the 5G network management functionality. A short analysis of the impact of the network between the DT and the 5G system will be provided to understand the effect of network characteristics such as delay, capacity, and packet loss on the functioning of the system. To conclude, the presented considerations can act as robust enablers for future 6G networks, including multiple self-reconfiguration mechanisms. A short set of considerations is made on the governance of the multiple decision points and potential ways to implement such multi-decision models.
Keywords Software networks · Network management · System modelling
M. Corici (*) · T. Magedanz
Software-Based Networks Division of Fraunhofer FOKUS Institute/Technische Universität Berlin, Berlin, Germany
1 Perspective
A Digital Twin (DT) is a virtual model designed to accurately reflect a physical object [1]. A set of sensors is installed within the object to acquire the necessary data from which the behavior of the object can be determined. The data is then relayed to a processing system to create a digital copy of the physical object. This copy is used to assess the different qualities and the performance of the system and to determine possible improvements, which can be directly applied back to the original physical object. This functional definition can be immediately translated into the definition of network management. A Network Management System (NMS) [2] is a set of functionalities which is able to collect and analyze data and to push configuration changes in order to improve the responsiveness to faults, the performance of the system, and its security. Similarly to the DT, the NMS sits outside of the active telecommunication system, as it does not participate in the specific services provided. Also, both rely on the monitoring of the system through monitoring probes installed in the system and on its modification through remote commands. However, the NMS lacks functionality which the modeling perspective of the DT brings, and which will be developed in detail in this chapter. The NMS was designed as a conglomerate of heterogeneous functions, each optimizing, for a specific part of the system (such as radio, transport, or core network), a specific characteristic (such as reliability, load balancing, or privacy). Because of this focus on specific problems to be solved in the active system, the NMS became a very complex system by itself, sometimes conflicting in its decisions. When considering the current 5G deployments at use case premises [3], addressing the requirements of specific vertical domains and of specific local communication needs, such a system is too complex to install. Instead, we advocate for a holistic modeling of the complete 5G system within a single integrated digital twin model and for the execution of all decisions in harmony on the model itself. Considering this new perspective, in the following sections we will present how such a model can be defined with enough granularity for a 5G network system, and the potential of this virtual cloning. Practically, we are offering a DT way to redesign the NMS by concentrating on the model instead of the functionality. As we will prove in the next section, the DT way of handling network management provides a simpler and more flexible way of managing smaller-size networks, as well as the basis for new types of cross-functionality network optimization.
2 Background: 5G System and Its Management

In this section we concisely present the 5G system and its Network Management System. The considerations in this section are valid for any generation of communication networks starting with GSM, and they should also be suitable for 6G networks, given their expected evolutionary direction compared to 5G. To be able to present such a complex system, a large number of functional selections and simplifications are made. The architecture presented (Fig. 1) is based on a combination of the Open RAN (O-RAN) architecture [4] and a minimal 3GPP Packet Core [5, 6], as this provides enough functional diversity. Ultimately, what is presented here is already a model of the 5G system, good enough to be used as the basis for the first considerations on the Digital Twin.

The User Equipment (UE) is able to connect over the 5G radio to one or more Remote Radio Heads (RRH) controlled by a central Base Band Unit (BBU). Depending on the functional split with the RRH, the BBU includes at least the higher layers of the central user plane functionality (CU-UP), including the Radio Link Control (RLC) and the Packet Data Convergence Protocol (PDCP), and the equivalent central control plane functionality (CU-CP). These two sets of functionalities are extended by additional runtime management functionality: interference management, enabling the dynamic planning of the local spectrum; radio connection management, enabling the scheduling of the radio resources for the different subscribers; and mobility and QoS management, enabling the planning of the radio resources from a subscriber profile perspective. These components are considered part of the active system, as they are immediately needed to assure the expected connectivity to the devices.
Fig. 1 Simplified 5G system architecture
Besides the RAN-related functionality, the system also includes the packet core functionality, which enables the connectivity of the subscribers through the following components. The Access and Mobility Function (AMF) includes the functionality for communication with both the User Equipment (UE) and the Radio Access Network (RAN). It performs access and registration procedures and mobility management, and acts as a proxy for other procedures handled by other NFs. For the specific authentication and authorization algorithms, the AMF uses the functionality provided by the Authentication Server Function (AUSF). The Session Management Function (SMF) controls the User Plane Functions (UPF) to execute data path operations and to set QoS parameters. Next to the subscriber state maintained in each NF, a centralized subscription profile is maintained in the Unified Data Management (UDM) for synchronization reasons. Much of the state information is replicated and, in many procedures, the UDM must be queried and updated to maintain this synchronization. For the end-to-end data plane, the UE is connected to the RAN over the radio interface and then, from the RAN, through one or more UPFs to the Data Network (DN). The core network functionality can be located in different infrastructure elements, both close to the access network, at the edge, and in distributed central nodes.

The active system is managed by the management system. It is composed of many monitoring probes at different levels, from the infrastructure to the software network components, enabling a comprehensive understanding of the system's performance, including resources consumed, failures and reliability limitations, security breaches, as well as subscriber-related information. The different monitored metrics are then grouped into more complex ones, which provide a dynamic view on how the system performance is perceived by the subscribers. The accumulated data is used to generate system events, which are usually related either to a failure of a component to provide the expected service, to a security breach, or to the passing of a load-related threshold. These events are transmitted to the Network Management System (NMS), which is in charge of making decisions and enforcing them on the active system. The NMS provides the functionality for Fault, Configuration, Accounting, Performance, and Security management (FCAPS). These different management elements act independently of each other and, as they are not fully orthogonal, in many situations conflict mitigation is needed. Especially within performance management, multiple decisions may enter into conflict, such as:

• Performance of the different components – scaling the components up and down as needed to properly handle the subscriber requirements
• Load balancing – aims to split the subscribers' requests across as many components as possible
• Robustness – aims to group subscribers and to delay the execution of decisions, conserving the state of the system against modifications which may produce failures. Furthermore, robustness may imply the reservation of a large number of unused resources to be able to handle unexpected load peaks.
• Energy efficiency – aims to reduce the energy consumption of the system by grouping the serving of subscribers into as few components as possible

These may also enter into conflict with fault management decisions, where hot-standby components are started to be able to handle the load in case one of the active components fails. Similarly, security management provides extensive controls of the different subscribers' requests, resulting in a decrease of the performance of the system.
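To illustrate how two of these objectives pull in opposite directions, the following toy sketch scores the same subscriber placement under a load-balancing utility and an energy-efficiency utility. The function names and scoring formulas are our own illustrative assumptions, not taken from any NMS implementation.

```python
# Illustrative only: names and scoring formulas are ours, not from any NMS product.
def load_balancing_utility(placement: list[int]) -> float:
    """Prefers spreading subscribers evenly across components (higher is better)."""
    mean = sum(placement) / len(placement)
    variance = sum((x - mean) ** 2 for x in placement) / len(placement)
    return -variance                       # even spread -> variance 0 -> best score

def energy_utility(placement: list[int]) -> float:
    """Prefers consolidating subscribers on few components so others can sleep."""
    return -sum(1 for load in placement if load > 0)   # fewer active components -> better

consolidated = [12, 0, 0, 0]   # all subscribers served by one component
balanced     = [3, 3, 3, 3]    # subscribers spread over four components

for name, placement in [("consolidated", consolidated), ("balanced", balanced)]:
    print(name, load_balancing_utility(placement), energy_utility(placement))
# The placement that maximizes one utility minimizes the other, which is exactly the
# conflict described above between load balancing and energy efficiency.
```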
3 Model Complexity Considerations

As hinted in the previous description, the 5G system is a complex system. Its digital twin model and replica include a very large number of metrics reflecting the system. These metrics are further computed into even more sophisticated ones, in order to enable advanced system insight. Decisions are made only from specific perspectives on the system, without a clear definition – if such a definition is even possible – of their impact on other areas. While no generally accepted definition of complexity exists, several common characteristics appear often. Complexity can generally be defined as "the state or quality of being intricate or complicated" [7]. Both qualities, "intricate" and "complicated", are very hard to quantify on their own, as they both refer, rather circularly, to an increased number of details as well as to a very large number of functional features entangled with each other. Growing from this definition, a system may be considered "complex" when its behaviour is chaotic – extremely sensitive to noise (not to be confused with perfect Brownian chaos) – when it has intensive relationship properties (a large number of dependencies which are not apparent for the components in isolation), or when it is computationally intractable to model (too many parameters need to be modelled to account for the evolution of the system). In the last 20 years, a new complex systems theory has emerged, aiming to model and to further assess complex systems [8, 9]. Complex systems theory assumes the following main characteristics of a system:

Predictability – this has to do with the behaviour of the system across time. For a simple system it is easy to determine its evolution. For a complex system, at some moment in time a generally counterintuitive or acausal behaviour, full of surprises, may occur.

Connectedness – this characteristic covers the interactions between the different components within the system. It has several intrinsic qualities which should be assessed individually:
• Number of interfaces between the different elements of the system – this is the most intuitive parameter. The more interfaces, the more interactions between the network functions.
• Intensity of usage – the more the interfaces are used, the more connected the network functions are.
• Interface dependencies – the interfaces do not act in isolation. Instead, through the different internal procedures, the network functions interact with each other. Intuitively, a "star topology" of interactions is simple, as there is the option to synchronize the dependencies at the central point. Similarly, a system with multiple cycles (i.e., feedback loops) is complex, as it may generate unexpected behaviours, potentially exacerbating some behavioural directions.

Distributed control – a system with centralized control is simple, as the decisions and their implications are immediately traceable and the feedback loops have to reach this single synchronization decision point.
With decentralized control, it becomes highly complicated to trace the results of the decisions, especially because of unexpected feedback dependencies between the different decision elements. However, it is to be noted that a system with distributed control is less prone to failures (and, for the same reason, less amenable to very aggressive optimization).

De-composability – a system which can be reduced to multiple smaller systems is in general considered a simpler system. Complex processes, on the other hand, are irreducible. A complex system cannot be decomposed into isolated subsystems without suffering an irretrievable loss of the very information that makes it a system. Neglecting any part of the process or severing any of the connections linking its parts usually destroys essential aspects of the system's behaviour or structure.

Ultimately, complexity is about the modelling of the system. When a system is easy to model, and the model is simple, then we can infer that the system is simple. To be able to assess complexity, the system must first be properly modelled. However, given any system, a simple model with few elements and characteristics gives the impression that the real system is also simple. A simple model is more prone to unexpected behaviours (i.e., it is less predictable); thus, for complex systems it is better to have complex models as part of the digital twin. A simple system model does not mean that the system is simple. It only means that we chose not to look at some details. And this is where the side effects come from.
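As a rough illustration of the connectedness indicators above, the sketch below counts interfaces and detects feedback loops in a toy directed graph of network-function interactions. The graph is a deliberately simplified assumption and does not claim to reflect the complete 5G service-based architecture.

```python
# Toy function-interaction graph; edges mean "uses an interface of".
interactions = {
    "AMF": ["AUSF", "SMF", "UDM"],
    "SMF": ["UPF", "UDM"],
    "UDM": [],
    "AUSF": ["UDM"],
    "UPF": ["SMF"],                  # feedback: UPF reports usage back to SMF
}

num_interfaces = sum(len(v) for v in interactions.values())

def has_cycle(graph: dict[str, list[str]]) -> bool:
    """Detect feedback loops with a depth-first search over the directed graph."""
    visiting, done = set(), set()
    def dfs(node: str) -> bool:
        if node in visiting:
            return True              # back edge -> cycle
        if node in done:
            return False
        visiting.add(node)
        found = any(dfs(nxt) for nxt in graph.get(node, []))
        visiting.discard(node)
        done.add(node)
        return found
    return any(dfs(n) for n in graph)

print(num_interfaces, has_cycle(interactions))   # 7 interfaces, feedback loop SMF<->UPF
```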
The 5G system model is immediately bound to observable characteristics. A simple model includes fewer observable elements to which the system can react, while a more complex model encapsulates more observable elements. We have already established in the previous section that, due to the large number of monitored metrics and the large number of independent and not completely overlapping decisions, the 5G system qualifies as a complex one. This brings us to a highly counter-intuitive conclusion: the only way to simplify the model of a system is to make it more complex. To be able to predict more potential behaviours, especially those emerging from unexpected side effects, the model is bound to become more complex, so that it can encapsulate more of these potential additional behaviours and respond to these changes more robustly. To reduce the likelihood of side-effect-prone behaviour, the system model must become more complex.
To understand this, consider a simple example comparing the two strings "ashsdiefis" and "ababababab". Both have 10 characters, seemingly randomly selected. If we add a pattern-matching insight, we can describe the two strings as 1*"ashsdiefis" and 5*"ab", which immediately leads to the conclusion that the string "ababababab" is simpler. Thus, by adding a new insight we have simplified the initial model of the system. However, to generate this insight a new element was added to the system (the pattern-matching element), with its own interactions and predictability (and, as known from finite automata, pattern matching is not a trivial feat).
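The intuition of this example can be phrased as a description-length check. The sketch below uses a naive shortest-repeating-unit search as a stand-in for the "pattern-matching insight"; it is illustrative only and not part of the chapter's methodology.

```python
# Minimal illustration of the description-length intuition from the string example.
def shortest_repeat_description(s: str) -> str:
    """Describe s as k*unit using the shortest repeating unit found."""
    for size in range(1, len(s) + 1):
        unit = s[:size]
        if len(s) % size == 0 and unit * (len(s) // size) == s:
            return f"{len(s) // size}*{unit!r}"
    return f"1*{s!r}"

print(shortest_repeat_description("ababababab"))   # 5*'ab'  -> short description, "simple"
print(shortest_repeat_description("ashsdiefis"))   # 1*'ashsdiefis' -> no compression, "complex"
```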
Practically, by adding new functionality to the system model, such as additional insight and decision points, the system model looks simpler than its initial versions. This is also true to some extent, as the system is modelled to cover more of the potential behaviours, and in more detail, which in general covers more of the potential surprise states. However, the system becomes more complex, as a new functional element has been added.

Stemming from dynamic systems theory, a system model is composed of a behavioural element – a transfer function (or matrix, as in the case of dynamic systems) – and a momentary system state. Based on the external input, the transfer function modifies the momentary system state and generates a system output. Until now, we have discussed only the transfer function, as behaviour, interactions, and control. Now we will see how a system can be modelled from a state perspective. Defining for the state of a system are its equilibrium states and its state transitions. A system can have two types of equilibrium states: single points (a single state where the system stabilizes) or cycles (the system cycles across multiple states). No matter in which state a system is, with the transformation of the state, in time the system is directed towards one of the equilibrium states. The transformation of the state is also of two types. It can be a "normal" transformation, where the state is very close to and predictable from the previous state. However, considering complex systems, it can also be "chaotic": a small change in state may trigger a very different behaviour, which brings the system to a state very distant from the one previously considered (e.g., a satellite impacted by a solar flare may change its trajectory only minimally, resulting in potentially very large variations in its position over time). These "chaotic" changes push the system towards a new set of equilibrium state(s), where the system will remain until another state very prone to minimal changes is reached (Fig. 2).

The current 5G system is governed by multiple independent and automated policy-based management decisions [10]. As illustrated in Fig. 2, the only connection between the different policy decisions is the managed system itself. Because of this, the system is immediately open to side effects, in which a transformation that is "normal" from the perspective of one decision can push the system into an overall "chaotic" behaviour. To be able to reduce the number of surprise behaviours, these side-effect changes in the state of the system must be sensed, and the system must be able to react to them.
Fig. 2 An observable modelling perspective on the 5G system’s management
Due to the intrinsically finite nature of policy-based systems (only a certain set of triggers and conditions can be covered by a finite number of policies), the current 5G policy-based management is not able to observe and properly classify these side-effect changes. Because of this, minimal variations from the normal states may pass unobserved. Ultimately, maintaining an open network model, both in the observable events and in the coordinated response to these events, is the only possibility to mitigate complexity. A potential approach towards an open network model is given in the next sections. Before that, however, the 5G system Digital Twin must address another practical issue: the data exchanged between the managed system and the model has to pass through a non-perfect network.
4 Network Impact on the 5G Digital Twin

The different decisions taken in the system are usually centralized. They use multiple pieces of monitored information acquired from the managed system. These probes are collected centrally by the monitoring system and are used to create the momentary network status within the network model. Looking at this functional description, it is very easy to overlook that between the 5G system and its digital replica there is a network, or to assume that this network is perfect [12].
Fig. 3 Between the 5G system and its model there is a network
This network may be a local network, wireless or fixed, or a combination of both, with wireless termination and a fixed backhaul. Likewise, this network may be a network operator's internal backhaul based on optical fiber or satellite, or it may be a set of best-effort, Internet-like peering elements for cross-border or inter-continental networks. Although the characteristics presented in the following are rather generic, their impact is usually larger when more distant and more complex networks are used (Fig. 3). Depending on the specific technologies used at the physical level and on how these networks mitigate external factors, several specific characteristics must be handled when using a digital replica [11]:

• Variable latencies – with geographically distributed systems, the data communication between the monitoring probes, the decision points and, later, the enforcement points can vary significantly. For some of the system characteristics these latencies are already unacceptable, especially when related to subscriber service continuity. As such, simple and fast automated decisions are taken locally by the 5G system without waiting for the network management decision. This implies that the state of the 5G system is modified locally without a proper notification of the corresponding management decision entities, resulting in de-synchronization between the actual network and its digital twin model. In these situations, the decision model should compensate by assuming that the 5G system reacted as expected to the specific events and executed the local automated decisions. In most situations this is sufficient. However, in the case of multiple simultaneous component failures, it was observed that there are situations when the automated decisions worsen the state of the system, pushing it towards a massive failure.
• Variable network reliability levels – some data probes may be lost when transmitted across the network. This can mostly be fixed through data retransmissions, which makes them arrive later. Due to this, a decision taken using the system model may be significantly delayed. While waiting for the data needed to make the appropriate decision, the 5G system further evolves towards new states, even without local automatic decisions. Especially in critical situations, such as fault or security management, this delay in taking decisions could push the 5G system towards an overall failure.
• Variable bandwidth – the monitoring and actuation services can be properly dimensioned to fit the existing communication capabilities of the network. In critical situations, to be able to take more complex decisions, the network model may be extended with further granularity, such as probes taken at shorter time intervals. This could produce a congestion of the network, which in turn results in increased delay in receiving these probes. To mitigate this, a proper dimensioning of the monitoring and actuation should be performed, as a more granular model is no longer useful if its probes arrive with significant delay; a coarser but timely one is better.
• Heterogeneous levels of security in transport – to secure the communication, additional mechanisms such as authentication and authorization of the data exchange peers and encryption of the communication should be considered. Especially for the connection between the 5G system and its management, extensive features were standardized in this area, as this is critical for the overall system's success. From the perspective of the network's digital twin, these security mechanisms introduce additional delay and capacity overhead. As such, the security mechanisms aggravate the variable delay and bandwidth characteristics.
• Dynamic topologies – the network may change its topology during usage, resulting in routing modifications. Because of this, the specific data may arrive at the decision elements with significant jitter variations. Especially problematic is the situation of dynamically placed edge network nodes, where the network characteristics cannot be determined in advance, the network topology varying depending on the specific momentary location. The proper measurement and inference of the jitter levels, and the consideration of an overall synchronization delay at the given level, are necessary when the decisions are taken.
• Multi-administered domains, variable transport costs and high network heterogeneity – depending on the policies in place in the different parts of the network, the transmitted data may be delayed or even lost. In current networks this is less significant, as a basic layer agreement for exchanging data between domains should be available as a basis for communication.
For the transmission over the network to function as expected and to report the specific link characteristics, a new data exchange layer should be added to the network, as motivated in.
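One practical consequence of these characteristics is that the twin must decide whether a probe report is still fresh enough to act upon. The following sketch is a simplified illustration of such a staleness check; the threshold, field names, and fallback behaviour are our own assumptions rather than part of any 5G management specification.

```python
import time

MAX_STALENESS_S = 2.0     # assumed tolerance before a report is considered outdated

def apply_probe(model_state: dict, report: dict) -> None:
    """Update the twin only with reports that are fresh enough to act on."""
    age = time.time() - report["sent_at"]
    if age > MAX_STALENESS_S:
        # Too old: assume the 5G system already executed its local automated
        # reaction and only record the event, rather than triggering a new decision.
        model_state.setdefault("stale_events", []).append(report["metric"])
        return
    model_state[report["metric"]] = report["value"]

model = {}
apply_probe(model, {"metric": "upf_cpu", "value": 0.91, "sent_at": time.time() - 0.2})
apply_probe(model, {"metric": "gnb_loss", "value": 0.05, "sent_at": time.time() - 5.0})
print(model)   # the fresh value is applied; the delayed one is only logged as stale
```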
5 Self-Reconfiguration and Multi-Decision Governance

Most of the functionality of the 5G system is implemented as software components on top of generic hardware. This creates the premise for an extensive number of potential reconfiguration options within the system, which in turn is the basis for the multiple management decisions to which the system is subject.
The flexibility is achieved with a very limited number of basic operations, such as the deployment, migration, scaling up or down, or termination of the network functionality. On top of this, the data plane or control plane traffic can be dynamically redirected to other components or sometimes throttled down to fit the existing processing capabilities. Using these basic operations, more sophisticated adaptation procedures become possible, impacting multiple functions and their interactions. The result is that the currently almost isolated actions of the management-level decisions start to overlap. And, as expected, they do not push in the same direction. As briefly assessed in the 5G system description, the multiple decision functions have different target goals on the managed system, including energy efficiency, robustness, performance, load balancing, fault management, and security (Fig. 4).

One may ask why we do not consider putting all these decisions into an ultra-optimizer which provides the ultimate system configuration for the network model, to be replicated to the managed 5G system. We have all the premises for this: the 5G system model is an abstract model drawn from a real system, and there is only a finite number of network functions with a finite number of adaptation options. As expected, the answer is that the 5G system is highly complex, due to the huge number of interdependencies and the intensity of operations. So, the decisions will have to stay as separate autonomous entities. One of the main options for defining this functionality is an Agent-Based Model (ABM) as part of the digital twin model [13]. The ABM would be part of the model and would concentrate on the accurate definition of the interactions between the different decision elements, as they should be established in order to reach an appropriate system decision across the multiple decision actors.
Fig. 4 A governance model for the 5G system
Even this task is highly complicated, as most of the network management decisions interact, in different situations, with most of the others, ultimately forming a dense graph of interactions, almost a full mesh. Following this, a general utility function should be defined. This function should give the option to define what a "better" system means, so as to ultimately be able to quantify, across a multi-dimensional vector, the quality of the state in which a system is. This would allow running simulations on top of the 5G system model and determining the specific optimum which can be reached in the given situation, using methods such as Monte Carlo [14]. A simple governance schema would be to pass the decision for a specific situation through all the management decision entities in a pre-selected order, each adapting the decision with its specific utility metric in mind. However, it was observed that such a solution is usually not efficient, as the last of the decisions usually changes all the parameters based on its own utility metric, disregarding the previous optimizations. The same result would be obtained for a "winner takes all" schema, where the impact of the other decisions would be completely removed from the system decision. Another option would be to define a negotiation schema between the multiple decision algorithms, in which a compromise may be reached from the perspective of each single algorithm in order to obtain a better overall system. However, this would require that the different algorithms be able to provide different degrees of "involvement" in the decision and the capacity to compromise their own utility metric in order to increase the ones of the other elements. At the current moment, considering the limited knowledge on modelling the 5G system management decisions and their impact on each other, the best alternative would be to use the Digital Twin model for running multiple decision simulations and to choose from these the one which results in the best metric for the system. With this, a set of potential state-altering processes would be defined, through which the importance of the different management decision elements can be situationally defined.
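A minimal sketch of this "simulate candidate decisions on the twin and keep the best one" idea is shown below. The state fields, the candidate decisions, and the utility weights are illustrative assumptions only; a real 5G model would involve far more dimensions and decision actors.

```python
import copy, random

def utility(state: dict) -> float:
    """Scalarize the multi-dimensional quality of a system state (weights assumed)."""
    return (-2.0 * state["dropped_sessions"]
            - 1.0 * state["energy_kw"]
            + 0.5 * state["spare_capacity"])

def simulate(state: dict, decision: str, trials: int = 100) -> float:
    """Monte Carlo estimate of the utility after applying one candidate decision."""
    scores = []
    for _ in range(trials):
        s = copy.deepcopy(state)
        load_spike = random.uniform(0.0, 0.3)           # uncertain future load
        if decision == "scale_out":
            s["energy_kw"] += 1.2
            s["spare_capacity"] += 2.0 - load_spike
        elif decision == "consolidate":
            s["energy_kw"] -= 0.8
            s["dropped_sessions"] += max(0.0, load_spike - s["spare_capacity"] * 0.1)
        scores.append(utility(s))
    return sum(scores) / trials

twin_state = {"dropped_sessions": 0.0, "energy_kw": 5.0, "spare_capacity": 1.0}
candidates = ["scale_out", "consolidate"]
best = max(candidates, key=lambda d: simulate(twin_state, d))
print("selected decision:", best)
```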
6 Conclusions

In this chapter we have aimed to apply the Digital Twin methodology to a 5G system. A similar modelling activity has been part of the framework of carrier-grade networks since the beginning. However, this framework was never unified, although several past activities aimed at this. Until now, it was easier to take different perspectives on the 5G system and to use simplified models and decisions to address each specific perspective. However, with the transition of the 5G system to a software-dominated environment, these different perspectives tend to overlap in their models and especially in how their decisions impact the overall system.
This impossibility to reduce the system to a single perspective is one of the main characteristics of complex systems; thus, the specific advancements in complex systems theory, where holistic approaches to systems are now in development, apply here. Because of this, the only viable alternative is the definition of a comprehensive 5G system model in which the momentary state of the managed system can be described in detail and on which holistic management decisions can be taken. Furthermore, the Digital Twin for 5G is highly prone to data exchange limitations due to the network between the managed system and its management, a situation amplified by the deployment of networks at the edge, interconnected through different backhauls. Careful attention should be given in the future to reducing this impact through optimized, data-layer-oriented 6G/5G network systems. Considering that the comprehensive 5G system network model can be appropriately defined and that the data to keep its replica updated can be transmitted through the network, the next major advancement required is the definition of a governance schema in which the different management decisions are brought together to increase the overall performance of the system. For this, new governance schemas must be imported from the management of other complex systems and adapted specifically for the 5G system. Although a very large amount of research and further development is needed to provide the missing elements for such a technology, this road should be pursued, as current management systems fail to account for the specific complexity, and as the complex system trend is foreseen to continue with the development of 6G systems.
References

1. Boschert, S., & Rosen, R. (2016). Digital twin—The simulation aspect. In Mechatronic futures (pp. 59–74). Springer.
2. Leinwand, A., & Fang, K. (1993). Network management: A practical perspective. Addison-Wesley Longman Publishing.
3. Ziegler, V., Wild, T., Uusitalo, M., Flinck, H., Räisänen, V., & Hätönen, K. (2019). Stratification of 5G evolution and beyond 5G. In 2019 IEEE 2nd 5G World Forum (5GWF) (pp. 329–334). IEEE.
4. O-RAN Architecture Description 4.0 – March 2021 (O-RAN.WG1.O-RAN-Architecture-Description-v04.00). https://www.o-ran.org/specifications
5. 3GPP TS 23.501. System architecture for the 5G System (5GS). v17.0.0, 2021-03-30, www.3gpp.org
6. 3GPP TS 23.502. Procedures for the 5G System (5GS). v17.0.0, 2021-03-31, www.3gpp.org
7. Oxford Dictionary. https://www.lexico.com/definition/Complexity. Last visited on 2021-09-18.
8. Casti, J. https://www.britannica.com/science/complexity-scientific-theory/Surprise-generating-mechanisms. Last visited on 2021-09-18.
9. Casti, J. https://core.ac.uk/download/pdf/52944519.pdf. Last visited on 2021-09-18.
10. Ciavaglia, L., Ghamri-Doudane, S., Smirnov, M., Demestichas, P., Stavroulaki, V.-A., Bantouna, A., & Sayrac, B. (2012). Unifying management of future networks with trust. Bell Labs Technical Journal, 17(3), 193–212.
11. Wilson, G. (2021). http://blog.fogcreek.com/eight-fallacies-of-distributed-computing-tech-talk/, based on Peter Deutsch, http://nighthacks.com/jag/res/Fallacies.html. Last visited on 2021-09-18.
12. Corici, M., & Magedanz, T. (2021, November). "One layer to rule them all": Data Layer-oriented 6G networks. In Shaping future 6G networks: Needs, impacts and technologies (pp. 221–233). Wiley – IEEE.
13. Macal, C. M. (2016). Everything you need to know about agent-based modelling and simulation. Journal of Simulation, 10(2), 144–156.
14. Hammersley, J. (2013). Monte Carlo methods. Springer.

Marius Corici (Dr. Eng.) is a senior researcher at the Fraunhofer FOKUS Institute. He received his Diploma-Engineer degree from the Politehnica University of Bucharest on Nomadic Satellite-Based VoIP Infrastructure. He joined the Next Generation Network Infrastructures (NGNI) competence center of the Fraunhofer FOKUS Institute, later renamed the Software-based Networks Division. He received his Doctoral Degree in 2013 on Self-Adaptable IP Control in Carrier-Grade Mobile Operator Networks. Currently, he is the deputy head of the Software-based Networks business direction of Fraunhofer, leading the research and development teams for the Open5GCore (www.open5gcore.org) and NEMI (www.nemi-project.org) toolkits and acting as a research pathfinder for the evolution towards vertical sectors and the customization of massive core networks, as well as the design and specification of novel beyond-5G features and 6G architectures. Furthermore, Marius Corici is a researcher at the Technische Universität Berlin, preparing the lectures on 5G as part of the next generation networks department (Architekturen der Vermittlungsknoten – AV) (www.av.tu-berlin.de).
Thomas Magedanz (Prof. Dr. Habil.) has been a professor at the Technische Universität Berlin, Germany, leading the chair for next generation networks (www.av.tu-berlin.de) since 2004. In addition, since 2003 he has been Director of the Business Unit Software-based Networks (NGNI) at the Fraunhofer Institute for Open Communication Systems FOKUS (www.fokus.fraunhofer.de/go/ngni) in Berlin. For 33 years, Prof. Magedanz has been a globally recognized ICT expert, working in the convergence field of telecommunications, Internet and information technologies, understanding both the technology domains and the international market demands. His interest is in software-based networks for different verticals, with a strong focus on public and non-public campus networks. His current interest is in the evolution from 5G to 6G.
Augmented Reality Training in Manufacturing Sectors

Marius Preda and Traian Lavric
Abstract This chapter provides an overview of Augmented Reality (AR) as a training tool in manufacturing sectors, with a focus on manual assembly procedures. The proposed analysis investigates the two main components of an AR training system – content creation or expertise capture (i.e., authoring) and content consumption or information conveyance (i.e., training) – separately, as they are generally treated in the literature. Finally, we present a classification of information conveyance media in AR, a topic particularly relevant for AR usage in an industrial context.

Keywords Augmented reality for industry · Assembly knowledge transfer · In-situ authoring
1 Introduction

Augmented Reality (AR), an emerging technology of Industry 4.0, promises to address some of the main challenges faced by today's manufacturing industries. AR has demonstrated its benefits as a knowledge-sharing tool, among other applications, in a variety of domains including education, medicine, tourism, and entertainment [6, 26]. Studies show that AR training systems can be more efficient in terms of task completion time and error rates when compared to classical training procedures (i.e., paper instructions) [13, 39, 52, 121, 136]. Although AR has been investigated as a guidance tool for manufacturing processes for more than two decades [19], only recently have technological advancements enabled a resurgence of AR use. However, despite the exponential progress that AR has experienced in recent years, no significant breakthrough can be noted in the industrial environment, due to various challenges [81, 82]. AR systems have mostly been designed and evaluated in controlled environments, under laboratory settings, as recent surveys show [14, 29, 85].

M. Preda (*) · T. Lavric
Telecom SudParis, Institut Polytechnique de Paris, Institut Mines Telecom, Evry, France
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_17
Palmarini et al. [110] claimed that AR technology is not sufficiently mature to comply with strong industrial requirements such as robustness and reliability. Masood and Egger [82] identified and classified AR challenges into three main categories – technology, organization, and environment – and uncovered a gap between academic and industrial challenges. The authors suggested that field studies must be conducted to ensure the successful implementation of AR systems in industrial sectors.
2 Content Authoring in AR

One of the most relevant challenges of industrial AR (iAR) is content authoring. In this section, we propose an overview of AR content authoring tools and methodologies, to understand the status of research in the field and to identify the most significant concerns when it comes to creating content for iAR applications. This analysis focuses on research works dedicated to creating AR work instructions for industrial manufacturing, but it also investigates AR topics and concerns of general interest that are potentially relevant to iAR.
2.1 Authoring Tools Classification

Generally, AR authoring tools are classified from low-level to high-level according to their programming and content design characteristics. In this sense, development tools for building AR applications can be broadly organized into two different approaches: AR authoring for programmers and AR authoring for non-programmers. Mota et al. [96] categorized 19 commercial and academic AR content design tools by authoring paradigm (i.e., stand-alone and AR-plugin) and deployment strategy (i.e., platform-specific and platform-independent). We observe that most of the analyzed tools are stand-alone and platform-independent and that the plug-in approach potentially offers more features than the stand-alone approach. The proposed taxonomy, however, does not investigate the scope of these tools, their features, or their characteristics. Furthermore, no assessment or functional evaluation between these authoring tools is provided, making it impossible to determine which of the dataflow models is best adapted to certain authoring scenarios and requirements. Nebeling and Speicher [102] tackled this problem and classified AR authoring tools by their level of fidelity in AR/VR and by the required skills and resources involved. Five classes were identified, including basic mobile screens and interactions, basic AR/VR scenes and interactions, AR/VR-focused interactions, 3D content, and 3D games and applications. Further, the authors of the study identified three main problems of the existing AR authoring tools: the massive tool landscape, the fact that most AR/VR projects require tools from multiple classes, and the significant gaps that exist within and between classes of tools. They concluded that AR
authoring tools still must address numerous challenges before people without programming knowledge are able to author complex AR experiences. More recently, Bhattacharya and Winer [7] used a different taxonomy for classifying existing AR authoring tools, based on their intended design, as follows: linking systems [46, 54], AR previewers [70, 154], virtual registration [64, 97], hybrid methods [107], context-aware [156], knowledge-based [62] and, finally, third-party packages [95, 111]. Further, a classification of AR authoring tool interfaces was provided, based on how the virtual content is authored. This classification included desktop GUI based [137], mobile AR [79, 152], HMD with 2D/3D camera sensor [145], hybrid [51, 153] and, finally, demonstration-based, an authoring approach proposed by the authors of the study [7]. We observe that the literature tends to categorize AR authoring tools using multiple taxonomies. However, a set of well-established requirements and guidelines for elaborating use-case-specific AR authoring tools dedicated to industrial sectors is not provided, potentially because of the lack of field experiments and real-world evaluation. This could explain why, generally, AR solutions do not target specific use cases or industrial sectors, but rather seem to be designed with a general-purpose objective in mind. de Souza Cardoso et al. [25] support this hypothesis, identifying that a considerable number of AR applications (48%) are not focused on a specific industry but developed for a general segment. The same study found that 97% of AR applications are tested to ensure their viability and to compare AR pros and cons against traditional methods, while only 5% of them are implemented in production.
2.2 Methods and Systems

In this section, we provide a classification of the AR authoring techniques proposed in the literature. We focus on those targeting industrial usage and dedicated to creating AR work instructions.

2.2.1 Fully Automatic Authoring

The best AR authoring system is one that can generate AR instructions automatically, without human intervention and, ideally, without prerequisites and processing stages. Such a system should be able to connect to and extract the necessary information from various data content providers (databases, live data feeds, etc.) and generate the corresponding AR work instructions automatically. Palmarini et al. [110] uncovered that 64% of AR content is manually generated, suggesting that a rather high percentage of AR content is generated in a fully or semi-automatic manner.
One such research work (i.e., AREDA) was conducted by Bhattacharya and Winer [7], who proposed an AR authoring system able to automatically generate AR work instructions from a recording of an expert performing the assembly, called demonstration-based authoring. The proposed approach is composed of a demonstration stage and a refinement stage. The demonstration phase requires several procedures, including the capture setup, background calibration, area calibration, skin calibration and, finally, the processing of the recorded demonstration – steps which, the authors claim, are easy to perform and do not require computer vision (CV) or 3D graphics expertise. The second phase of the authoring, the refinement one, proposes a set of "layers" that allow the author to enhance the AR work instructions with 2D text, images, or video elements. The technical limitations uncovered by the authors, the complex system setup, and the lack of a user evaluation question the usability of the system, particularly in a real-world industrial context. This claim is supported as well by the detection results of one of the test cases (i.e., the third test case – a laptop), which showed that even slightly complex assembly setups and small components represent a significant challenge for the proposed authoring system. We note, however, the potential advantages of such an authoring approach, as the author of the AR instructions (i.e., an assembly expert) would only be required to perform a calibration setup and the assembly procedure, while the system would capture and generate the corresponding AR work instructions automatically. The most relevant concerns of the system are the evaluation setup in which the expert capture is performed (i.e., a controlled environment), the system prerequisites, and the authoring workflow, which comprises multiple stages.

2.2.2 Video Retargeting

Another automatic technique for generating AR work instructions proposed in the literature is video retargeting, which generates interactive 3D tutorials from existing 2D video tutorials [28, 94]. This authoring technique relies on computer vision to detect and track objects in the video [93] and consequently generate instructional 3D animations [28]. However, it seems that video tutorials cannot be handled gracefully by CV algorithms, restricting the extracted instructions to the capabilities of the object detector that analyzes the video. We observe that most retargeting approaches are only demonstrated on assemblies which consist of large detectable parts, a potential concern for industrial assembly contexts. The video retargeting method proposed by Yamaguchi et al. [151] claimed to address the detection of small parts; however, the limitations of the system were significant: (1) larger parts are still needed for tracking, (2) the system can only process videos where objects are continuously manipulated in a series of video frames without interruptions, (3) the corresponding CAD model together with a detailed assembly video must be provided, while (4) lighting, background contrast and camera calibration play an important role in the detection performance. It seems therefore very unlikely that such an authoring approach could provide acceptable results in uncontrolled assembly environments.
2.2.3 Product Disassembly

Another automatic authoring approach for AR work instructions was proposed by Chang et al. [21] as AR-guided product disassembly (i.e., ARDIS) for maintenance and remanufacturing. The proposed authoring system consists of two major modules: disassembly sequence planning and automatic content generation. A third module, the AR interface, conveys the AR work instructions for guidance purposes. The product disassembly sequence planning requires the definition of an assembly sequence table (AST), in which the contact and translation functions between all parts of the product are defined. A disassembly sequence table (DST) is obtained from the AST and other additional constraints, which finally generates the optimal disassembly sequence of the product. The second module, the automatic content generation, uses visual data (e.g., text, 2D graphics and images, and CAD models) to generate the corresponding AR disassembly instructions. The description of the overall authoring is not complete, and the procedure does not seem to be fully automatic. We note that the definition of the AST requires human input for specifying relations and constraints between the product's components. This process was not presented in the paper, making it impossible to evaluate from a time and difficulty viewpoint. Further, it seems that the text and the 2D and 3D graphics used for visually describing the generated AR disassembly instructions must be provided to the authoring module beforehand. The correspondence between these media assets and the generated assembly steps also seems to be part of a manual human intervention. Considering that an exhaustive description of the overall authoring process was not provided, and that the effectiveness of the generation process was not evaluated, we argue that it is virtually impossible to assess the utility of the proposed authoring system in an industrial setup. The authoring proposal was exemplified in a preliminary evaluation on a Nespresso coffee machine; however, no experimental study was conducted. This simplistic evaluation questions the usability and effectiveness of the proposed system.

2.2.4 Printed Documents to AR Instructions

Mohr et al. [93] proposed a system that automatically transfers printed technical documentation, such as handbooks, to 3D AR instructions. The system identifies the most frequent forms of instructions found in printed documentation, such as image sequences, explosion diagrams, textual annotations and arrows indicating motion. The analysis of the printed documentation requires the documentation itself and a CAD model or a 3D scan of the corresponding real-world object. The authoring is semi-automatic, as some form of user input is required. The output is an interactive AR application, presenting the information from the printed documentation in 3D, superimposed on the real object. The proposed approach was designed for basic manipulation operations (e.g., place, rotate, slide), while one of the system prerequisites is a 2D assembly diagram of the product model, where the product parts are relatively big and visible.
Other similar research works retargeting technical documentation to AR step-by-step instructions were conducted in the past in Zauner et al. [154], Gupta et al. [50], W. Li et al. [74] and Shao et al. [127], to name a few. We note that the authoring methods proposed in these works are not adapted to use cases where 2D diagrams (e.g., annotated, action, explosion, structural) and technical drawings do not exist, to complex assembly operations, or to small or deformable parts. We observe that the utility of existing digital data or paper instructions as raw input for these approaches is questioned by various concerns. Firstly, as work instructions are not described in a formalized manner across industrial sectors, or even across factories, expert intervention would be required for adapting the AR authoring system from one use case to another. Secondly, the quality of work instructions is generally sparse. The main categories used to classify quality issues include intrinsic and representational problems, as well as unmatched, questionable, and inaccessible information [55]. D. Li et al. [73] claimed that work instructions are often insufficient and unused. These findings support our case study conclusions, questioning the reliability of existing work instructions, especially as input for systems that have layers of data interpretation. In a recent study on the same topic, Gattullo et al. [42] identified that the implementation of AR documentation in industry is still challenging because specific standards and guidelines are missing. They proposed a methodology for converting existing traditional documentation to AR-ready documentation, as a prerequisite for creating the corresponding AR work instructions. We note that redesigning and recreating existing paper instructions prior to authoring the corresponding AR work instructions contradicts industrial expectations.

2.2.5 Machine Learning-Based

Recently, Mourtzis et al. [98] presented a framework for the automatic generation of AR maintenance instructions based on convolutional neural networks (CNN), a framework which consists of three main modules: spatial recognition, CNN, and AR instruction generation. A preliminary experiment conducted in a controlled environment showed that the generated AR instructions, presented in a Hololens 1 device, allowed the 10 participants to accomplish a disassembly task in 60 min, compared to 25 min required by an experienced technician. Other experimental results are not provided in the study, nor are details regarding the visual representation of the generated AR work instructions, except that they are CAD-based. Limitations of the proposed system relate to the computer vision module, which does not provide results in real time; as the volume of the input data increases, the computational time increases exponentially as well. Secondly, the CNN was only able to reach an accuracy of approximately 80% after 10 epochs, with a fluctuation of ±10%, a rather poor performance even for laboratory evaluation settings.
Finally, the prerequisites of the system, including the large data sets of thousands of images required for training the CNN and the CAD models required for generating the AR instructions, together with the environmental constraints, also question the usability of the proposed system in an industrial setup.

2.2.6 Motion-Capture

Another method for capturing assembly expertise, briefly discussed in the literature, is represented by motion capture (MOCAP) systems [112, 122], a technique that allows recording and digitally presenting human body movements. Typical motion capture systems are expensive and difficult to set up and operate in an industrial context, while the generated motion cannot be edited or adapted to different task constraints [131]. The key problems identified in current industrial practice regarding digital human models (DHM), in addition to the extremely limited research work conducted around this technique, strongly suggest that such an approach has not yet been sufficiently explored to be effectively employed in the AR content authoring process under challenging conditions.

2.2.7 Context-Aware and Adaptive Authoring

Finally, one of the most promising approaches for creating AR work instructions is represented by context-aware and adaptive AR authoring. One such system (i.e., ARAUM), designed to improve maintenance efficiency through adaptive operational support using a context-aware AR technique, was proposed in Erkoyuncu et al. [30]. ARAUM comprises two platforms, one that enables the automated authoring process and another that allows maintenance experts to interact with the information frameworks used for generating the AR contents. Thus, the authoring steps are automatically generated by the system, while maintenance experts focus on information representation, defining its format and sequence. A preliminary study conducted with 6 participants showed that ARAUM was 4 times faster than Vuforia (cf. [143]) during authoring, and 2 times faster than paper-based instructions during maintenance. We note that the study was conducted in laboratory settings with a limited number of participants; the maintenance experiment seems not to have used the generated AR instructions obtained from the authoring experiment, potentially questioning the reliability of the system in a real-world assembly environment. Like other approaches, the authoring procedure is multistep, the usage of a desktop application is required, and 3D contents should be available. A similar work dedicated to assisting maintenance workers with context-aware AR was proposed as well in Zhu et al. [156, 157]. The system, ACARS, can analyze the context and provide relevant information to the user in real time by conveying textual descriptions, indication arrows, product models and animations. It consists of five modules – i.e., Context Management, AR-based Visualization, Database, Offline Authoring, and On-site Authoring – and requires the use of a desktop PC, programming skills, and domain experts.
We note that, in addition to the functional complexity of the proposed authoring methodology, the study did not report on the performance of the authoring process (e.g., time, error rate, mental workload). The system was evaluated in a preliminary field experiment with 8 participants and reported better qualitative scores when compared to paper-based and traditional AR-assisted approaches. The subjective feedback uncovered a set of concerns including tracking and registration accuracy, rendering issues, and non-intuitive interaction. Geng et al. [44] proposed a systematic design method for authoring adaptive and customizable AR work instructions for complex industrial operations, targeted at non-programmers. The proposed authoring workflow consists of 3 separate stages (i.e., planning, enriching and post-processing) and requires 2 experts (i.e., an engineer and an assembly worker). The authoring starts with the planning stage, which is conducted by an engineer using a desktop application, is then continued by the worker in the field using AR equipment, and is finally completed by the engineer with the post-processing stage using a desktop application. The reported authoring time for an assembly scenario consisting of 19 operations was approximately 120 min. Even though the acceptability score (S = 73) and the perceived mental workload (S = 51) reported by the participants were acceptable, it seems that the proposed authoring method is overly complex and slow. We note that 50% of the participants (i.e., 5 out of 10) rated the system with a score lower than 70, which is considered the minimal threshold for a system to be considered acceptable. Considering the large size of the ontology entities used during the authoring, which adds to the complexity of the authoring system, it would be worth evaluating the learning curve of the system up to the point where the authors of the AR scenario become comfortable using it effectively, an aspect with significant impact on industrial adoption. Overall, it seems that the potential advantages of such an authoring approach come at the cost of ease of use and human-centered design, and conflict with relevant industrial expectations regarding required expert knowledge, adaptiveness, and reliability. Finally, a recent work conducted by T. Wang et al. [144] proposed CAPturAR, an in-situ programming tool that supports users in authoring context-aware applications by referring to their previous activities. The system consists of a customized AR head-mounted device with multiple camera systems that allow for non-intrusive capturing of the user's daily activities. During authoring, the captured data is reconstructed in AR with an animated avatar and virtual icons that represent the surrounding environment. The proposed system framework employs an object-oriented programming metaphor allowing the user to define events with human actions and context attributes, and finally create context-aware applications (CAPs) by connecting events with IoT functions. From a hardware perspective, the system is composed of a VR headset and a touch controller, a stereo camera for video see-through AR and object detection, a fisheye camera for action recognition, and an AR HMD connected to a backpack computer. A preliminary remote user study with 12 participants was conducted in a laboratory setting to evaluate the event definition precision and the overall usability of the system. The authored CAPs consisted of activities like reading a book, having a meal, taking a pill, and drinking coffee.
While all participants managed to successfully complete the authoring tasks and the obtained usability scores indicated good usability of the proposed user interface,
the study did not report on the task difficulty and authoring time, nor on the usability of the authored CAPs. Even though the proposed system was evaluated on four different scenarios, including a sequential task tutorial relevant to our research, the authoring methodology and the hardware configuration question its suitability in an industrial environment, particularly its effectiveness in authoring manual assembly operations. The detection performance, together with the processing delay, raises significant concerns, particularly for video see-through AR visualization.

2.2.8 Conclusion

Numerous AR authoring methodologies and tools have been proposed in the literature in the last decade; however, their adoption in real-world use cases seems to be extremely low. Based on the analysis performed in this section, our hypothesis is that the AR authoring tools proposed before the release of modern AR HMDs like Hololens (cf. [86]) were limited by hardware capabilities. We observe that many of the proposed hardware configurations dedicated to immersive AR authoring are complex, making it hard to replicate and evaluate these proposals in different environments and setups. Other authoring techniques require the use, generally in combination, of desktop PCs, software tools and AR devices, together with specific technical expertise, often programming skills, for creating relevant AR work instructions. We observe that many of the recently proposed authoring workflows require data preparation and training (i.e., machine learning approaches), expert hardware setup and calibration, and multiple authoring stages, to name just a few relevant concerns. The complexity and the expertise required to develop and operate these authoring systems potentially explain the complete lack of comparative evaluations between them. Manual authoring, either by programming, by animation tools or by a combination of both, not only requires specific expertise as well, but is usually a very time-consuming task, a claim supported by most research works dealing with AR authoring [93]. Automatic authoring of AR instructions generally requires some form of user input (e.g., system calibration, training, post-processing, etc.) and a set of prerequisites, including existing data like printed documents and CAD models, controlled environments, and dedicated tools for training and processing the data. Authoring techniques that propose generating AR content fully automatically are neither effective nor suitable for complex tasks [31]. In addition, the lack of field experiments and in-depth user feedback questions the usability and effectiveness of the proposed AR authoring systems in industrial assembly environments. To conclude, we observe that the AR content authoring approaches proposed in the literature generally do not address the industrial challenges and requirements (e.g., simple, reliable, efficient, adaptable, non-technical, etc.). The status of AR adoption in industry supports this claim, indicating that at least some of the considered industrial requirements and expectations are not fulfilled by current AR authoring systems.
3 AR Training and Guidance
In this section, we present a survey of AR training and guidance systems and methods, focusing on those designed to support workers in manual sequential assembly operations. We aim to identify how AR was used in similar research works, including information representation (e.g., visual contents like text, video, 3D, etc.), visualization mediums (e.g., HHD, HMD, SAR, etc.), registration methods (e.g., head, world, etc.), picking techniques (e.g., pick-by-light or pick-by-vision) and other relevant AR topics.
3.1 AR Training Versus Traditional Training Numerous studies have tried to demonstrate the benefits of AR compared to traditional training procedures. Most often, these studies compared AR approaches with paper-based instructions by measuring the time required to complete a task, the number of errors that occurred during the task, and the perceived mental workload reported by the participants [61]. [4] demonstrated that superimposing 3D animated drawing could ease the assembly processes compared to traditional user manuals. Vanneste et al. [142] compared the effects of verbal, paper-based, and AR instructions on the assembly performance and reported that AR outperforms traditional guidance methods in terms of productivity, quality, stress, help-seeking behavior, perceived task complexity, effort, and frustration. The findings of a field study on AR-assisted assembly conducted by Koumaditis et al. [65] indicated as well improvements in physical and temporal demands, effort, and task completion time when using AR. Smith et al. [129] studied the effects of a mobile AR fault diagnosis application on the performance of novice workers compared to a group of experts with no AR support. The study reported significantly better performance in the AR group in terms of assembly time, accuracy, and cognitive load. Polvi et al. [113] compared the effects of AR guidance versus pictures in an inspection use case and reported as well significant improvements in completion time, error rate, and cognitive load when using AR. Tang et al. [136] demonstrated as well that AR could improve significantly the performance and relieve mental workload on assembly tasks, in comparison to printed manuals or images displayed on LCD or HMD. Quite a few other studies evaluated as well the effectiveness of conveying instructions via AR (see-through or projected), demonstrating benefits when compared to traditional ways of training or guidance like paper instructions, desktop monitors or tablet displays [20, 56, 66, 77]. Other studies suggested that AR leads to better completion times only when participants are required to execute difficult assembly tasks [12, 39, 115, 130]. The literature uncovers however few contradictions and drawbacks when comparing AR-based training approaches with the traditional ones, particularly when it comes to industrial sectors. Some studies showed that classical training methods
still perform better than AR in some respects (i.e., completion time) [39, 43, 149]. Another comparative study between paper-based and head-mounted AR instructions, conducted by Werrlich et al. [149], reported improvements in the error rate but significantly longer assembly completion times when using AR.
3.2 Use Cases in Manufacturing Sectors Industrial studies have investigated the suitability and effectiveness of AR applications for manual assembly and maintenance for more than two decades. As identified by Westerfield et al. [150], earlier research in utilizing AR technology for training has largely involved procedural tasks where the user follows visual cues to perform a series of steps, with the focus on maximizing user’s efficiency while using the AR system. Generally, the literature demonstrates the effectiveness of AR technology in improving performance and training time on various manufacturing tasks including assembly [3, 66, 142], maintenance [30, 128], and inspection [100, 113, 139]. One of the first AR industrial exploration studies was conducted by Caudell and Mizell [18] in the context of aircraft manufacturing. They developed an industrial AR application to assist with assembling aircraft wire bundles, aiming to improve worker efficiency and lower costs by reducing reliance on traditional guidance materials (e.g., templates, diagrams, and masking devices). The information visualization in AR consisted in wireframe graphics displaying the path of the cable to be added to the bundle. Reiners et al. [116] conducted another early investigation that involved the use of AR to assist with car door lock assembly. The proposed system employed the usage of CAD models as well, namely the ones of the car door and the internal locking mechanism, for guiding users through the assembly process in a procedural step-by- step manner. They introduced voice commands to move from one assembly step to the next. The implementation however was not stable enough for novice users to gain tangible benefit from the proposed AR system in comparison with traditional guidance. Such early research works led to the formation of several research groups dedicated to exploring the use of AR in industry. One of these groups, ARVIKA, found that the use of AR in industrial contexts can be beneficial, as the expensive nature of AR systems is often offset by the reduced development time and product quality improvement. One example is represented by a use case where design engineers were able to rapidly evaluate ergonomic aspects of different aircraft cockpit prototypes by overlaying virtual layout elements over real cockpit mockups, leading to significant improvement in the design process [38]. A similar research group, STAR, proposed an AR system that allows a technician to capture and transmit real-time video feed of his working environment to an off-site specialist [114]. The specialist annotates the video with drawing and text, visual elements that would then appear spatially registered in the worker’s augmented view. This was one of the first AR
remote collaboration proposals, which turned out to be an effective means of communicating physical procedures from an off-site assembly expert to one or multiple on-site trainees, possibly in different locations. Fournier-Viger et al. [36] proposed an AR application to support military mechanics conducting routine maintenance tasks inside an armored vehicle turret. The field experiment reported that AR allowed users to locate components 56% faster than when using traditional head-up displays and 47% faster than when using monitors. In addition, AR usage was more efficient from a physical perspective compared to the other methods, as it reported less overall head movement. Finally, a qualitative evaluation suggested that participants found the proposed AR system intuitive and satisfying. Numerous recent studies like the European research projects STARMATE [123, 124], SKILLS [148] and SYMBIO-TIC [59, 134], and companies such as Honeywell [80], Porsche [109], and Mercedes-Benz [105] also reported significant improvements in production time, with over 80% reduction in the error rate when using AR-based training solutions. Industrial reports show that AR has started to be adopted as a novel experiential training technology for faster training and upskilling of manufacturing workers on complex tasks, with the potential to reduce new hire training time by up to 50% [22].
3.3 Non-industrial AR Training Research Works In addition to industrial applications, AR has been used as well to assist with assembly on a smaller scale, in controlled environments and laboratory settings. One such study was conducted almost two decades ago by Tang et al. [136], which compared assembly operations of toy blocks with four different instructional modes including traditional printed manual, instructions displayed on an LCD monitor, static instructions displayed via a see-through HMD and spatially registered 3D models displayed as well in an optical HMD device. Assembly time, error rate and mental workload were measured. The experimental results demonstrated that AR could significantly improve the performance (i.e., 82% reduction in the error rate) and relieve mental workload on assembly tasks, in comparison to printed manuals or images displayed on monitors or in HMD. The study uncovered that the AR approach as well was particularly useful for diminishing cumulative errors, i.e., errors resulting from previous assembly mistakes. However, AR does not appear to have a statistically significant time advantage compared to the other considered approaches. A similar study conducted by Robertson et al. [117], found that participants assembled toy blocks more quickly when using 3D registered AR contents in comparison with 2D non-registered AR and graphics displayed on a HUD. Baird and Barfield [5] conducted a real-world, laboratory setting assembly study that involved components on a computer motherboard. To execute the required assembly tasks, participants used different instructional media including printed material, slides on a computer monitor, and screen-fixed text on opaque and
see-through HMDs. The HMD displays reported significantly faster assembly times and lower error rates compared to the other methods, findings supported by related research works in various domains including furniture [154] and medical assembly [104]. The potential benefits of AR have been exemplified in numerous recent research works as well, including proposals and comparisons between innovative visualization methods and systems like spatial augmented reality. Such a study was conducted by Funk et al. [39], which compared four visualization methods including paper, smart glasses (HMD), smartphone (HHD), and in-situ projection (SAR). The evaluation experiment consisted of conveying the same instructions using each of the four methods for executing a series of Lego Duplo tasks. The task completion time, errors, and the perceived mental workload were measured. The results of the study reported no significant difference regarding the completion time between in-situ projections and paper instructions. For the HMD and HHD, a significantly slower time was reported compared to the paper instructions. Regarding the error rate, more errors were reported by HMD participants compared to those using HHD devices and SAR. Regarding the cognitive load, SAR was perceived to be the lowest and the HMD the highest. Blattgerste et al. [11] investigated different in-situ instructions for assembly tasks by comparing four visualization types including 3D in-situ, 2D in-situ, 3D wire, and side by side. The experiment was conducted with 24 participants and likewise consisted of solving a standardized LEGO Duplo sequence. A faster completion time and lower error rate were reported by participants using 3D in-situ visualizations, but no significant difference was reported between the systems in terms of perceived mental workload. In another study conducted by Blattgerste et al. [12], AR devices including Microsoft HoloLens, Epson Moverio BT-200, and smartphone were compared to paper-based instructions. Similarly to Funk et al. [39], the paper-based instructions reported the fastest task completion time, while the Microsoft HoloLens reported the lowest error rate but a significantly higher cognitive load. In a very similar evaluation environment (i.e., LEGO Duplo tasks), Smith et al. [130] analyzed the effect of different interaction modalities (i.e., touch and voice) and visualization modes (i.e., 3D model, text annotation, and in-situ video) regarding task completion, error rate and perceived mental workload. The in-situ video reported the fastest completion time, followed by world-registered CAD models and finally text annotations. In-situ videos also reported the lowest mental workload, while text annotations reported the highest. The method by which the participants interacted with the AR content (i.e., touch versus voice) had little to no effect on the task performance. Finally, the authors conclude that in-situ videos presented in AR represent a very effective way to convey procedural instructions for assembly tasks. Radkowski et al. [115] investigated the effect of abstract (e.g., lines, colors, shapes) versus concrete visual features of 3D content for different degrees of difficulty in manual assembly tasks. The study involved 33 participants and evaluated abstract visualization versus concrete visualization versus paper-based instructions. The experiment consisted of assembling a mechanical axial piston engine with a total of 16 manual assembly process steps; two of the steps were rated with a high degree of
difficulty, while the others were rated with a low degree. The experimental results indicate that the abstract visualization leads to longer completion times and higher error rates compared to concrete visualizations. The concrete AR visualization and paper-based instructions reported similar completion times, while the abstract visualization took nearly twice as long. The study suggests that concrete visualization is more suitable for relatively simple tasks. The reported completion times for the two complex tasks were ambiguous among the three methods, therefore a claim could not be made. G. A. Lee and Hoff [69] proposed a technique to enhance the use of instructional videos by using AR. They observed that recording and sharing videos has become easy and popular; however, as video clips are limited to a two-dimensional representation of the task space, it is hard for the viewer to match the objects in the video to those in the real world. Following the instructions in the demonstration video therefore becomes difficult, especially when the task involves navigation in a physical space. To overcome this problem, the authors proposed augmenting task instruction videos with AR visualization of spatial cues, including virtual world-registered circles indicating the physical position of the task in the real world and screen-registered arrows indicating the view focus of the video. The experimental results showed that video with AR spatial cues helped participants better understand where the instructions were referring to, which led to reduced mental effort and improved task performance compared to video-only instructions. An approach to make AR systems more efficient and performant was proposed by Westerfield et al. [150]. They investigated the combination of AR with Intelligent Tutoring Systems (ITS) to assist with training for manual assembly tasks on a computer motherboard, by combining AR graphics with adaptive guidance from the ITS. The study found that their proposed intelligent system improved test scores by 25% and task performance by 30% compared to the same AR training system without intelligent support. The authors described AR as the "ideal tool for situations which require objects manipulation such as manual assembly".
3.4 AR Adoption in the Industry
We observe that, despite the exponential progress that AR has experienced in recent years, no significant breakthrough can be noted in industrial sectors, due to various significant organizational and technical challenges [81, 82]. Recent surveys show that AR systems have mostly been designed and evaluated in controlled environments, under laboratory settings. Dey et al. [26] found in an AR usability study that only 54 out of 369 reviewed studies are field studies. [85] also noted that most user studies reported on MR/AR are conducted in laboratory settings. Egger and Masood [29] identified that only 30 out of 291 papers with a focus on AR in manufacturing have an industrial context and call for future research to focus on AR in practice. Furthermore, it seems that requirements identified in the academic world differ from the ones identified in the industrial context [82]. de Souza Cardoso et al. [25] identified that a considerable number of developed applications (48%) are not focused on a specific industry,
but rather developed for a general segment and that 97% of the studied AR applications are tested to ensure their viability and compare AR pros and cons against traditional methods, while only 5% are implemented in production. Palmarini et al. [110] claimed that AR technology is not sufficiently mature for complying with strong industrial requirements such as robustness and reliability. Another recent study [82] identified and classified AR challenges into three main categories: technology (e.g., tracking/registration, authoring, UI, ergonomics, processing speed), organization (e.g., user acceptance, privacy, costs), and environment (e.g., industry standards for AR, employment protection, external support) and uncovered a gap between the challenges identified in the academic world versus the ones identified in the industry. The authors suggested that field studies must be conducted to identify the real needs and challenges of industrial sectors and therefore to ensure the successful implementation of AR systems in these environments. Kim et al. [63] defined a set of relevant AR-related research topics that should be considered when developing an AR system for the industry, including interaction techniques, user interfaces (UI), AR authoring, visualization, multimodal AR, AR applications and evaluation. When it comes to AR for manufacturing industries, X. W. S. K. Ong & Nee [108] uncovered challenges like time-consuming authoring procedures and appropriate guidance for complex, multi-step assembly tasks. Further, optimal ways for conveying work instructions in Industrial Augmented Reality (IAR), still an open topic of research [41], should be considered as well in the designing of an AR training tool dedicated to industrial usage. Industrial AR surveys show that most AR solutions are designed outside the context of their expected usage, without the direct involvement of the end users. Consequently, these systems fail to answer effectiveness and usability requirements imposed by real world use cases, potentially explaining the low adoption rate of AR solutions in industrial sectors. Moghaddam et al. [92] supported this hypothesis as well, by claiming that several AR fundamental questions must be addressed, including best modes of task information delivery through AR, task-specific effectiveness of AR versus traditional assistance, and identifying potentials and pitfalls of using AR technology as assistive tool in industrial sectors. The authors of the study observed as well that the state-of-the-art in industrial AR is focused on AR content creation and authoring [2, 16, 30, 42, 144, 156], object tracking and registration [66, 146, 147], effectiveness of various modes of AR (e.g., head-mounted, hand-held, projector, haptic) [12, 24, 113, 142], and AR for remote assistance [99]. Further, they argued that to support manufacturing workers in performing tasks that involve both complex manipulation and reasoning, AR requires addressing three fundamental questions: 1. What is the most effective way of conveying assembly task information to the worker? How the proposed information conveyance methods impact efficiency, error rate, learning, independence, and cognitive load? 2. What are the affordances of AR as a training tool prior to task performance versus as an assistive tool during task performance?
3. How can future AR technologies transition from passive conveyance of task information to intelligent and proactive teaming with the worker?
3.5 Conclusion
The potential of AR training and guidance has been evaluated in numerous research works and domains, in both real-world industrial use cases and laboratory settings. The findings reported in the literature regarding the benefit of using AR over traditional training methods are positive, even though some are contradictory. It has been consistently demonstrated that AR leads to significantly fewer errors and a better understanding of procedural assembly tasks, and it has also been suggested that AR guidance generally decreases the perceived mental workload of workers. The literature also indicates that AR improves assembly times more often than not; however, this evaluation metric seems to be closely related to the characteristics of the use cases. We note, however, that most of the proposed AR training systems do not consider content authoring challenges. Generally, studies and experiments are conducted to demonstrate the benefits of AR training, while the authoring efforts required to create the corresponding AR work instructions, including the necessary expertise, time and resources, are almost never evaluated simultaneously – i.e., during the same evaluation procedure and context – with the training proposal. We also observe that most research papers, including recent industry-focused ones, either compare AR training systems with traditional methods such as paper-based and mentor-based instructions [149], or focus on comparing AR-based instructions across different hardware types such as projection-based AR, hand-held devices, or HMDs. Only a few studies compare different visualization types using the same AR hardware type [61], which makes it difficult to identify the best visualization techniques for specific assembly scenarios while also taking into account human and environmental considerations. Further, we identified a set of concerns that should be considered when elaborating AR training systems for industrial usage. S. K. Ong et al. [106] suggested that an efficient and suitable user interface that can be conveniently used to interact with the augmented manufacturing environment should be provided. Nee et al. [103] discussed the importance of the design phase of an AR application, such as the development of highly interactive and user-friendly interfaces, providing valuable insight for making AR an interesting tool in the manufacturing and engineering field. Swan and Gabbard [133] argued that there is a need to further develop AR interfaces and systems from user-centered perspectives. It is interesting to note, however, that most of the technical concerns discussed in the literature have been at least partially addressed by state-of-the-art AR HMD devices like HoloLens 2, including the optical system and field of view, sensors, registration and accuracy, processing power and latency, natural interaction and even battery life. Challenges like content authoring, information representation and visualization, user interface and interactions are becoming more prominent in recent
AR surveys, potentially suggesting that state-of-the-art AR devices have reached a certain maturity that pushes AR research more towards content- and human-related topics rather than technological ones. Even though technological concerns are being addressed nowadays and will eventually be resolved or at least improved in the future, it seems that, at present, more importance should be put on defining guidelines and elaborating AR applications that target specific use cases, instead of general-purpose applications that may prove ineffective in real-world contexts. We believe that adopting this strategy would not only help design and implement usable and more effective AR systems adapted to specific assembly use cases, but would also provide feedback for refining technical requirements, particularly for the industrial sector.
4 Information Conveyance Mediums for AR
In this section we present a summary of the most relevant visual information conveyance mediums employed by state-of-the-art AR training systems. The objective of this analysis is to classify the AR transmission mediums and techniques proposed in the literature and to identify the ones that best address the requirements of manufacturing sectors. Visual information conveyance in AR can be achieved by using multiple hardware technologies, including Head Mounted Display (HMD), Handheld Display (HHD), desktop PC and projector. A recent survey [41] shows that HMDs and HHDs are each used in 27% of the cases, desktop PCs in 30%, whereas projection and SAR are only used in 5% of the reviewed AR use cases. We note that information in AR can also be delivered by audio and haptic means; however, our case study shows that these methods are not well adapted to the considered research context, except perhaps to complement the visual information, which is why this chapter does not investigate these topics in depth.
4.1 Handheld Display (HHD)
HHD AR applications generally run on a smartphone or tablet, therefore the display is handheld rather than head-worn. There are several important differences between using a smartphone as an AR interface and using a desktop interface, including limited input options, screen resolution and graphics support. Even though high-resolution cameras, touch screens and gyroscopes have already been embedded in recent HHD AR devices [103], to become mainstream, AR interfaces should provide interaction techniques that allow people to interact with AR content in a much more intuitive way [9]. HHD devices provide great mobility and are cost-effective and easy to use compared to HMD devices; however, these advantages come with significant implications from a usability perspective: they do not address the hands-free requirement, the interaction techniques are extremely limited and not adapted for
AR environments, while their relatively small displays also contribute to poor AR user experiences.
4.2 Head Mounted Display (HMD)
Two types of HMD can be identified in the literature: see-through (or optical) HMDs and video display HMDs. See-through HMDs are based on semi-transparent mirrors, allowing the user to see the real-world environment while, at the same time, reflecting computer-generated information into the user's eyes. With a video see-through HMD, the real-world view is captured with video cameras, the computer-generated images are electronically combined with the video representation of the real world, and only then is the result displayed to the user [118]. A great disadvantage of video display HMDs is their higher latency compared to see-through HMDs, due to the larger amount of information that must be processed. We note that this technological limitation can have serious implications for the suitability of such a system in human-operated manufacturing environments, as a delayed visualization of the real-world view might represent a safety concern in manufacturing sectors. This concern is resolved by optical HMDs; however, a potential disadvantage of this technology is the limited field of view (FoV), a well-known technological challenge with a negative effect on the user experience. The physiological and psychological impact of wearing HMD AR devices represents another concern that can have a significant impact on the adoption of such technology, particularly in industrial sectors. The main advantages of HMDs are portability, a good user experience thanks to spatially registered visual contents [110] and, more importantly, the fact that they address the hands-free requirement, offer flexibility, and do not require setup or maintenance. In addition, systems using Head-Mounted Displays (HMD) mimic how people access spatial information naturally [155].
4.3 Desktop/Monitor-Based
Another medium for conveying AR information is a low-immersion technique using 2D monitors or screens [13, 45, 136]. This method has been intensively used to demonstrate the benefits of AR compared to paper instructions because it does not require a complex setup, can be easily deployed and tested, and is not costly. However, this technique lacks one of the most important AR principles, which is the immersion of the user in the AR environment. As noted by Billinghurst et al. [9], in an AR experience an intimate relationship is expected between 3D virtual models and the physical objects they are associated with. The poor expression of the spatial relationship between the virtual content and the real-world environment in a 2D medium does not take full advantage of AR, potentially making it hard for
novice users to understand complex manufacturing operations and setups. Unlike immersive AR, where users can move naturally to navigate the contents, in desktop-based AR movement, orientation and interaction are controlled through abstract navigation interfaces (e.g., keyboard, mouse, or joystick) [155], leading to a user experience that is less intuitive and, potentially, to training that is more time-consuming and cognitively demanding.
4.4 Spatial Augmented Reality (SAR) A hands-free AR information conveyance alternative is represented by SAR (Spatial Augmented Reality). SAR does not require the use of HMDs or any other wearable devices, eliminating as well the need of visual multimedia assets. Numerous studies assess the potential of this AR information conveyance technique [13, 27, 40, 140], some claiming that it is the most fit for industrial constraints [84]. We argue that SAR and other AR-projection techniques are not adapted to many human-operated manufacturing sectors, because of the initial effort required to install and calibrate such systems, their lack of mobility and adaptability. The initial setup of SAR systems is generally extremely complex, composed of frames/supports/tripods, depth cameras, projectors, PC workstations, monitors, joysticks, lighting systems and markers [84, 140]. Another study dedicated to evaluating SAR technology for conveying assembly instructions, reveals a set of technical limitations of SAR systems as well, including the digital imagery that is projected at a fixed location and will not move along with the part when it is repositioned, the limited projection canvas and the projector's constraints (e.g., throw ratio, light output, contrast ratio and resolution).
4.5 Screen-Based Video See-Through
Another hands-free AR medium alternative was proposed by Fiorentino et al. [32] with a Screen-Based Video see-through Display (SBVD) technique. This approach consists of an augmented visualization on a large screen combined with multiple fixed and mobile cameras, and is claimed to be a valid alternative to HMDs and an improvement over monitor-based desktop AR. The authors suggest that this technique provides a less immersive augmentation (i.e., not co-located with the user's line of sight) but without the drawbacks of current HMDs. We note, however, that the hardware setup is complex and rigid, while the potential benefits of SBVD systems are called into question by the features offered by state-of-the-art HMD devices like HoloLens 2.
5 Industrial Challenges and Requirements
Even though the potential of AR has been demonstrated in numerous research works for more than two decades, AR technology has not yet been widely adopted in industrial sectors [14]. The literature uncovers major concerns regarding the industrial adoption of AR, including the high cost and uncomfortable wearing experience of AR devices, questionable tracking robustness and registration accuracy, unsuitable user experience, poor design, unrealized integration with enterprise systems, and the lack of flexible and easy-to-operate AR work instruction authoring systems, to name a few [14, 37, 110]. In the following, we provide a classification of the most relevant challenges that an AR training system should address in order to be considered for industrial adoption.
5.1 Technological Challenges and Expert Knowledge
As claimed in [103], the current design and authoring of AR work instructions depend on specifically customized code development, which emphasizes the importance of the design phase of an AR application, such as the development of highly interactive and user-friendly interfaces, and provides valuable insight for making AR an interesting tool in manufacturing and engineering fields. It seems that most AR authoring tools are generally operated by highly qualified developers rather than end users. This situation leads to a high technical threshold, high costs and low adaptability [110]. It is important to note that, from an organizational perspective, the technological challenges are not as relevant as one would think. Overcoming technological concerns, especially those related to tracking and registration in AR (e.g., real-time 3D object registration), requires tremendous effort, while success is not guaranteed. Therefore, to support IAR adoption in manufacturing sectors, we consider that finding ways to avoid or diminish the negative impact of these limitations on the user experience might be the best strategy in designing an AR training system dedicated to industry, until technological advancements are able to address these concerns reliably.
5.2 User Acceptance and Shop Floor Reluctance
From the shop floor perspective, the goal is to obtain better training while relieving the work duties of the people in charge of training. The authoring of the AR work instructions would, however, come as a new responsibility for shop floor managers or experts, the potential and possibly most suitable end users of such an AR system [78]. This concern becomes even more important when dealing with the reluctance
of shop floor personnel to change, to learn and assimilate new technologies, and to adopt new tools within the shop floor context [23, 76, 141]. Therefore, to address user acceptance, the authoring procedure should ideally not require technical expertise, let alone the expertise of multiple individuals (e.g., multiple authoring stages that require different expertise); it should be easy to learn and use, reliable, and flexible enough to successfully address different assembly environments. Regarding training, the system should be designed in such a way that novice workers would prefer it over traditional training methods. To address reluctance, the training system should allow one to perform the training easily and effectively, independently of one's AR or assembly experience, cognitive qualities, preferences, and learning style. Even though user acceptance requirements for training are presumably less strict than for authoring, as novice workers do not have the same decision-making power as experienced workers or shop floor managers, training feedback still represents a key aspect in the acceptance of such an AR training system.
5.3 System Performance and Usability
Performance and usability represent two critical indicators in supporting domain experts in authoring AR applications without technical knowledge [45]. To be a success on the shop floor, the AR training system must prove its usability and effectiveness right away. The authoring system should allow a shop floor expert to capture his assembly expertise quickly and efficiently, while the information conveyance should be as explicit as possible to support an accurate execution of the assembly operations by novice workers. At the same time, the information conveyance should stimulate a fast assembly process without compromising product quality. Human supervision during training should not be necessary, or should at least be minimized, to relieve the shop floor experts in charge of the training.
5.4 Safety, Psychological and Physiological Concerns
The AR system must not introduce potential safety risks during the authoring and training procedures. Until reliable and comprehensive field experiments regarding the psychological and physiological impact that AR can have on humans are conducted, the design of an AR system should carefully consider these implications. The most suitable AR devices, visualization methods and interaction techniques should be identified and employed in the system implementation to optimally address human concerns in relation to application usage. For instance, minimizing the interaction time with the virtual elements to reduce eye strain, providing a user interface that limits head movements and hand gestures, or providing a fast user experience that shortens the AR session as much as possible to reduce physical and mental effort and
discomfort are approaches that partially address human psychological and physiological concerns.
5.5 Efficiency and Viability
From an organizational perspective, the main challenge is to provide better training without compromising product quality. More specifically, to be considered a success, the proposed AR training should be reliable, more flexible, and less costly than the traditional one, while producing the same or better results and respecting organizational norms and requirements. To ensure industrial viability, the proposed AR training system must also be safe, effective, and adaptable to new scenarios without the need for expert knowledge or technical setup. The system should be mobile, easy to manipulate and ready to use on demand, independently of the manufacturing location or context. A technical setup should ideally not be necessary, even in new assembly use cases, except perhaps for initial (i.e., one-time) system calibrations or procedures which do not require expert support. The maintenance of the solution, if necessary, should also be performed by the shop floor experts. The system should allow easy integration of technological features that are not yet available or have not reached a maturity level suitable for industrial usage, even though this would potentially imply the need for specific technical expertise. However, system extensions should be carefully considered so as not to compromise usability and effectiveness. It is preferable that the authoring does not depend on external services, especially those that involve recurrent costs. The creation of the AR work instructions should ideally not depend on existing resources or external services, especially if these are not reliable or do not function consistently as anticipated.
5.6 KPI and Costs
Although KPIs are a highly influential aspect in the adoption of novel technological tools in the industry, they cannot be measured unless the considered tool is deployed and used in a real-world use case for months, potentially years. In addition, to evaluate the performance of IT systems powered by emerging technologies, standard KPIs need to be adapted to include both traditional metrics and specific indicators [89]. To partially overcome these challenges, the solution must be able to demonstrate its potential benefits to the stakeholders very quickly. To increase the chances of success, the initial and running costs should be minimized, as the budgets for innovation in manufacturing factories are generally limited.
6 An AR Training System Adapted to Manual Assembly
In this section, we propose an AR training methodology dedicated to manual assembly environments that focuses on addressing the industrial challenges and concerns discussed previously. The proposal is elaborated by considering the manual assembly environment of a boiler-manufacturing factory, where a long-term industrial case study was conducted for this purpose. The system implementation is developed for the Microsoft® HoloLens 2 [86], using Unity 3D [138] and MRTK [88]. We focused on providing a system that can be implemented, deployed, and evaluated in industrial sectors using publicly available, state-of-the-art AR technology. We note that the proposed AR training system described in this section was presented and evaluated in Lavric et al. [67, 68].
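To make the targeted technology stack more concrete, the following is a minimal, hypothetical Unity (C#) sketch of how a single authored work-instruction step might be rendered at training time. It uses only standard Unity components; the class and field names (e.g., StepDisplay) are illustrative assumptions, not the actual implementation evaluated in [67, 68].

using UnityEngine;
using UnityEngine.Video;

// Illustrative sketch only: displaying one authored instruction step
// (text, location arrow, optional demonstration video) during training.
public class StepDisplay : MonoBehaviour
{
    public TextMesh instructionLabel;   // "what": brief textual description
    public Transform locationArrow;     // "where": world-anchored arrow
    public VideoPlayer videoPanel;      // "how": optional first-person video

    public void Show(string what, Vector3 arrowPosition, Quaternion arrowRotation, string videoUrl)
    {
        instructionLabel.text = what;
        locationArrow.SetPositionAndRotation(arrowPosition, arrowRotation);

        if (!string.IsNullOrEmpty(videoUrl))
        {
            videoPanel.url = videoUrl;  // file captured in-situ during authoring
            videoPanel.Play();
        }
        else
        {
            videoPanel.Stop();
        }
    }
}

In practice, such a component would be fed with the text, arrow pose and media captured by the shop floor expert during the authoring procedure described in the following sections.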
6.1 Authoring
6.1.1 Concept
In this section we present the authoring methodology dedicated to creating AR step-by-step work instructions for manual assembly operations. We aimed to provide a human-centered AR authoring tool adapted to shop floor experts, while considering the variance of people, products, processes and working conditions, as suggested by literature findings regarding industrial AR systems [34, 108]. A summary of the most relevant challenges around which the proposed authoring methodology was elaborated includes:
• Generic and consistent, independently of the manufacturing context
• No installation setup or prerequisites required
• No technical expertise required
• Formalized and easy-to-learn authoring procedure
• Dedicated to shop floor experts; fast, intuitive, and easy to use
Assembly Process Description Formalization
The literature does not provide a standardized way of describing assembly expertise in AR. Standardizing the description of assembly operations seems to be essential in the industrial context, where work instructions represent a directive document of the enterprise that must be designed and audited in a standardized way before being released to the production site [55, 73]. There is no need to point out the other advantages of industrial process standardization in general.
However, the literature uncovers a set of concerns when it comes to standardization. One is that too many rules might prevent human actors from doing the right thing, especially in states of abnormal operation where they would need strong but flexible guidance [48]. Rules specifying the exact operations to execute can have a detrimental effect on action because they do not allow the performing person to develop an underlying plan of their own, but instead further the atomization of actions and the focus on micro-difficulties. In non-routine situations, reliance on standards may turn into over-reliance [47], potentially persuading the creator of the AR work instructions to follow the rules rather than using his abilities and knowledge to describe the assembly operations clearly and effectively. We address these concerns by trying to identify the best compromise between formalization and flexibility, as further described.
Assembly Chunking
An analysis of the composition of assembly instructions, as defined in existing paper instructions, reveals that assembly instructions are generally composed of multiple actions or tasks. For example, the instruction "Screw the crosspiece with 4 screws using the gray screwdriver" requires the worker to perform additional actions that are not explicitly specified in the text description. The missing information includes the picking of the screws and the screwdriver and their location, the four screwing locations, and finally the tool manipulation. We note that in paper instructions the location of the assembly components is not specified or described, nor is it explained how the assembly operation should be performed, details that one would need in order to execute the assembly operation successfully and safely. To address assembly description completeness (i.e., missing vital assembly details) and to avoid ambiguity during the creation of the AR work instructions, we propose a technique that requires the author to describe one and only one assembly action per AR work instruction. In other words, each AR work instruction should describe one assembly operation, composed of a single verb (i.e., assembly action) and a single subject (i.e., assembly component). By applying this technique, the assembly operation example presented above is chunked into six discrete actions:
• Action 1: "Grab four screws"
• Action 2: "Grab the gray screwdriver"
• Actions 3 to 6: "Screw the 1st/2nd/3rd/4th screw"
Advantages of a similar chunking technique were demonstrated in Tainaka [135] and proposed as guidelines for designing AR-based assembly task support. Gattullo et al. [42] also proposed dividing long instructions into atomic actions, a procedure which represents the first stage of their methodology for converting traditional paper instructions into AR-compliant documentation. Benefits of segmenting a dynamic presentation (e.g., a video demonstration) into cognitively manageable chunks were also demonstrated in Carroll and Wiebe [17], a study of spatial procedural instructions consisting of folding an origami figure.
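As an illustration of the chunking rule just described (one verb and one component per AR work instruction), the following C# sketch shows one possible way to model the resulting atomic actions. The type and field names (AssemblyVerb, AtomicAction) are illustrative assumptions, not part of the implementation presented later in this chapter.

using System.Collections.Generic;

public enum AssemblyVerb { Grab, Place, Insert, Push, Screw }

public class AtomicAction
{
    public AssemblyVerb Verb;   // single assembly action
    public string Component;    // single assembly component
    public int OrderIndex;      // position in the assembly sequence
}

public static class ChunkingExample
{
    // "Screw the crosspiece with 4 screws using the gray screwdriver"
    // becomes six atomic actions, one per AR work instruction.
    public static List<AtomicAction> CrosspieceExample()
    {
        var actions = new List<AtomicAction>
        {
            new AtomicAction { Verb = AssemblyVerb.Grab, Component = "four screws", OrderIndex = 1 },
            new AtomicAction { Verb = AssemblyVerb.Grab, Component = "gray screwdriver", OrderIndex = 2 },
        };
        for (int i = 1; i <= 4; i++)
            actions.Add(new AtomicAction { Verb = AssemblyVerb.Screw, Component = $"screw {i}", OrderIndex = 2 + i });
        return actions;
    }
}

The crosspiece example thus maps to six such records, matching the six decomposed actions discussed in the next section.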
The 2W1H Principle
As can be observed in the assembly example just mentioned, each of the six decomposed actions specifies what should be done, in a specific order, without providing details about where the components are stored or installed, nor how the assembly operations should be executed. We note that each assembly operation, independently of its type and complexity, could be described by the three variables what, where and how, to ensure a complete description. What represents a brief description of what should be done, similar to the ones already provided in the paper instructions. Where indicates the spatial location where the assembly operation should be executed. Finally, how describes how the assembly operation is performed. We note that by using these variables to describe an assembly operation in the given order, what-where-how, we also try to replicate the oral human-to-human explanation of manual tasks, a technique that can be observed in written and digital tutorials as well as in real-life explanations. During our case study observations and experiments, we noted this natural, generic explanation process adopted by the shop floor experts while explaining and demonstrating assembly instructions to novice workers. Firstly, the instructor "announces" to the trainee what the assembly operation is about, e.g., "Next, we are going to grab the crosspiece". Secondly, the instructor indicates where the assembly operation takes place by moving or reaching towards the location and pointing with his hand or finger, e.g., "…from here" or just "…here". Finally, the instructor demonstrates how the operation is executed by performing the assembly and possibly explaining it at the same time, e.g., "…you do it like this...". Further, we refer to this principle as the 2W1H (two Ws and one H) principle. This approach is based on the principle known as the Five Ws (who, what, when, where and why) and How, attributed to the Greek philosopher Aristotle, which represent the six basic questions in problem solving. In our case, who (i.e., the novice worker), when (i.e., on-demand, when the training is performed) and why (i.e., authoring the AR work instructions) are known and therefore not considered variables in the proposed approach. By adopting this principle, we aimed to propose an easy-to-follow, formalized authoring procedure that is consistent and straightforward independently of the assembly operations (e.g., grab, place, insert, push, etc.). We also expect that the authors of the AR instructions will commit fewer omissions and provide more accurate and effective descriptions of the assembly operations during the authoring of the AR instructions.
Visual Representation of Assembly Expertise
As noted in a recent study regarding different types of AR visualizations for assembly instructions, when dealing with complex operations, the error rate is often more crucial than the completion time [61], a claim also supported by the findings of our case study, which indicate that product quality is the top priority and should not
be compromised to the detriment of faster assembly times. The literature suggests that concrete visualization is better adapted than abstract visualization, particularly for conveying complex assembly information to novice workers. Further, the visual representation of the assembly should be as complete as possible while avoiding redundancy. AR information representation and conveyance principles, together with authoring concerns, suggest that the optimal way to capture and represent assembly expertise in AR is by using a combination of multiple low-cost visual assets (e.g., text, image, video), as further detailed. A summary of the most relevant findings regarding low-cost visual assets and CAD data is presented in the following two sections.
Low-Cost Visual Assets
To address one of the most important authoring challenges, that is, the creation of the AR contents, our approach relies on low-cost visual assets including text, images, video, and predefined auxiliary models. Our preliminary studies show that, unlike CAD data, low-cost assets including text, images and video can be captured by state-of-the-art AR devices in-situ, during the AR authoring procedure itself. Our findings suggest that low-cost visual assets are sufficient for conveying complex assembly expertise to novice workers, questioning the worthiness of CAD data, particularly in the initial learning phase. As suggested in Jasche et al. [61], videos should always be considered an essential part of AR-based instructions, as they lower the perceived complexity and implicitly the mental effort during the execution of assembly tasks, potentially leading to an error rate reduction as well. In addition, user feedback suggested that, besides video, participants prefer concrete presentations in general and textual information as a complement. Finally, it was noted that the utility of video demonstrations diminishes with the assembly complexity, suggesting that abstract contents, potentially complemented by photos of the final assembly, could be sufficient for representing simple or obvious assembly operations.
CAD Data
Proposing an AR training methodology which does not rely on CAD data for conveying assembly expertise might be intriguing. CAD content creation requires skills such as 3D modelling, computer graphics, animations, and programming. Shop floor experts often have little or no knowledge in creating AR content [42], which means that appropriate AR authoring tools are required to allow a quick and easy way to create content [61]. Another concern regarding the usage of CAD data for authoring AR work instructions is the time and effort required to place the models in the AR scene. The literature tries to deal with this concern by proposing automated registration of CAD models in the real-world environment [64, 132]; however, the tracking accuracy is generally not sufficient to make such a technique reliable and scalable [126]. Our experiments strongly suggest that in similar industrial sectors, the usage of CAD data in AR authoring systems is not yet convenient. A summary of the most relevant concerns related to the usage of CAD data,
identified during our case study, and supported by literature findings is further presented: • Availability and preparation (for AR usage). • Positioning during the authoring: tedious and time consuming, particularly for small assembly pieces like screws and clips. • Registration accuracy: the precision is not sufficient for complex assembly environments, e.g., two holes at 1 cm from each other. • Registration for objects in motion: state-of-the-art proposals for real-time world registration and 3D tracking for objects in motion do not produce acceptable results, particularly for industrial usage. • Occlusion, user distraction and eyestrain. Another potential advantage of not depending on CAD models is minimizing the negative effect on the UX, both during authoring and training, in the case of an imprecise spatial registration with their corresponding real-world objects. We believe that an imprecise registration of other graphical media elements like images and video, which do not require a superposition on the real-world objects, will not affect the UX as much as CAD models would, notably in the case of small objects and mobile components. In-situ Authoring As pointed out by Lorenz et al. [78], shop floor experts are the most suited to create the AR work instructions, instead of AR or IT experts. Our industrial case study findings support this hypothesis, claiming that any industrial tool should be designed for the people that are supposed to benefit from it, because they eventually are the end-users. To address these requirements, we propose therefore an authoring process that is conducted by shop floor experts, in the shop floor environment. By adopting an in-situ approach, we aim to provide the optimal authoring context during the creation of the AR instructions, which is the workstation itself and its components, a context that provides the “raw data” of the assembly process, which the AR work instructions are supposed to convey to novice workers during training. Immersive Authoring (WYSIWYG) Shop floor workers generally have no technical knowledge, and even less skills in manipulating AR authoring tools. Gonzalez et al. [45] claimed that one of the biggest disadvantages of AR was the lack of general awareness of the authoring progress. Better visualization techniques need to be explored to show users their current authoring state, for AR to have a real definitive advantage over desktop. To address this concern, we propose an immersive authoring of the AR work instructions, similarly to the well-known What You See Is What You Get (WYSIWYG) principle, or his AR variant proposed by G. A. Lee et al. [71], What You eXperience Is What You
Get (WYXIWYG). Immersive authoring is the most prevalent form of visual authoring [35], its benefits being leveraged by the interaction usability and ease of use [70]. Quite a few research works have explored the use of immersive authoring tools [58, 71, 119, 154], in which virtual content is created in-situ. Results from prior studies show that interactively placing and manipulating the virtual objects is easier and faster for users to carry out authoring tasks compared to a non-immersive environment [70]. Other literature findings suggest as well moving from current desktop environments to web-based environments, or, in the considered context, to AR-based environments [33]. In addition, if history is an indicator, we can intuitively assume that what worked best in the past will also work in the future. It is reasonable therefore to expect that an immersive AR authoring will lead to similar accomplishments obtained in the world wide web in the past two decades. By adopting such technique, we aim therefore to minimize the authoring time while making it more efficient and intuitive for people without AR experience, while allowing the author as well to visualize and validate the AR work instructions right away. AR Contents Registration Content registration is a core function of many AR systems, still an open issue of research. The three main types of information registration methods for HMD-based AR are head, object, and environment [135]. Marker or fiducial-based represents the most utilized (57%) registration technique among industrial applications. Other techniques – i.e., 2D/3D recognition, sensor-based, location-based and marker less do not comply with industrial robustness and precision requirements and are generally limited to test environments [25]. Consequently, to address the WYSIWYG paradigm, we employ head (head-gaze technique) and environment (marker-based technique) registration methods for AR content registration during authoring. To ensure a precise world registration of the captured data during authoring, the proposed approach depends on fiducial markers, which represents the most reliable AR registration technique as of today. We note that fiducial-based registration requires static assembly environment, a condition that is generally met in similar manufacturing use-cases. It is desirable and expected however that future technological advancements will replace the need of fiducial markers by providing intelligent and precise real-time world registration that relies on natural features, to address any kinds of assembly, including dynamic and mobile. On-the-Fly Authoring To optimize the authoring process even further, we propose the creation of the AR work instructions on-the-fly (i.e., at the same time with the assembly process). We observe that in-situ, WYSIWYG authoring does not only support the adoption of such approach, but potentially makes the creation of the AR work instructions more intuitive and concrete, as the shop floor expert is required to capture his expert
knowledge in AR while performing the assembly activity itself. Furthermore, on-the-fly authoring requires the shop floor expert to perform the workstation assembly cycle only once, while capturing the information successively, for each assembly operation, by applying the chunking and 2W1H principles discussed before.
AR HMD Device (HoloLens 2)
From a hardware perspective, wireless HMD AR devices seem to best suit human-operated manufacturing environments. Handheld devices (i.e., smartphones, tablets) do not fulfil the hands-free requirement, while SAR systems do not seem to be adapted to similar industrial sectors. Further, to support on-the-fly, in-situ WYSIWYG authoring, the use of an optical head-mounted display AR device represents by far the optimal hardware choice. In addition to allowing the author of the AR work instructions to visualize the real-world assembly environment in real time, state-of-the-art HMD devices like HoloLens 2 provide support for natural interaction and world registration, two of the most important features that an AR system is expected to provide for a rich user experience. It must be mentioned that today's AR HMD devices raise numerous concerns including resolution, field of view, weight, short battery life, ergonomics, poor optical system, and costs [57, 75, 106]. As noted in the literature, until a pair of AR glasses can blend as well as any other pair of glasses with a user's style, their popularity and trust will not take off [81]. On the other hand, compared to other AR devices and information conveyance techniques, AR HMDs provide portability and flexibility, and most importantly, a natural and immersive user experience. To minimize the potential negative effect of some of these concerns, our authoring proposal aims to be as fast as possible, without compromising the user's safety or the quality of the authored AR instructions. Field studies which take psychological and physiological considerations into account must be conducted to provide objective evidence regarding the limitations of state-of-the-art AR HMD devices and their implications on humans, especially in the long term. Our experiments conducted during the considered case study suggest that, generally, devices like HoloLens 2 can be utilized for at least one hour without participants making any negative remarks about weight or comfort.
Independent, One-Step Authoring Process
Finally, the proposed authoring approach does not depend on existing resources, external services, the usage of desktop applications, preparation, or post-processing steps, as is the case with most popular commercial AR authoring tools like Vuforia [143] and Guides [87]. The proposed authoring process is performed in a single authoring stage and requires the use of a single hardware device (e.g., HoloLens 2 or another HMD AR device that provides the same functions), a unique QR code, and an
internet connection. The authored AR work instructions are ready to be used for training immediately, once the authoring is finished.

6.1.2 Content Authoring Methodology and Principles

In this section we describe the methodology and the principles that a shop floor expert should follow during the authoring of the AR instructions, from a content perspective. We note that the authoring workflow and the way the author interacts with the user interface to capture the considered media contents are explored separately in the next section. The proposed methodology for capturing manual assembly expertise relies particularly on low-cost visual assets, by applying the assembly chunking technique and the 2W1H principle. The author of the AR work instructions must therefore follow the formalized description imposed by the 2W1H principle for each assembly task. Furthermore, the authoring procedure is an independent process, conducted in an AR HMD, in-situ, in a WYSIWYG manner, performed during an assembly cycle. The visual description of any assembly operation is composed of three elements:
• A text instruction, briefly describing what the assembly operation is about.
• An arrow pointing to the physical location where the operation takes place.
• A first-person view (FPV) image illustrating the final assembly result or an FPV video capture demonstrating how the assembly operation is executed (optional).
Further, we describe how the author of the AR instructions is expected to capture his assembly expertise by applying the authoring principles discussed above. We also provide explanations regarding specific authoring characteristics and how the author can leverage the flexibility provided by video demonstrations to overcome well-known AR technical limitations such as precise world registration of the virtual elements.

What – Text Description

The text instruction should describe the assembly operation in a clear and concise manner. Even though the length of the text description should be reduced as much as possible, capturing important and subtle assembly details should be prioritized and therefore not omitted. The training interface does not use visual cues to highlight specific parts of the text description; the author should therefore bear in mind that every piece of information he provides during text insertion will eventually have the same level of importance from the perspective of the trainee. We do not provide visual cues like bolding and underlining during text insertion simply because such functionality would make the overall authoring procedure more complex and slower, while the potential benefits are not guaranteed. The author should also consider that this represents the first and potentially least relevant
information regarding the considered assembly operation, as it only suggests what must be done without giving the specific details, which are provided at the next steps.

Where – Auxiliary Data (i.e., 2D Arrow)

Where represents the physical location of the assembly operation. The author must provide this information for every AR instruction, as all assembly operations take place at a certain location in the real world. The author is expected to mark the assembly location in the physical world by placing and orienting the location arrow to where he would have pointed his finger during a classical training procedure. The arrow should not interfere visually with the assembly location; depending on the assembly operation and environment, it should be at least a few centimeters away from the actual assembly location, but oriented as precisely as possible towards it. We recommend this to avoid occlusion during training. It is important to note that the main objective of where is to guide the trainee towards the assembly location and its associated media content, not necessarily to indicate a very precise position, which might be impossible. For example, indicating a screwing location on an assembly component with multiple holes that are extremely close to each other (e.g., 0.5 cm apart) will not only be nearly impossible during authoring, but an imprecise registration (e.g., 0.26 cm, which as of today is considered excellent) could also point to the wrong assembly location during training. The author should be aware of this limitation, and, in the case of a highly sensitive assembly environment, the location arrow should be used only as an approximate real-world indication. Clarifying the approximation would then require a demonstration video, as discussed in the next section.

How – Assembly Photo or Video Demonstration

Finally, the author is expected to demonstrate the assembly operation by either taking a photo of the final assembly or capturing a video demonstration while performing the assembly. It is up to the author to decide, based on the assembly difficulty and his training experience, which of the two media contents would be required by a novice worker to understand and perform the assembly correctly. It is recommended that a video demonstration be captured in case of doubt, rather than providing insufficient information with a photo, which could lead to comprehension issues, assembly errors and eventually the need for human intervention during the training procedure. In the case of a photo capture, the author should clearly indicate the final assembly result, if necessary highlighting it by using the most suitable method (e.g., hand, finger, specific tool). The quality of the captured media should be carefully checked to ensure good focus and sharpness. During video capturing, the author can also explain the assembly orally; however, the assembly demonstration should not rely on audio instructions, as listening to them during training might not be possible in very noisy industrial environments.
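As a purely illustrative aid, the following Python sketch expresses the rule of thumb implied by the two points above: when the expected registration error is comparable to the spacing between neighbouring candidate locations (e.g., holes), the location arrow should be treated only as an approximate cue and a video demonstration becomes advisable. The function name, the safety factor and the numeric check are our own illustrative assumptions, not part of the system described in this chapter.

def arrow_is_reliable(feature_spacing_cm: float,
                      registration_error_cm: float,
                      safety_factor: float = 2.0) -> bool:
    # Heuristic: the location arrow can disambiguate neighbouring features
    # only if their spacing is clearly larger than the registration error.
    return feature_spacing_cm > safety_factor * registration_error_cm

# Values taken from the example above: holes 0.5 cm apart, ~0.26 cm error.
print(arrow_is_reliable(0.5, 0.26))  # False -> also capture a video demonstration

Under such a check, the author would rely on the arrow alone only when the assembly geometry leaves a comfortable margin over the registration error.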
It is recommended that the assembly demonstration be performed slowly, with the hands always in the FoV of the camera. For complex assembly operations or environments, the author should carefully indicate and highlight details that are error prone. In the aforementioned location indication example, the author is expected to specifically indicate the real-world position in the video demonstration, so that a novice worker is able to identify the exact location of the assembly if the arrow can only approximate it.

6.1.3 Authoring an AR Work Instruction

Further, let us describe, from a usage perspective, how the content authoring is performed for a single instruction (Fig. 1), following the proposed 2W1H principle. We note that the same process applies for authoring any AR step-by-step work instruction. At any time during the authoring procedure, the author uses hand gestures and voice commands to interact with a 2D panel (Fig. 1a, b, d), displayed in front of him by using the head registration technique. The authoring panel has multiple functions, as further detailed. Firstly, it displays the current assembly instruction number and the authoring step within the current AR work instruction. Secondly, it allows the author to access the AR functions for text insertion, assembly location indication and FPV photo and video capture. Finally, it allows the author to validate the captured data, advance to the next authoring step and create a new AR work instruction. The proposed implementation provides in-app functions like visualizing, selecting, and editing existing AR work instructions; however, these are not discussed, as they are not essential for the authoring procedure itself and are subject to change from one use case to another. Further, let us describe how the author creates a step-by-step AR work instruction guided by the application interface, which follows the 2W1H principle; a minimal illustrative sketch of the data captured at each step is given after Fig. 1.
Fig. 1 AR authoring example. (a) Step 1. Insert text instruction by using the virtual keyboard or dictation; (b) Step 1 validated, step 2 active; (c) Step 2. Positioning of the location arrow by using far interaction technique; (d) Step 2 validated, step 3 active; (e) Step 3. Photo-capture view; (f) Step 3. Photo taken, the author validates or removes it.
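As a rough illustration only, the following Python sketch outlines one possible data model for the 2W1H content captured during the three authoring steps of Fig. 1. The class and field names are our own assumptions and do not correspond to the actual implementation of the described system.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Pose = Tuple[float, float, float]  # simplified world position (x, y, z), in metres

@dataclass
class ArInstruction:
    # One assembly operation described with the 2W1H principle.
    text: str = ""                      # What: short textual description (step 1/3)
    arrow_pose: Optional[Pose] = None   # Where: world-registered location arrow (step 2/3)
    photo: Optional[str] = None         # How: FPV photo of the final result (step 3/3, optional)
    video: Optional[str] = None         # How: FPV video demonstration (step 3/3, optional)

    def is_complete(self) -> bool:
        # What and Where are mandatory; How (photo or video) is optional.
        return bool(self.text) and self.arrow_pose is not None

@dataclass
class ArWorkInstruction:
    # A step-by-step AR work instruction for one workstation.
    steps: List[ArInstruction] = field(default_factory=list)

    def add_step(self, step: ArInstruction) -> None:
        # The authoring panel validates each step before the next one starts.
        assert step.is_complete(), "text and arrow placement are mandatory"
        self.steps.append(step)

In this reading, each validation click in Fig. 1 simply fills in or confirms one field of the current instruction before it is appended to the work instruction.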
What: At this stage (1/3), along with the 2D authoring panel, a virtual keyboard is displayed in front of the user (Fig. 1a). The author uses the keyboard to insert a text briefly describing the current assembly task, using one of two modalities: (1) the natural hand gesture technique, which requires touching the virtual keystrokes, or (2) dictation by voice, a function activated by clicking the microphone button that is part of the virtual keyboard. The user goes to the next authoring step by clicking the validation button, part of the 2D panel.

Where: At this stage (2/3), the author is required to place the virtual arrow indicating the physical location of the assembly operation (Fig. 1c). The arrow is displayed in front of the user by using the head registration technique, as a static object. The author uses his hand to grab, place, scale and rotate it. For manipulating the virtual arrow, the author can use the close interaction technique (cf. [90]), touching the virtual representation of the arrow with his fingers as if it were a real object. The second method is the far interaction technique (cf. [91]), where the user manipulates the virtual arrow from a distance, as illustrated in Fig. 1c. Finally, once the arrow is placed at the desired real-world location, the author validates it by clicking a “validate” button displayed under the arrow. We note that at this stage, the only visible UI elements are the arrow and the corresponding validation button. We adopted this approach to address the HCI principles discussed in chapter “Digital Twin Architecture – An Introduction”. The interface is simple and straightforward, and it does not provide multiple authoring functions at the same time, to avoid ambiguity and error. In addition, by only presenting the virtual elements with which the author is supposed to interact at a given point, we avoid UI clutter and potential safety concerns.

How: At this stage (3/3), the author decides whether capturing an FPV image (Fig. 1d) or a demonstration video is required, along with the text description and the indication arrow, to effectively describe the assembly operation. The author captures one of the two by using the multimodal user interface, either by interacting with the corresponding buttons of the authoring panel or by voice command: “photo”, “video” and “stop video”. We note that capturing the video demonstration from a first-person view is not only necessary because of the hands-free requirement but also because following video tutorials may be challenging when the user’s current viewpoint is not aligned with the one in the video [15]. To address this concern, our authoring approach requires the user to (1) capture first-person view (FPV) instructional videos and then (2) spatially register these videos with the real-world environment so that, during training, the video element viewpoint is aligned with the user’s current view of the assembly component. The author spatially registers the captured media at a convenient location in the real world by using hand interaction techniques. The author is required to position the media preferably in the same field of view as the assembly location, to limit head movement during the training and potentially decrease the assembly time and the effort required by the trainee while following the AR instructions. We note that during training, as presented in the next section, the images and the videos automatically rotate towards the user; therefore, during the placement of
these elements, the author is not required to, and is advised not to, spend time trying to rotate them towards specific real-world locations, guessing the position and orientation of the worker during the AR training experience. We highlight that a fixed orientation of the visual illustrations could also have a negative impact during training in those cases where the height of the author is considerably different from that of the trainee, as observed during our informal experiments in the case study.
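The automatic rotation of the image or video element mentioned above behaves like a yaw-only billboard. The following generic Python sketch, using only a head position and a media position expressed in the same world frame, is an illustrative assumption of how such an orientation can be computed; it is not tied to any particular AR SDK or to the actual implementation.

import math
from typing import Tuple

Vec3 = Tuple[float, float, float]  # (x, y, z), with y as the vertical axis

def billboard_yaw(media_pos: Vec3, head_pos: Vec3) -> float:
    # Yaw angle (in degrees, about the vertical axis) that turns a media panel
    # placed at media_pos so that it faces the trainee's head at head_pos.
    dx = head_pos[0] - media_pos[0]
    dz = head_pos[2] - media_pos[2]
    return math.degrees(math.atan2(dx, dz))

# A height difference between author and trainee only changes the vertical
# component, which a yaw-only billboard deliberately ignores.
print(billboard_yaw((1.0, 1.2, 0.0), (0.0, 1.7, 2.0)))

Because the rotation is recomputed from the trainee's current head pose at run time, the author's own height and standing position during authoring have no influence on the readability of the panel during training.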
6.2 Training

Regarding training, we aimed to propose an intuitive AR training methodology that relies on familiar visual assets and implicit interaction techniques, to help novice workers effortlessly follow the AR work instructions and perform the manual assembly in the least intrusive manner. As with the authoring, the elaboration of the proposed AR training methodology presented in this section prioritized human-centered paradigms. Even though novice workers do not generally have decision-making power at the organizational level, providing them with a complex and difficult application would most probably be detrimental from the user acceptance perspective, not necessarily because of user preferences, but rather because of the potential impossibility of using the system effectively in a short time span. It is important to note that in assembly environments similar to the one where our case study was conducted, learning the operation sequence of certain workstations might only require hours, or even less. It would then not make sense to provide an alternative to traditional training unless the learning curve of the new approach is extremely fast.

6.2.1 Information Representation of Assembly Operations in AR

The visual representation of an assembly operation presented during the AR training depends on the content authoring presented in the previous section, the procedure where the shop floor expert captures his assembly expertise in a WYSIWYG manner by applying the 2W1H principle. Consequently, any assembly operation will be represented by the three visual elements of the 2W1H principle:
• (What) A text instruction, briefly describing the assembly operation,
• (Where) A location arrow, pointing to the real-world assembly location,
• (How) An FPV image or video, illustrating the assembly result or demonstrating the considered assembly operation (optional).
During training, any combination of these elements can be used to convey the assembly information. As the authoring is performed in a WYSIWYG manner, the corresponding visual elements are expected to be displayed in the same way as, or very similarly to, how they were presented to the shop floor expert during the authoring procedure. As the proposed information conveyance method was specifically designed to
accommodate novice workers in particular, all the media contents captured during authoring are used to convey the assembly information during training, as further described.

6.2.2 Information Conveyance Methodology and Principles

Regarding the information conveyance technique, the objective of our proposal was to apply the 2W1H principle as well, to replicate human-to-human assembly explanation. We note that the authoring and the information conveyance were intentionally coupled by the 2W1H and WYSIWYG principles, to allow a shop floor expert not only to create the AR instructions in-situ and on-the-fly, but also to visualize them as they will be presented to the novice worker during training. By adopting this strategy, we provide an implicit validation process of the AR instructions, literally part of the authoring procedure itself. We note that the proposed information conveyance methodology aims to address, in addition to requirements like information correctness and completeness, human considerations like cognitive load. As identified by Abraham and Annunziata [1], AR can reduce cognitive load by supplementing human labor with the right information at the right moment and in the right mode, without disturbing the user’s focus. This requirement is supported by Sahu et al. [120], who claimed that seamlessly displaying AR procedural information to the user is essential to minimize the cognitive load of workers. In addition, to provide an appropriate AR experience, the system should not display redundant augmentation media during the assembly procedure [72]. Further, we present the information conveyance principles adapted to the manual assembly context of the considered case study, with the following characteristics:
• the organization of the assembly is static,
• the worker is standing and required to move during the assembly procedure.
We exemplify the rationale of our approach by using the generic assembly explanation example provided in the case study. For a better understanding, a comparison between the classical human-to-human (H2H) explanation and the proposed AR information conveyance technique is provided.

What

H2H: First, the instructor “announces” (i.e., by voice) to the novice worker what the assembly operation is about, e.g.: “Next, we are going to grab the crosspiece”. The worker presumably asks for repetition if the audio information was not heard or assimilated.

AR: The transient nature of spoken text suggests that conveying this information in AR via audio is not suitable. Providing a UI where the worker could re-listen to the audio instruction would break the HCI principles, which suggest that presenting
this information in a non-transitory and semi-invasive manner is the most suitable, particularly for novice workers. Conveying this information via text displayed in front of the worker, as the very first instruction of an assembly operation, therefore seems to replicate the classical information conveyance as closely as possible while also addressing the transitory information effect and the signaling principle [83], to avoid the instruction being overlooked. It is important to note, however, that to accommodate workers who cannot read, this information should be conveyed by audio.

Where

H2H: The instructor indicates where the assembly operation takes place by moving and/or looking towards the corresponding assembly location, possibly pointing with his hand or finger, e.g.: “…from here”.

AR: Human cues for indicating real-world locations within reach, including body gestures and line of sight, are familiar and intuitive. Replicating these cues in AR is not obvious without a realistic virtual avatar that imitates human gestures, which represents an unreasonable approach for the considered research context. G. Lee et al. [72] demonstrated the benefit of using AR tips to improve spatial understanding, ranging from procedural step-by-step instructions to simple navigation inside a building. For localizing assembly locations in AR, it is common to place virtual icons or text labels on the physical object to be located [101]. If the augmentation is outside the field of view, it is helpful to provide interface cues that direct the user’s attention to the location of the assembly component [69]. Multiple techniques for localizing out-of-view objects in AR have been proposed in the literature, including the attention funnel [10] or virtual tunnel [53], 3D arrows and compass-like 2D arrows [125], and more complex systems like EyeSee360, proposed by Gruenefeld et al. [49]. Among these, the most appropriate approach, consistent with the considered authoring constraints and HCI principles, is the usage of predefined arrows (2D or 3D). Our analysis indicates that an arrow guidance-based technique for localizing out-of-view objects in AR suits the considered scenario for several reasons: (1) arrows are familiar visual cues, potentially easy to understand and follow by people without AR expertise in unfamiliar environments like AR; (2) visually, arrows are less intrusive and easier to integrate with other AR scene elements; and (3) from a technical perspective this approach does not represent a challenge. Therefore, the selected proposal for identifying the assembly location during training consists of two arrows. The first, called the directional arrow, directs the user towards the assembly location if it is not in the field of view. The second one is the location arrow and points to the real-world assembly location. As discussed in the previous section, the location arrow is placed by the author during the creation of the AR instruction, and it can represent an approximate location of the assembly. Only one of the two arrows is visible at any moment, the first being head registered
whereas the second is world registered. We note that the directional arrow is added automatically at the application level, based on the position and orientation of the user and the position of the location arrow; therefore, the author is not required to take any action during the creation of the AR work instruction.

How

H2H: The instructor demonstrates how the operation is executed by performing the assembly and possibly explaining it at the same time, e.g.: “…you do it like this...”.

AR: Considering cognitive aspects and the authoring cost, we argue that an FPV photo illustrating the final assembly result or an FPV video demonstration, depending on the complexity of the assembly operation, represents the best compromise for capturing and conveying complex assembly expertise via AR. We believe that there is a general preconception in the AR community that AR is mainly about world-registered 3D contents, overlooking the potential of familiar, easy-to-author data types like photos and video demonstrations. We should also bear in mind that an FPV video captures the assembly operation exactly as observed in the real-world demonstration, allowing one to execute operations and learn by imitating the actions visualized in the video. Literature findings regarding information conveyance and HCI principles persuade us that short video demonstration sequences, spatially registered (i.e., contextualized) next to the assembly location, are potentially the most suitable assembly information conveyance technique for most manual assembly operations similar to those of the considered manufacturing factory. We note that both the photo and the video demonstration are captured and spatially registered by the creator of the AR instructions during the authoring procedure, as detailed in the previous section. Further, we adopt natural interaction techniques: eye gaze for controlling the video playback, and head gaze for automatically rotating the video or image element so that it always faces the user during training. The video playback during training is controlled by eye tracking for three main reasons. Firstly, this implicit or natural video playback technique allows the user to start and stop the video without requiring a specific, deliberate action (e.g., button touch, voice command) as required by classical video playback interfaces. Secondly, depending on his skills, the trainee can “synchronize” his assembly work and, at the same time, align the viewpoint of the real-world assembly components with the ones presented in the instructional video for a better understanding, by effortlessly watching short, consecutive chunks of the instructional video. Finally, this playback technique potentially optimizes the assembly time as, ideally, the trainee watches the video only once, performing the assembly at the same time. This is in line with the results of Biard et al. [8], who demonstrated that users performed better when videos were automatically paused between steps. By using eye tracking, we allow the user to pause the video
whenever he feels the need to switch his attention from the video to the assembly itself, either for performing or for validating his assembly work. In other words, the user defines by himself, on the fly, the assembly steps of a video, simply by turning his attention towards and away from the video. The second natural interaction technique proposed in our training methodology is the automatic orientation of the assembly illustration (i.e., the image or the video demonstration) towards the trainee, based on his real-time head position and orientation. We propose this technique mainly to facilitate access to information during training, without requiring the trainee to move to certain locations only to improve his viewpoint. This feature is also useful when the author of the AR instructions and the trainee have a significant height difference, a detail observed during our field experiments conducted during the case study, in which case a good viewpoint might not even be attainable. By employing such techniques, we aim to address human-centered design principles while providing hands-free interaction and avoiding UI clutter. At the same time, we encourage the trainee to focus on the assembly process and not on the application usage. We note that the quality of the video can have an important impact on the comprehension of the assembly operations described in the video demonstration. The lighting conditions of the considered factory and the quality of the video recordings captured by Hololens devices (i.e., 1 and 2) using their default camera parameters were sufficient during our experiments for participants to easily comprehend and follow the video assembly demonstration.

6.2.3 Conveying an AR Work Instruction

In this section, we describe how an AR work instruction is conveyed during the training procedure by using a Hololens 2 application that implements the information conveyance methodology presented in the previous section. We focus on how the information is conveyed and how the trainee is supposed to interact with the visual elements during the training procedure. Figure 2 illustrates the user interface and the different visual elements of an AR work instruction conveyed during a training procedure. Note that Fig. 2d illustrates the usage of a CAD model replacing the location arrow. This represents an extension of the proposed system, which includes the usage of CAD models as a replacement for the location arrows and a complement to the other visual assets describing the assembly operation.

What: Each task starts by displaying a text instruction panel (Fig. 2a) in front of the user, between 0.6 and 0.7 m away. The instruction panel follows the user’s head for 1 s (head registration) and then stops (environment registration). This ensures that the text instruction is not overlooked by the user and, at the same time, that the panel does not visually interfere for longer than necessary. The sticking time of 1 second is adjusted for the considered assembly use case, based on the required movements of the user during the assembly procedure. The user hides the panel
Fig. 2 Example of how the assembly information is conveyed via AR. Instruction 1: “Grab 2 uprights”. (a) Text & indication arrow; (b) Location arrow & FPV photo; Instruction 2: “Place the first upright”. (c) Text & indication arrow; (d) FPV video & CAD model
by clicking a “hide” button or by using the voice command “hide”. Complementing the “hide” button with a voice command was necessary for cases when the panel is rendered behind the physical environment, unreachable by hand touch. The reverse holds for noisy assembly environments, where voice might not be suitable for interacting with the system [61]. Our use case thus confirms the need for multimodal interfaces discussed in Irawati et al. [60].

Where: The next step consists of identifying the assembly location, pointed at by a spatially registered arrow (Fig. 2b). If the assembly location is not in the FoV of the user, a screen-registered directional arrow (Fig. 2a, c) guides the user towards it. The behavior of the two arrows is detailed in the previous section, where the authoring procedure is described.

How: Once the assembly location is identified, the trainee should observe, in addition to the location arrow, the corresponding FPV photo of the final assembly or the video demonstration (Fig. 2d). Capturing these elements is optional during authoring; therefore, it is possible that they are not provided for simple assembly operations. If either of them exists, however, it should be visible (at least partially) in the same field of view as the corresponding assembly location, provided the authoring recommendations were carefully considered by the creator of the AR instructions. The trainee interacts with these visual elements and executes the corresponding assembly operation. The user visualizes the next/previous instruction by clicking the “next”/“previous” button or by using the corresponding voice command. The “help me” voice command brings the instruction panel in front of the user. We note that unlike G. A. Lee & Hoff [69], our proposal makes use of text and images, in addition to video and
indication arrows. We use predefined arrows as AR visual cues, spatially registered during the authoring by using instinctual interaction (cf. [91]). In our approach, the FPV instructional video is presented during the training experience exactly as captured in the authoring procedure.
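To summarize the run-time behaviours described in the two previous sections, the short Python sketch below illustrates, under simplifying assumptions, how eye gaze can implicitly control the video playback and how the directional and location arrows can be kept mutually exclusive. The classes and function names are our own illustrative stand-ins, not the actual application code.

class VideoPanel:
    def __init__(self) -> None:
        self.is_playing = False
    def play(self) -> None:
        self.is_playing = True
    def pause(self) -> None:
        self.is_playing = False

class Arrow:
    def __init__(self) -> None:
        self.visible = False

def update_video_playback(video: VideoPanel, gazing_at_video: bool) -> None:
    # Implicit playback control: the video plays only while the trainee looks
    # at it and pauses as soon as the gaze returns to the assembly itself.
    if gazing_at_video and not video.is_playing:
        video.play()
    elif not gazing_at_video and video.is_playing:
        video.pause()

def update_arrows(location_arrow_in_fov: bool,
                  directional_arrow: Arrow, location_arrow: Arrow) -> None:
    # Only one arrow is visible at any moment: the head-registered directional
    # arrow guides the trainee until the world-registered location arrow enters
    # the field of view, at which point it hands over to it.
    directional_arrow.visible = not location_arrow_in_fov
    location_arrow.visible = location_arrow_in_fov

Each frame, the application would call these two updates with the current gaze and field-of-view state, which is sufficient to reproduce the hands-free behaviour described above.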
References 1. Abraham, M., & Annunziata, M. (2017). Augmented reality is already improving worker performance. https://hbr.org/2017/03/augmented-reality-is-already-improving-worker-performance 2. Angrisani, L., Arpaia, P., Esposito, A., & Moccaldi, N. (2020). A wearable brain- computer interface instrument for augmented reality-based inspection in Industry 4.0. IEEE Transactions on Instrumentation and Measurement, 69(4), 1530–1539. https://doi. org/10.1109/TIM.2019.2914712 3. Arbeláez, J. C., Viganò, R., & Osorio-Gómez, G. (2019). Haptic Augmented Reality (HapticAR) for assembly guidance. International Journal on Interactive Design and Manufacturing, 13(2), 673–687. https://doi.org/10.1007/S12008-019-00532-3 4. Azuma, R. T. (1997). A survey of augmented reality. Presence: Teleoperators and Virtual Environments, 6, 355–385. http://www.cs.unc.edu/~azumaW: 5. Baird, K. M., & Barfield, W. (1999). Evaluating the effectiveness of augmented reality displays for a manual assembly task. Virtual Reality, 4(4), 250–259. https://doi.org/10.1007/ BF01421808 6. Bellalouna, F. (2020). Industrial use cases for augmented reality application. In 11th IEEE international conference on cognitive infocommunications, CogInfoCom 2020 – Proceedings, September, pp. 10–18. https://doi.org/10.1109/CogInfoCom50765.2020.9237882 7. Bhattacharya, B., & Winer, E. H. (2019). Augmented reality via expert demonstration authoring (AREDA). Computers in Industry, 105, 61–79. https://doi.org/10.1016/j. compind.2018.04.021 8. Biard, N., Cojean, S., & Jamet, E. (2018). Effects of segmentation and pacing on procedural learning by video. Computers in Human Behavior, 89, 411–417. https://doi.org/10.1016/j. chb.2017.12.002 9. Billinghurst, M., Kato, H., & Myojin, S. (2009). Advanced interaction techniques for augmented reality applications. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 5622 LNCS (Issue May 2014), 13–22. https://doi.org/10.1007/978-3-642-02771-0_2 10. Biocca, F., Tang, A., Owen, C., Xiao, F., & Lab, M. (2006). Attention funnel: Omnidirectional 3D cursor for mobile augmented reality platforms. In Proceedings of the SIGCHI conference on human factors in computing systems. https://doi.org/10.1145/1124772 11. Blattgerste, J., Renner, P., Strenge, B., & Pfeiffer, T. (2018). In-situ instructions exceed side-by-side instructions in augmented reality assisted assembly. In Proceedings of the 11th PErvasive technologies related to assistive environments conference, pp. 133–140. https:// doi.org/10.1145/3197768.3197778 12. Blattgerste, J., Strenge, B., Renner, P., Pfeiffer, T., & Essig, K. (2017). Comparing conventional and augmented reality instructions for manual assembly tasks. In ACM international conference proceeding series, Part F128530, pp. 75–82. https://doi.org/10.1145/3056540.3056547 13. Bosch, T., Könemann, R., De Cock, H., & Van Rhijn, G. (2017). The effects of projected versus display instructions on productivity, quality and workload in a simulated assembly task. In ACM international conference proceeding series, Part F1285, pp. 412–415. https:// doi.org/10.1145/3056540.3076189
14. Bottani, E., & Vignali, G. (2019). Augmented reality technology in the manufacturing industry: A review of the last decade. IISE Transactions, 51(3), 284–310. https://doi.org/10.108 0/24725854.2018.1493244 15. Breedveld, P. (1997). Observation, manipulation, and eye-hand coordination problems in minimally invasive surgery. In Proceedings of the XVI European annual conference on human decision making and manual control, pp. 9–11. http://citeseerx.ist.psu.edu/viewdoc/ summary?doi=10.1.1.497.4423 16. Cao, Y., Wang, T., Qian, X., Rao, P. S., Wadhawan, M., Huo, K., & Ramani, K. (2019). GhostAR: A time-space editor for embodied authoring of human-robot collaborative task with augmented reality. In Proceedings of the 32nd annual ACM symposium on user interface software and technology, pp. 521–534. https://doi.org/10.1145/3332165.3347902 17. Carroll, L., & Wiebe, E. N. (2004). Static versus dynamic presentation of procedural instruction: Investigating the efficacy of video-based delivery. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 48(7), 1059–1063. https://doi. org/10.1177/154193120404800712 18. Caudell, T. P., & Mizell, D. W. (1992). Augmented reality: An application of heads-up display technology to manual manufacturing processes (Vol. 2, pp. 659–669). https://doi. org/10.1109/HICSS.1992.183317 19. Caudell, T. P., & Mizell, D. W. (2003, February). Augmented reality: An application of headsup display technology to manual manufacturing processes (Vol. 2, pp. 659–669. https://doi. org/10.1109/hicss.1992.183317 20. Ceruti, A., Marzocca, P., Liverani, A., & Bil, C. (2019). Maintenance in aeronautics in an Industry 4.0 context: The role of Augmented Reality and Additive Manufacturing. Journal of Computational Design and Engineering, 6(4), 516–526. https://doi.org/10.1016/j. jcde.2019.02.001 21. Chang, M. M. L., Ong, S. K., & Nee, A. Y. C. (2017). AR-guided product disassembly for maintenance and remanufacturing. Procedia CIRP, 61, 299–304. https://doi.org/10.1016/j. procir.2016.11.194 22. Conway, S. (2019). The Total Economic Impact TM Of PTC Vuforia – Cost savings and business benefits enabled by Industrial Augmented Reality (Issue July). https://www.ptc.com/-/media/ Files/PDFs/Augmented-Reality/The-Total-Economic-Impact-of-PTC-Vuforia_2019.pdf 23. Courpasson, D., Clegg, F., & Clegg, S. (2012). Resisters at work: Generating productive resistance in the workplace. Organization Science, 23(3), 801–819. https://doi.org/10.1287/ orsc.1110.0657 24. Danielsson, O., Syberfeldt, A., Holm, M., & Wang, L. (2018). Operators perspective on augmented reality as a support tool in engine assembly. Procedia CIRP, 72, 45–50. https://doi. org/10.1016/j.procir.2018.03.153 25. de Souza Cardoso, L. F., Mariano, F. C. M. Q., & Zorzal, E. R. (2020). A survey of industrial augmented reality. Computers & Industrial Engineering, 139(November 2019), 106159. https://doi.org/10.1016/j.cie.2019.106159 26. Dey, A., Billinghurst, M., Lindeman, R. W., & Swan, J. E. (2018). A systematic review of 10 years of Augmented Reality usability studies: 2005 to 2014. Frontiers Robotics AI, 5(APR). https://doi.org/10.3389/frobt.2018.00037 27. Doshi, A., Smith, R. T., Thomas, B. H., & Bouras, C. (2017). Use of projector based augmented reality to improve manual spot-welding precision and accuracy for automotive manufacturing. International Journal of Advanced Manufacturing Technology, 89(5–8), 1279–1293. https://doi.org/10.1007/s00170-016-9164-5 28. 
Eckhoff, D., Sandor, C., Lins, C., Eck, U., Kalkofen, D., & Hein, A. (2018). TutAR: Augmented reality tutorials for hands-only procedures. In Proceedings – VRCAI 2018: 16th ACM SIGGRAPH international conference on virtual-reality continuum and its applications in industry, October 2020. https://doi.org/10.1145/3284398.3284399
29. Egger, J., & Masood, T. (2020). Augmented reality in support of intelligent manufacturing – A systematic literature review. Computers & Industrial Engineering, 140(February 2020), 106195. https://doi.org/10.1016/j.cie.2019.106195 30. Erkoyuncu, J. A., del Amo, I. F., Dalle Mura, M., Roy, R., & Dini, G. (2017). Improving efficiency of industrial maintenance with context aware adaptive authoring in augmented reality. CIRP Annals - Manufacturing Technology, 66(1), 465–468. https://doi.org/10.1016/j. cirp.2017.04.006 31. Fernández del Amo, I., Erkoyuncu, J. A., Roy, R., Palmarini, R., & Onoufriou, D. (2018). A systematic review of Augmented Reality content-related techniques for knowledge transfer in maintenance applications. Computers in Industry, 103, 47–71. https://doi.org/10.1016/j. compind.2018.08.007 32. Fiorentino, M., Uva, A. E., Gattullo, M., Debernardis, S., & Monno, G. (2014). Augmented reality on large screen for interactive maintenance instructions. Computers in Industry, 65(2), 270–278. https://doi.org/10.1016/j.compind.2013.11.004 33. Fischer, G. (2001). User modeling in human-computer interaction. User Modeling and User- Adapted Interaction, 11(1–2), 65–86. https://doi.org/10.1023/A:1011145532042 34. Fite-Georgel, P. (2011). Is there a reality in Industrial Augmented Reality? In 2011 10th IEEE international symposium on mixed and augmented reality, ISMAR 2011, pp. 201–210. https:// doi.org/10.1109/ISMAR.2011.6092387 35. Foley, D. J., vad Dam, A., Feiner K. S., & Hughes F. J. (2004). Computer graphics: Principles and practice. https://books.google.fr/books?id=-4ngT05gmAQC&printsec=frontcover&re dir_esc=y#v=onepage&q&f=false 36. Fournier-Viger, P., Nkambou, R., & Mephu Nguifo, E. (2009). Exploiting partial problem spaces learned from users’ interactions to provide key tutoring services in procedural and Ill-defined domains. Frontiers in Artificial Intelligence and Applications, 200(1), 383–390. https://doi.org/10.3233/978-1-60750-028-5-383 37. Fraga-Lamas, P., Fernández-Caramés, T. M., Blanco-Novoa, Ó., & Vilar-Montesinos, M. A. (2018). A review on industrial augmented reality systems for the Industry 4.0 shipyard. IEEE Access, 6, 13358–13375. https://doi.org/10.1109/ACCESS.2018.2808326 38. Friedrich, W. (2002). ARVIKA-augmented reality for development, production and service. In Proceedings – International symposium on mixed and augmented reality, ISMAR 2002, pp. 3–4. https://doi.org/10.1109/ISMAR.2002.1115059 39. Funk, M., Kosch, T., & Schmidt, A. (2016). Interactive worker assistance: Comparing the effects of in-situ projection, head-mounted displays, tablet, and paper instructions. In UbiComp 2016 – Proceedings of the 2016 ACM International joint conference on pervasive and ubiquitous computing, July 2018, pp. 934–939. https://doi.org/10.1145/2971648.2971706 40. Funk, M., Mayer, S., & Schmidt, A. (2015). Using in-situ projection to support cognitively impaired workers at the workplace. In ASSETS 2015 – Proceedings of the 17th international ACM SIGACCESS conference on computers and accessibility, pp. 185–192. https://doi. org/10.1145/2700648.2809853 41. Gattullo, M., Evangelista, A., Uva, A. E., Fiorentino, M., & Gabbard, J. (2020). What, how, and why are visual assets used in industrial augmented reality? A systematic review and classification in maintenance, assembly, and training (from 1997 to 2019). IEEE Transactions on Visualization and Computer Graphics, 2626(c), 1–1. https://doi.org/10.1109/ tvcg.2020.3014614 42. Gattullo, M., Scurati, G. W., Fiorentino, M., Uva, A. 
E., Ferrise, F., & Bordegoni, M. (2019). Towards augmented reality manuals for industry 4.0: A methodology. Robotics and Computer-Integrated Manufacturing, 56(October 2018), 276–286. https://doi.org/10.1016/j. rcim.2018.10.001 43. Gavish, N., Gutiérrez, T., Webel, S., Rodríguez, J., Peveri, M., Bockholt, U., & Tecchia, F. (2015). Evaluating virtual reality and augmented reality training for industrial maintenance and assembly tasks. Interactive Learning Environments, 23(6), 778–798. https://doi. org/10.1080/10494820.2013.815221
44. Geng, J., Song, X., Pan, Y., Tang, J., Liu, Y., Zhao, D., & Ma, Y. (2020). A systematic design method of adaptive augmented reality work instruction for complex industrial operations. Computers in Industry, 119, 103229. https://doi.org/10.1016/j.compind.2020.103229 45. Gonzalez, A. N. V., Kapalo, K., Koh, S., Sottilare, R., Garrity, P., & Laviola, J. J. (2019). A comparison of desktop and augmented reality scenario based training authoring tools. In 2019 IEEE conference on virtual reality and 3D user interfaces (VR), pp. 1199–1200. https:// doi.org/10.1109/VR.2019.8797973 46. Grimin, P., Haller, M., & Reinhold, S. (2002). AMIRE – Authoring mixed reality. In Augmented reality toolkit, The first IEEE international workshop. https://doi.org/10.1109/ ART.2002.1107008 47. Grote, G. (2004). Uncertainty management at the core of system design. Annual Reviews in Control, 28(2), 267–274. https://doi.org/10.1016/j.arcontrol.2004.03.001 48. Grote, G., & Weichbrodt, J. C. (2007). Uncertainty management through flexible routines in a high-risk uncertainty management through flexible routines in a high-risk organization Swiss Federal Institute of Technology (ETH Zürich) 2nd Annual Cambridge Conference on Regulation, Inspection. 49. Gruenefeld, U., Ennenga, D., Ali, A. El, Heuten, W., & Boll, S. (2017). EyeSee360: Designing a visualization technique for out-of-view objects in head-mounted augmented reality. In SUI 2017 – Proceedings of the 2017 Symposium on Spatial User Interaction, pp. 109–118. https:// doi.org/10.1145/3131277.3132175 50. Gupta, A., Fox, D., Curless, B., & Cohen, M. (2012). DuploTrack: A real-time system for authoring and guiding Duplo Block assembly. In Proceedings of the 25th annual ACM symposium on user interface software and technology. https://doi.org/10.1145/2380116 51. Ha, T., & Woo, W. (2007). Graphical tangible user interface for a AR authoring tool in product design environment. In CEUR Workshop Proceedings, 260(June 2014). 52. Hahn, J., Ludwig, B., & Wolff, C. (2015). Augmented reality-based training of the PCB assembly process. ACM International Conference Proceeding Series, 30-Novembe(Mum), 395–399. https://doi.org/10.1145/2836041.2841215 53. Hanson, R., Falkenström, W., & Miettinen, M. (2017). Augmented reality as a means of conveying picking information in kit preparation for mixed-model assembly. Computers & Industrial Engineering, 113(August), 570–575. https://doi.org/10.1016/j.cie.2017.09.048 54. Haringer, M., & Regenbrecht, H. T. (2002). A pragmatic approach to augmented reality authoring. In Proceedings – International symposium on mixed and augmented reality, ISMAR 2002, pp. 237–246. https://doi.org/10.1109/ISMAR.2002.1115093 55. Haug, A. (2015). Work instruction quality in industrial management. International Journal of Industrial Ergonomics, 50, 170–177. https://doi.org/10.1016/j.ergon.2015.09.015 56. Henderson, S., & Feiner, S. (2011). Exploring the benefits of augmented reality documentation for maintenance and repair. IEEE Transactions on Visualization and Computer Graphics, 17(10), 1355–1368. https://doi.org/10.1109/TVCG.2010.245 57. Henderson, S. J., & Feiner, S. (2009). Evaluating the benefits of augmented reality for task localization in maintenance of an armored personnel carrier turret. In Science and technology proceedings – IEEE 2009 international symposium on mixed and augmented reality, ISMAR 2009, pp. 135–144. https://doi.org/10.1109/ISMAR.2009.5336486 58. Hodaie, Z., Haladjian, J., & Bruegge, B. (2019). 
ISAR: An authoring system for interactive tabletops. In ISS 2019 – Proceedings of the 2019 ACM international conference on interactive surfaces and spaces, pp. 355–360. https://doi.org/10.1145/3343055.3360751 59. Holm, M., Danielsson, O., Syberfeldt, A., Moore, P., & Wang, L. (2017). Adaptive instructions to novice shop-floor operators using Augmented Reality. Journal of Industrial and Production Engineering, 34(5), 362–374. https://doi.org/10.1080/21681015.2017.1320592 60. Irawati, S., Green, S., Billinghurst, M., Duenser, A., & Ko, H. (2006). An evaluation of an augmented reality multimodal interface using speech and paddle gestures. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence
and Lecture Notes in Bioinformatics), 4282 LNCS(January), 272–283. https://doi. org/10.1007/11941354_28 61. Jasche, F., Hofmann, S., & Ludwig, T. (2021, May 6). Comparison of diferent types of augmented reality visualizations for instructions. In Conference on human factors in computing systems – Proceedings. https://doi.org/10.1145/3411764.3445724 62. Jo, G., Oh, K., Ha, I., Lee, K., Hong, M., Neumann, U., & You, S. (2014, July). A unified framework for augmented reality and knowledge-based systems in a unified framework for augmented reality and knowledge-based systems in maintaining aircraft. 63. Kim, K., Billinghurst, M., Bruder, G., Duh, H. B. L., & Welch, G. F. (2018). Revisiting trends in augmented reality research: A review of the 2nd Decade of ISMAR (2008-2017). IEEE Transactions on Visualization and Computer Graphics, 24(11), 2947–2962. https://doi. org/10.1109/TVCG.2018.2868591 64. Knöpfle, C., Weidenhausen, J., Chauvigné, L., & Stock, I. (2005). Template based authoring for AR based service scenarios. In Proceedings – IEEE virtual reality, pp. 237–240. https:// doi.org/10.1109/vr.2005.1492779 65. Koumaditis, K., Venckute, S., Jensen, F. S., & Chinello, F. (2019). Immersive training: Outcomes from small scale AR/VR pilot-studies. In 26th IEEE conference on virtual reality and 3D user interfaces, VR 2019 – Proceedings, 2019-Janua(March 2020), pp. 1894–1898. https://doi.org/10.1109/VR44988.2019.9044162 66. Lai, Z. H., Tao, W., Leu, M. C., & Yin, Z. (2020). Smart augmented reality instructional system for mechanical assembly towards worker-centered intelligent manufacturing. Journal of Manufacturing Systems, 55(July 2019), 69–81. https://doi.org/10.1016/j.jmsy.2020.02.010 67. Lavric, T., Bricard, E., Preda, M., & Zaharia, T. (2021a). An industry-adapted AR training method for manual assembly operations. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 13095 LNCS, 282–304. https://doi.org/10.1007/978-3-030-90963-5_22 68. Lavric, T., Bricard, E., Preda, M., & Zaharia, T. (2021b). Exploring low-cost visual assets for conveying assembly instructions in AR. In 2021 international conference on INnovations in Intelligent SysTems and Applications, INISTA 2021 – Proceedings. https://doi.org/10.1109/ INISTA52262.2021.9548570 69. Lee, G. A., & Hoff, W. (2020). Enhancing first-person view task instruction videos with augmented reality cues (pp. 666–676). https://doi.org/10.1109/ISMAR50242.2020.00078 70. Lee, G. A., & Kim, G. J. (2009). Immersive authoring of Tangible Augmented Reality content: A user study. Journal of Visual Languages and Computing, 20(2), 61–79. https://doi. org/10.1016/j.jvlc.2008.07.001 71. Lee, G. A., Kim, G. J., & Billinghurst, M. (2005). Immersive authoring: What you experience is what you get (WYXIWYG). Communications of the ACM, 48(7), 76–81. http://portal.acm. org/citation.cfm?doid=1070838.1070840 72. Lee, G., Ahn, S., Hoff, W., & Billinghurst, M. (2019). AR Tips: Augmented first-person view task instruction videos. In 2019 IEEE international symposium on mixed and augmented reality adjunct (ISMAR-Adjunct), pp. 34–36. https://doi.org/10.1109/ISMAR-Adjunct.2019.00024 73. Li, D., Mattsson, S., Salunkhe, O., Fast-Berglund, A., Skoogh, A., & Broberg, J. (2018). Effects of information content in work instructions for operator performance. Procedia Manufacturing, 25, 628–635. https://doi.org/10.1016/j.promfg.2018.06.092 74. Li, W., Agrawala, M., & Salesin, D. (2004). 
Interactive image-based exploded view diagrams. In Proceedings – Graphics interface, June 2014, pp. 203–212. 75. Lindberg, C. F., Tan, S., Yan, J., & Starfelt, F. (2015). Key performance indicators improve industrial performance. Energy Procedia, 75, 1785–1790. https://doi.org/10.1016/J. EGYPRO.2015.07.474 76. Lines, B. C., Sullivan, K. T., Smithwick, J. B., & Mischung, J. (2015). Overcoming resistance to change in engineering and construction: Change management factors for owner organizations. International Journal of Project Management, 33(5), 1170–1179. https://doi. org/10.1016/j.ijproman.2015.01.008
77. Loch, F., Quint, F., & Brishtel, I. (2016). Comparing video and augmented reality assistance in manual assembly. In 2016 12th international conference on intelligent environments (IE), pp. 147–150. https://doi.org/10.1109/IE.2016.31 78. Lorenz, M., Knopp, S., & Klimant, P. (2018). Industrial augmented reality: Requirements for an augmented reality maintenance worker support system. In Adjunct proceedings – 2018 IEEE international symposium on mixed and augmented reality, ISMAR-Adjunct 2018, pp. 151–153. https://doi.org/10.1109/ISMAR-Adjunct.2018.00055 79. Markouzis, D., & Fessakis, G. (2015). Interactive storytelling and mobile augmented reality applications for learning and entertainment – A rapid prototyping perspective. In Proceedings of 2015 international conference on interactive mobile communication technologies and learning, IMCL 2015, pp. 4–8. https://doi.org/10.1109/IMCTL.2015.7359544 80. Marr, B. (2018). The amazing ways honeywell is using virtual and augmented reality to transfer skills to millennials. https://www.forbes.com/sites/bernardmarr/2018/03/07/the-amazing- ways-honeywell-is-using-virtual-and-augmented-reality-to-transfer-skills-to-millennials/?sh =2eb075c5536a 81. Martinetti, A., Marques, H. C., Singh, S., & van Dongen, L. (2019). Reflections on the limited pervasiveness of augmented reality in industrial sectors. Applied Sciences, 9(16), 3382. https://doi.org/10.3390/app9163382 82. Masood, T., & Egger, J. (2019). Augmented reality in support of Industry 4.0 – Implementation challenges and success factors. Robotics and Computer-Integrated Manufacturing, 58(March), 181–195. https://doi.org/10.1016/j.rcim.2019.02.003 83. Mayer, R. E., & Fiorella, L. (2014). Principles for reducing extraneous processing in multimedia learning: Coherence, signaling, redundancy, spatial contiguity, and temporal contiguity principles. In The Cambridge handbook of multimedia learning, second edition, pp. 279–315. https://doi.org/10.1017/CBO9781139547369.015 84. Mengoni, M., Ceccacci, S., Generosi, A., & Leopardi, A. (2018). Spatial augmented reality: An application for human work in smart manufacturing environment. Procedia Manufacturing, 17, 476–483. https://doi.org/10.1016/j.promfg.2018.10.072 85. Merino, L., Schwarzl, M., Kraus, M., Sedlmair, M., Schmalstieg, D., & Weiskopf, D. (2020). Evaluating mixed and augmented reality: A systematic literature review (2009–2019). https:// doi.org/10.1109/ISMAR50242.2020.00069 86. Microsoft HoloLens 2|AR Headset. (2021). https://www.insight.com/en_US/shop/partner/ microsoft/hardware/hololens.html 87. Microsoft Mixed Reality/AR Guides|Microsoft Dynamics 365. (2019). https://dynamics. microsoft.com/en-us/mixed-reality/guides/ 88. Microsoft MRTK v2.4.0. (2020). https://github.com/microsoft/MixedRealityToolkit-Unity/ releases/tag/v2.4.0 89. Miqueo, A., Torralba, M., & Yagüe-Fabra, J. A. (2020). Lean manual assembly 4.0: A systematic review. Applied Sciences (Switzerland), 10(23), 1–37. https://doi.org/10.3390/ app10238555 90. Mixed Reality direct manipulation with hands|Microsoft Docs. (2020). https://docs.microsoft.com/en-us/windows/mixed-reality/design/direct-manipulation#3d-object-manipulation 91. Mixed Reality instinctual interactions|Microsoft Docs. (2020). https://docs.microsoft.com/ en-us/windows/mixed-reality/design/interaction-fundamentals 92. Moghaddam, M., Wilson, N. C., Modestino, A. S., Jona, K., & Marsella, S. C. (2021). Exploring augmented reality for worker assistance versus training. Advanced Engineering Informatics, 50(April), 101410. 
https://doi.org/10.1016/j.aei.2021.101410 93. Mohr, P., Kerbl, B., Donoser, M., Schmalstieg, D., & Kalkofen, D. (2015). Retargeting technical documentation to augmented reality. In Proceedings of the 33rd annual ACM conference on human factors in computing systems – CHI ’15, 2015-April, pp. 3337–3346. https:// doi.org/10.1145/2702123.2702490 94. Mohr, P., Mandl, D., Tatzgern, M., Veas, E., Schmalstieg, D., & Kalkofen, D. (2017). Retargeting video tutorials showing tools with surface contact to augmented reality. In
Marius Preda received an engineering degree (Image Processing and Artificial Intelligence) from the University POLITEHNICA of Bucharest in 1998, a Ph.D. degree in mathematics and informatics from University Paris V, Paris, in 2002, and an eMBA degree from the IMT Business School, Paris, in 2015. He is currently an Associate Professor with Institut Polytechnique de Paris and the Convenor of the 3D Graphics and Haptics Coding Group of ISO MPEG. He has contributed to several international standards with technologies in the fields of 3D graphics, virtual worlds, and augmented reality.
Traian Lavric received an engineering degree in Computer Science (Electronics) from the Faculty of Electrical Engineering and Computer Science of Brasov, a Master's degree in Embedded Systems from the same university in 2011, and a doctoral degree in Computer Science (Industrial Augmented Reality) from Institut Polytechnique de Paris (IP Paris) in 2022. Traian works as a Research Engineer at Institut Mines-Télécom, where he was co-editor of the MPEG ARAF standard, lead programmer of the ARAF browser, and creator of several AR applications. His activities are focused on interactive enriched multimedia content and augmented reality applications.
Digital Twin Standards, Open Source, and Best Practices JaeSeung Song and Franck Le Gall
Abstract The 4th Industrial Revolution is radically changing the existing industrial paradigm by digitally transforming everything, including life and business, using digital technology. Digital twin technology, which forms the core of this digital transformation, provides benefits such as optimal operation and cost reduction by reproducing the real world in a virtual space and analyzing, controlling, and simulating it. Digital twins are now being applied to all industries, starting with manufacturing and extending to cities, energy, agriculture, and healthcare. As digital twins, which started in manufacturing, spread to all areas of industry, interoperability is emerging as an important issue. To provide interoperability in digital twins, various global standards development organizations (SDOs) have come up with specifications related to the digital twin. For example, the 3rd Generation Partnership Project (3GPP) is developing standards for 5G, which can provide the high-speed and reliable communication functions required for digital twins, and the Open Geospatial Consortium (OGC) develops standards for the geospatial information used in smart cities and other domains. ISO/TC 184 covers industrial data standards used in various domains such as manufacturing, industrial automation, and information systems. Furthermore, oneM2M, a global initiative to standardize a service-layer IoT platform, defines common service functions for digital twins. In this chapter, in particular, ISO/TC 184 for industrial data in the smart factory field, ISO/IEEE for digital health data, IEC TC65 for interoperability in the smart factory, and oneM2M for common service functions for digital twin services are introduced. In addition, open-source activities in the domain of digital twins are gradually expanding, in particular for digital twin platforms and data management. Finally, we look at several best practices of digital twins around the world.
J. Song (*) Sejong University, Seoul, South Korea e-mail: [email protected] F. Le Gall EGM, Valbonne, France © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_18
1 Overview
A digital twin refers to a technology or platform that predicts situations that may occur in reality by creating a virtual world, a digital twin, that is identical to the physical world. The basic concept of the digital twin emerged in 2002, when Dr. Michael Grieves of the University of Michigan used the term "conceptual ideal for product lifecycle management" to explain the concept that "the real space and the virtual space are connected through the flow of data". In 2010, NASA used digital twins in its space exploration technology roadmap and technology developments, thus introducing and spreading the idea in space exploration systems. Later, General Electric (GE) combined the various related concepts into one using the term "digital twin". According to the Top Ten Strategic Technology Trends for 2019 presented by Gartner, "a digital twin refers to the digital representation of an entity, process, or system existing in the real world." The focus was especially placed on digital twins in the Internet of Things (IoT) environment, explaining that corporate decision-making can be improved by providing information on the more effective performance of products, data on new products, and information on improved efficiency based on the maintenance and reliability information of many things. Recently, digital twin technology has been introduced in various industries and sectors, such as manufacturing, education, shipbuilding, civil engineering, and city maintenance and planning, which has spurred the further growth of the technology [1–4]. The digital twin, which works in conjunction with a variety of fields and technologies, can be defined in various ways. In fact, different companies define the digital twin slightly differently:
–– General Electric (multinational conglomerate): "Digital Twin is most commonly defined as a software representation of a physical asset, system or process designed to detect, prevent, predict, and optimize through real time analytics to deliver business value."
–– Gartner (global research and advisory firm): "A digital twin is a digital representation of a real-world entity or system."
–– Wikipedia (online encyclopedia): "A digital twin is a virtual representation that serves as the real-time digital counterpart of a physical object or process."
–– IBM (computer and cloud company): "A digital twin is a virtual representation of an object or system that spans its lifecycle, is updated from real-time data, and uses simulation, machine learning and reasoning to help decision-making."
Furthermore, slightly different definitions are given to a digital twin depending on the industrial sector to which it is applied. For example, in the smart city sector, a digital twin can be defined as an urban management technology that collects data on urban problems and provides solutions by reflecting real-world information in real time in a virtual space model that simulates the real space. In the manufacturing sector, a digital twin is defined as a technology that
simulates the manufacturing resources of the physical world in a virtual space and provides the optimal result by performing various simulations required for design, production, and maintenance/repair of products in the virtual space linked to the physical world in real-time. Because many industries that need the digital twin technology aim for large- scale and optimized production, interoperability between production technologies and factories scattered around the world is essentially required. Furthermore, as software and machines of various manufacturers are used, their compatibility must be also ensured. The standardization of digital twin technologies is required to achieve interoperability and compatibility at the global level, and many international standardization organizations are working on the standardization of digital twins. Because the definition and purpose of digital twins vary among industry domains, many international standardization organizations are developing the digital twin standards for their areas of interest. The standards development organizations (SDOs) that are currently developing standards related to digital twins include the European Telecommunications Standards Institute (ETSI),1 International Electrotechnical Commission (IEC),2 Internet Engineering Task Force (IETF),3 Industrial Internet Consortium (IIC),4 International Organization for Standardization (ISO),5 Institute of Electrical and Electronics Engineers (IEEE),6 oneM2M,7 Open Geospatial Consortium (OGC),8 Plattform Industrie 4.0,9 and World Wide Web Consortium (W3C).10 For example, the Institute of Electrical and Electronics Engineers (IEEE) focuses on smart health, ISO focuses on industrial automation, IIC focuses on accelerating the development and adoption of IoT technology, and oneM2M develops standards for common service functions required by smart factories. This chapter describes the digital twin–related standards in ISO/TC 184, ISO/IEEE, IEC, oneM2M, and ETSI’s NGSI-LD among various standards. Furthermore, in recent years, some organizations have been operating open-source communities in connection with developer ecosystems as well as developing standards, and are providing open-source codes along with open standards. Therefore, this chapter also examines open-source activities related to digital twins.
1 European Telecommunications Standards Institute (ETSI): https://etsi.org
2 International Electrotechnical Commission (IEC): https://global.ihs.com
3 Internet Engineering Task Force (IETF): https://www.ietf.org
4 Industrial Internet Consortium (IIC): https://www.iiconsortium.org
5 International Organization for Standardization (ISO): https://www.iso.org
6 Institute of Electrical and Electronics Engineers (IEEE): https://www.ieee.org
7 oneM2M: https://onem2m.org
8 Open Geospatial Consortium (OGC): https://www.ogc.org
9 Plattform Industrie 4.0: https://www.plattform-i40.de
10 World Wide Web Consortium (W3C): https://www.w3.org
2 ISO/TC 184 (Industrial Data)
ISO/TC 184, the Industrial Automation Systems and Integration committee, establishes international standards for the integration of the separate parts of manufacturing, that is, various technologies such as industrial automation and information systems, machinery and equipment, and electronic communication. Its subcommittee ISO TC 184/SC 4 (industrial data) establishes international standards such as the Standard for the Exchange of Product model data (STEP), used for exchanging product information between different industrial automation systems [5]. ISO TC 184/SC 4 is developing standards for industrial data in the general manufacturing industry, the shipbuilding and marine industry, nuclear power plants, etc. The standards for product data and quality quantification are essential resources for supporting smart manufacturing, and they can be used for such purposes as interoperability between systems, integration of automation systems, inspection and maintenance, and the distributed structure and review of smart manufacturing functions. The scope of standard development in ISO TC 184/SC 4 includes standards for the following industrial data related to products:
–– Geometric design data and tolerance data
–– Material and functional specifications
–– Product differentiation and configuration
–– Product information catalogs and libraries
–– Process and project data
–– Product production data
–– Product life cycle data (product characteristics related to product support, maintenance, and disposal)
Furthermore, business-related standards have been developed, including organization data such as relationships between companies or internal departments for supplier identification purposes, personal data for authorization and identification, and data related to processes and procedures that assure the quality of industrial data. The standards developed and managed by ISO TC 184/SC 4 include ISO 10303, ISO 15926, ISO 8000, and ISO 23247. ISO 10303 is a standard for exchanging design and production data in the manufacturing industry, and ISO 8000 is a standard for product data quality quantification. ISO 23247 is a standard for creating digital twins. So far, a total of 765 standards have been developed for the industrial data sector [6]. Next, we examine in depth the standard developed by ISO 23247 for the creation of digital twins.
2.1 ISO 23247
As cyber-physical system (CPS)/digital twin technologies have begun to be applied to manufacturing sites, the need for standardization of conceptual models, functional reference architectures, modeling, information exchange, etc., has been discussed. The digital twin manufacturing framework series of ISO (ISO 23247: Digital Twin manufacturing framework) is a standard for applying digital twin technology that represents physical objects, including physical assets, processes, and systems, on the computer in manufacturing [6]. The four parts of the ISO 23247 standard series are as follows:
• ISO 23247-1 [7], Digital Twin framework for manufacturing – Part 1: Overview and general principles (defines the overview, principles, and requirements of digital twins for manufacturing)
• ISO 23247-2 [8], Digital Twin framework for manufacturing – Part 2: Reference architecture (defines the reference architecture of digital twins for manufacturing)
• ISO 23247-3 [9], Digital Twin framework for manufacturing – Part 3: Digital representation of physical manufacturing elements (defines the digital representation method of manufacturing resource information, including products, processes, and resources)
• ISO 23247-4 [10], Digital Twin framework for manufacturing – Part 4: Information exchange (defines the method of exchanging information on digitally represented twins)
For observable manufacturing elements, digital representations include static and dynamic information. Information that does not change during the manufacturing process is classified as static information, and information that changes during the manufacturing process is classified as dynamic information. The key members of the group include Boeing, STEP Tools, and the National Institute of Standards and Technology (NIST) in the United States; Airbus and Schneider Electric in France; Sandvik and KTH Royal Institute of Technology in Sweden; the China National Institute of Standardization (CNIS) in China; and the Electronics and Telecommunications Research Institute (ETRI) in South Korea. Let us take a closer look at the four standards being developed under ISO 23247.
2.2 ISO 23247-1
ISO 23247-1 [7] defines the overview, scope, principles, and requirements of digital twins for manufacturing (see Fig. 1). As shown in Fig. 1, the basic concept of digital twins for manufacturing can be described as follows:
Fig. 1 Concept of digital twin for manufacturing
• Observable manufacturing elements: observable manufacturing elements such as personnel, equipment, materials, processes, facilities, environment, products, and relevant documents;
• Data collection and device control: data are collected from manufacturing elements, or manufacturing elements are controlled;
• Digital twin: manufacturing resources, including not only the external shape but also attributes such as functions, are represented digitally, and the data collection and device control are linked to synchronize between the manufacturing elements and the corresponding digital twins in the same state;
• Application of digital twin: real-time control, off-line analytics, health check of manufacturing resources, predictive maintenance, etc.;
• Benefits of digital twin: real-time planning based on simulations, validation and manufacturing process coordination, risk management based on real-time monitoring and prediction, predictive maintenance, cost reduction, etc.
For these, ISO 23247-1 defines requirements from perspectives such as data acquisition from manufacturing resources, digital representation of manufacturing resources, data analysis, management, synchronization, data storage, and simulation. Furthermore, as requirements for digital twin modeling, it defines the fidelity of the model, extensibility that supports the integration/addition/enhancement of digital models, interoperability with different types of digital models, and a variety of granularity.
2.3 ISO 23247-2
ISO 23247-2 [8] provides a reference architecture of digital twins for manufacturing, which is constructed, as shown in Fig. 2, by expanding the reference architecture of IoT, on which digital twins are based.
Fig. 2 Outline of Digital Twin reference architecture for manufacturing (the ISO 23247-1 requirements (principles, general, modelling, and information exchange) are considered by the ISO 23247-2 reference models (domain-based and entity-based), which are described by the architectural views: the functional view in ISO 23247-2, the information view in ISO 23247-3, and the networking view in ISO 23247-4)
Figure 3 shows the reference architecture of digital twins for manufacturing from the functional perspective.
Fig. 3 Reference architecture of digital twin for manufacturing (functional perspective), showing the user entity, the core entity (with operation and management, application and service, and resource access and interchange sub-entities), the data collection and device control entity (with data collection and device control sub-entities), the cross-system entity, and the observable manufacturing elements
The entities of a digital twin and the functional entities (FEs) of these entities in the reference architecture are defined as follows:
• Observable manufacturing elements (manufacturing resources): characteristic functional elements of resources (e.g., drilling)
• Data collection and device control elements (consisting of elements for data collection and device control):
–– Elements of a digital twin system such as the data collection function, data pre-processing function, manufacturing resource control function, manufacturing resource operating function, and the identification function for identifying manufacturing resources (consisting of sub-system elements for operation and management, application and service, and resource access and exchange)
–– A digital modeling function for analyzing the physical characteristic information of manufacturing resources, a function for representing real objects as digital objects using the analyzed information, a synchronization function for maintaining the real objects and the digital twins in the same state, operation and management support functions, etc.
• Data analysis function, application and service support function, simulation function, reporting function, etc.: interoperation support function, access control function, plug-and-play support function, peer interface function for linking with other digital twin system elements, etc.
• Digital twin user elements: user interface function, etc.
• Cross-system elements: information exchange function, a function that ensures the accuracy and integrity of data, information protection function, etc.
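As a purely illustrative structural sketch, the entity/functional-entity decomposition above could be mirrored in code as a simple composition of entities and their FEs. The entity and FE names below are taken from Fig. 3; the grouping is approximate, and the data structures themselves are not part of ISO 23247-2.

```python
# Illustrative structural sketch only: entity and FE names come from Fig. 3,
# the grouping is approximate, and the data structures are not taken from
# the ISO 23247-2 text.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entity:
    name: str
    functional_entities: List[str] = field(default_factory=list)

digital_twin_framework = [
    Entity("User Entity", ["User Interface FE"]),
    Entity("Core Entity", [
        "Digital Modeling FE", "Synchronization FE", "Simulation FE",
        "Analytic Service FE", "Reporting FE", "Application Support FE",
    ]),
    Entity("Data Collection and Device Control Entity", [
        "Data Collecting FE", "Data Pre-Processing FE",
        "Controlling FE", "Actuation FE", "Identification FE",
    ]),
    Entity("Cross-System Entity", [
        "Data Translation FE", "Data Assurance FE", "Security Support FE",
    ]),
]

for entity in digital_twin_framework:
    print(entity.name, "->", ", ".join(entity.functional_entities))
```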
2.4 ISO 23247-3 ISO 23247-3 [9] defines the information for digitally representing manufacturing elements, including products, processes, and resources. The information attributes required for this are provided in Table 1, but they can be expanded depending on the use case. IEC 62264-2 was referenced to compose Table 1. For each manufacturing element, Table 1 can be used to define the information attributes. Among these attributes, the definitions of information attributes for personnel are provided in Table 2.
Table 1 Information attributes for manufacturing elements
Information | Description | Mandatory (M), Optional (O)
Identifier | Value used to uniquely identify a manufacturing element | M
Characteristics | Ordinary and important features of a manufacturing element | M
Schedule | Time information for directly/indirectly deploying the manufacturing element in the manufacturing process | M
Status | The situation of a manufacturing element involved in a manufacturing process | M
Location | Geographic and relative location information of a manufacturing element | M
Report | Description of activity performed by a manufacturing element | M
Relationship | Relationship information between two or more manufacturing elements | M
Table 2 Information model for personnel
Information | Description | Example
Identifier | ID assigned to a person | Employee ID: 11223
Characteristics | Attributes of the employee such as skill level and job classification, e.g., skill level (1: master, 2: journeyman, 3: apprentice) and job classification (1: researcher, 2: administrator, 3: technician, 4: driver) | Skill level: 2; Job classification: 3
Schedule | Personal working schedule, e.g., working, day-off | Work schedule: 8:00 am to 5:00 pm
Status | Working status of the person | On duty/on break
Location | Location of the person | Operator #1: Work Unit #3 of Work Center #2
Report | Description of the work activity of the person | May 14th, 2020 (Thursday: 8 hours of work)
Relationship | Collaboration relationship among personnel and other manufacturing elements | Operator #1 is the supervisor of operator #2; at least 4 persons must work at Work Unit #3 for safety reasons
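To make the mapping in Table 2 concrete, the following sketch encodes such a personnel element as a simple data structure. The class name, field names, and helper are illustrative choices; ISO 23247-3 defines the information attributes, not this particular representation.

```python
# Illustrative sketch: only the attribute list mirrors Table 2; the class
# name, field names, and example values are invented for illustration.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PersonnelElement:
    """Digital representation of a 'personnel' observable manufacturing
    element, following the information attributes of Table 2."""
    identifier: str                   # e.g., employee ID
    characteristics: Dict[str, int]   # skill level, job classification, ...
    schedule: str                     # personal working schedule
    status: str                       # working status
    location: str                     # current work unit / work center
    report: str                       # description of performed activity
    relationships: List[str] = field(default_factory=list)

# Example instance mirroring the example values of Table 2
operator_1 = PersonnelElement(
    identifier="11223",
    characteristics={"skill_level": 2, "job_classification": 3},
    schedule="8:00 am to 5:00 pm",
    status="on duty",
    location="Work Unit #3 of Work Center #2",
    report="May 14th, 2020: 8 hours of work",
    relationships=["supervisor of operator #2"],
)
print(operator_1)
```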
2.5 ISO 23247-4
ISO 23247-4 [10] defines a method for exchanging information in a digital twin. The interfaces for information exchange are defined using the following four elements of the ISO 23247-2 digital twin reference architecture, as shown in Fig. 4: the user entity, the core entity, the inner interface of the core entity, and the data collection and device control entity. The requirements for the resulting networks are described as follows:
Fig. 4 Networking view of Digital Twin reference architecture for manufacturing
• User network (user entity - core entity)
–– Visualization function for showing the manufacturing resources in digital forms to the user
–– Data exchange provided in a standardized method
–– Transaction function, including PULL, PUSH, and PUBLISH
–– Information protection function, etc.
• Service network (inner interface of core entity)
–– Information exchange function related to the digital twin services provided by the core entity, such as situation reproduction, monitoring, and simulation
–– Digital representation for manufacturing elements defined in ISO 23247-3, etc.
• Access network (core entity - data collection and device control entity)
–– Connection support and real-time communication functions for communication with manufacturing elements
–– Standardized method of data exchange
–– Manufacturing element identification function
–– Manufacturing element control information and synchronization information delivery function
–– Transaction function, including PULL, PUSH, and PUBLISH
–– Function for maintaining the connection with the digital twin when the network location of the data collection and device control entity (DCDCE) changes
–– Information protection function, etc.
• Proximity network (data collection and device control entity - observable manufacturing element)
–– Connection support function through a local network, such as industrial Ethernet
–– Functions for supporting data pre-processing and protocol conversion, etc.
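ISO 23247-4 does not mandate a particular protocol for these networks, so the following is only a minimal, protocol-agnostic sketch of how PULL, PUSH, and PUBLISH style transactions between a data collection and device control entity, the core entity, and a user entity might look; all class, element, and attribute names are invented for illustration.

```python
# Toy, protocol-agnostic sketch of the PULL / PUSH / PUBLISH transaction
# styles named above. All class and element names are invented; ISO 23247-4
# defines the networks and requirements, not this code.
from collections import defaultdict
from typing import Any, Callable, Dict, List

class CoreEntity:
    """Keeps the digital model in sync with data pushed by a DCDCE."""

    def __init__(self) -> None:
        self.twin_state: Dict[str, Dict[str, Any]] = {}
        self.subscribers: Dict[str, List[Callable]] = defaultdict(list)

    def push(self, element_id: str, data: Dict[str, Any]) -> None:
        """PUSH: a data collection and device control entity sends new data."""
        self.twin_state.setdefault(element_id, {}).update(data)
        for notify in self.subscribers[element_id]:
            notify(element_id, self.twin_state[element_id])  # PUBLISH

    def pull(self, element_id: str) -> Dict[str, Any]:
        """PULL: a user entity queries the current state of a twin."""
        return self.twin_state.get(element_id, {})

    def subscribe(self, element_id: str, callback: Callable) -> None:
        self.subscribers[element_id].append(callback)

core = CoreEntity()
core.subscribe("milling-machine-7",
               lambda eid, state: print(f"update for {eid}: {state}"))
# The DCDCE pushes a fresh reading; subscribers are notified immediately
core.push("milling-machine-7", {"spindle_speed_rpm": 4200, "status": "running"})
print(core.pull("milling-machine-7"))
```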
3 ISO/IEEE for Smart Health
The ISO/IEEE 11073 standards were jointly developed as part of a cooperation between the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers Standards Association (IEEE-SA). For reference, ISO is an independent non-governmental international organization with members from 165 national standards organizations. Market-related international standards are developed based on voluntary participation and consensus, whereby experts share knowledge, support innovation, and provide solutions to global challenges. IEEE-SA is the organization responsible for the development and certification of global technical standards within IEEE. IEEE-SA develops and supports projects related to standards and test certifications in a variety of fields, including IoT technologies. In particular, IEEE-SA endeavors to lead global standards through various technical standards and global cooperation for IoT, 5G, edge computing, etc., which are attracting attention as major technologies that will lead the Fourth Industrial Revolution. In this section, we examine ISO/IEEE 11073, the standards related to the digital twin technology of smart health developed jointly by ISO and IEEE [11, 12].
3.1 ISO/IEEE 11073
The ISO/IEEE 11073 Personal Health Device (PHD) standards were introduced in 2008 to facilitate communication between personal health devices and managers such as computer systems and smartphones. ISO/IEEE 11073 aims to provide plug-and-play, real-time interoperability and, at the same time, promote the exchange of status data. The 11073 standards provide standardized solutions for personal health systems adopted by both the research communities and industry. Recently, as the interest in and utilization of smart health and wearable devices have increased, the number of research and utilization cases using the 11073 standardization system has also been increasing. For example, health-related systems in smart homes can be developed using the ISO/IEEE 11073 standards. Such systems can provide functions for detecting a person's movement and injury due to a fall, remote monitoring of the user's health parameters, etc. [13–15]. Table 3 lists some of the smart health–related standards developed in the ISO/IEEE 11073 series.
Table 3 Smart Health–related Standards Developed Jointly by ISO and IEEE
# | Spec. number | Title
1 | 11073-10404-2008 | Device specialization – pulse oximeter
2 | 11073-10406-2011 | Device specialization – blood pressure monitor
3 | 11073-10407-2008 | Device specialization – thermometer
4 | 11073-10415-2008 | Device specialization – weighing scale
5 | 11073-00103-2012 | Guide for Health informatics – personal health device communication – Overview
6 | 11073-20601-2016 | Application profile – optimized exchange protocol
7 | P11073-10426 | Home Healthcare Environment Ventilator (HHEV)
8 | P11073-10428 | Electronic stethoscope
9 | P11073-10471a | Independent living activity hub
10 | P11073-10472a | Medication monitor
3.2 Healthcare Systems Based on ISO/IEEE 11073 Standards for Digital Twins
The explosive growth of wearable technology and the availability of personal health devices show the potential for providing digital twin–based healthcare services in the near future. Standards-based interoperability between healthcare devices and systems can expand efficient healthcare services and reduce the technical complexity of integrating various healthcare devices. The ISO/IEEE 11073 standards can meet these requirements.
11073 Standards-Based Personal Health Devices
Pulse oximeters developed according to the 11073 standards can be used for well-being applications in mobile Android-based systems. 11073-based blood pressure monitoring devices can be used to monitor elderly people who are vulnerable to diseases at home. Furthermore, the 11073 standards may be used to build various healthcare devices, such as weighing scales, blood glucose meters, body temperature sensors, electrocardiogram (ECG) monitors, thermometers, and shoe soles, that can monitor human bodies to construct digital patients that represent real patients.
11073 Standards Used in Healthcare Systems
The 11073 standards can be applied not only to personal devices used to digitize human body information, but also to the design of healthcare systems that can use this information in medical treatment through cloud systems. For example, digital twin health systems developed with the 11073 standards can be designed to use a variety of medical content obtained through the mobile phones of patients. This information can help
identify the patient's profile and determine the patient's symptoms that are difficult to check with the naked eye. If the information of patients is digitized in 11073 standards-based digital twin healthcare systems, physicians can select the best treatment by checking the information of the "digital patient" most similar to their respective patients and easily perform simulations in advance to predict which treatments will be effective and which ones should be avoided. In other words, medical digital twin technology helps physicians and patients respond quickly to diseases. Furthermore, a variety of data compatible with the standards can be integrated and used on a digital twin platform to select data-based treatment methods, through which effective treatments can be facilitated. In conventional treatment methods, the symptoms of a patient are partially determined, based on which diagnosis and treatment are performed. This makes it difficult to comprehensively identify the patient's detailed health conditions and relatively less pronounced symptoms, which are difficult to observe with the naked eye. Therefore, in the case of diseases that have only small precursor symptoms, such as heart failure, it is difficult to identify the symptoms in advance and take appropriate actions. In the case of a "digital patient" based on the data of a digital twin–based healthcare platform, both the overall body conditions and small, subtle symptoms can be integrated and checked, enabling accurate diagnosis in the virtual world. In particular, through the virtual body of the "digital patient", which can be constructed using 11073-based digital healthcare devices, physicians can quickly diagnose abnormalities of the body and predict and provide appropriate treatments in advance through the digital data by comprehensively identifying kidney function problems, heart size, irregular heart rate, changes in blood sugar level, etc.
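As a small, hedged illustration of how observations from the 11073 device specializations in Table 3 might be aggregated into such a "digital patient", the sketch below uses plain data structures; the field names, metric names, and example values are invented for illustration and are not taken from the 11073 specifications.

```python
# Illustrative only: aggregating personal health device observations into a
# toy "digital patient" record. The device_specialization strings refer to
# the specializations listed in Table 3; everything else is invented.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class PhdObservation:
    """One measurement reported by a personal health device."""
    device_specialization: str
    metric: str
    value: float
    unit: str
    timestamp: datetime

@dataclass
class DigitalPatient:
    """Toy 'digital patient' record aggregating device observations."""
    patient_id: str
    observations: List[PhdObservation] = field(default_factory=list)

    def add(self, obs: PhdObservation) -> None:
        self.observations.append(obs)

    def latest(self, metric: str) -> PhdObservation:
        # Return the most recent observation for the requested metric
        return max((o for o in self.observations if o.metric == metric),
                   key=lambda o: o.timestamp)

patient = DigitalPatient(patient_id="patient-001")
patient.add(PhdObservation("11073-10404", "spo2", 97.0, "%",
                           datetime(2021, 3, 1, 9, 30)))
patient.add(PhdObservation("11073-10415", "body_weight", 81.5, "kg",
                           datetime(2021, 3, 1, 9, 35)))
print(patient.latest("spo2"))
```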
4 IEC TC65
IEC TC65 has established many standards related to industrial automation over the past 20 years and is known as an organization that plays a vital role in the development of smart manufacturing standards. IEC TC65 is developing standards related to industrial process measurement, control, and automation, with an interest in standardizing the elemental technologies for the implementation of smart factories, as well as standards for process control devices, process control valves, etc. [16]. IEC TC65 took note of the Reference Architectural Model for Industry 4.0 (RAMI 4.0) [17], a national Industry 4.0 standard of Germany, which was developed as a reference model for smart manufacturing. In the process of developing numerous standards related to smart manufacturing, gaps were discovered, and the reference model RAMI 4.0 was developed to address them in a consistent way. As shown in Fig. 5, RAMI 4.0 has a box-shaped structure with three axes representing the three perspectives that are essentially required in Industry 4.0.
Fig. 5 RAMI 4.0 layered architecture (axes: Layers – Business, Functional, Information, Communication, Integration, Asset; Life Cycle & Value Stream per IEC 62890 – Development and Maintenance/Usage for the Type, Production and Maintenance/Usage for the Instance; Hierarchy Levels per IEC 62264/IEC 61512 – Connected World, Enterprise, Work Centers, Station, Control Device, Field Device, Product)
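Positioning an element in RAMI 4.0 can be thought of as giving it a coordinate on each of the three axes shown in Fig. 5. The sketch below captures that idea with a trivial record; the axis values are taken from the figure, while the example asset and its chosen position are invented for illustration.

```python
# Illustrative only: a trivial record for a RAMI 4.0 "coordinate". The axis
# values come from Fig. 5; the example asset and its position are invented.
from dataclasses import dataclass

LAYERS = ["Asset", "Integration", "Communication",
          "Information", "Functional", "Business"]
LIFE_CYCLE = ["Development (Type)", "Maintenance/Usage (Type)",
              "Production (Instance)", "Maintenance/Usage (Instance)"]
HIERARCHY = ["Product", "Field Device", "Control Device", "Station",
             "Work Centers", "Enterprise", "Connected World"]

@dataclass
class Rami40Position:
    layer: str
    life_cycle: str
    hierarchy: str

    def is_valid(self) -> bool:
        return (self.layer in LAYERS and self.life_cycle in LIFE_CYCLE
                and self.hierarchy in HIERARCHY)

# e.g., the communication aspect of a drive that is installed and running
drive_position = Rami40Position(layer="Communication",
                                life_cycle="Production (Instance)",
                                hierarchy="Field Device")
print(drive_position, drive_position.is_valid())
```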
IEC TC 65 and ISO's TC 184 (Automation Systems and Integration) have formed a Joint Working Group (JWG21) to develop Industry 4.0 standards, and the JWG21 established RAMI 4.0 as a major reference model of Industry 4.0 and an international standard [18]. RAMI 4.0 was successfully established in 2017 as a standard in IEC PAS 63088, and research on its analysis and application is underway in the US, Japan, and China, among other countries. IEC TC 65 comprises four subcommittees (SCs) and multiple working groups (WGs) for the development of standards. The following describes the standards developed by the IEC TC 65 SCs:
• SC65A: standards for industrial process measurement and control systems
• SC65B: standards for industrial process measurement and control devices
• SC65C: standards for digital data communication for industrial process measurement and control
• SC65E: standards for the digital representation of characteristic functions, methods, and application programs of devices
The following describes the standards being developed by the WGs of IEC TC 65 that are related to digital twins:
• WG10: IEC 62443 Part 1–4 standards, based on the United States ISA99 standards, for security in the manufacturing sector
• WG16: IEC 62832 standards for constructing virtual production systems
• WG17: technical standards related to system interfaces between industrial facilities and smart grids
Furthermore, IEC TC65 establishes technical standards to ensure interoperability between smart factory components. The major standards include OPC-UA, IEC 62443, and IEC 62832. IEC TC65 also operates ad-hoc groups. In particular, an ad-hoc
group, AHG 3, and its four working groups have been formed for smart manufacturing terminology, use cases, standards status, and smart manufacturing coordination, and they are discussing the concept and architecture of smart manufacturing. AHG 3 (Smart Manufacturing Framework and System Architecture) standardizes the reference framework and system architecture for smart manufacturing, collects and defines representative cases of smart manufacturing, derives smart manufacturing requirements, and discusses how the standards can be used in the various stages of the life cycle. The roles of the four working groups (task forces, TFs) operated by AHG 3 are as follows:
• AHG 3 TF: Smart Manufacturing Coordination. Reviews and coordinates current issues within IEC TC65 and with external standardization groups related to smart manufacturing, such as ISO
• AHG 3 TF: Smart Manufacturing Terms and Definitions. Compiles typical and essential terms and definitions in the smart manufacturing sector for the further development of IEC TC65 standards
• AHG 3 TF: Use Cases. Compiles typical and required use cases in the smart manufacturing domain for the further development of IEC TC 65 standards
• AHG 3 TF: Standards Landscape. Will be operated until the JWG 21 is actively used; it aims to integrate and improve the standards status information
Other major standards that have been developed by IEC TC 65 are as follows:
• Related to functional safety and functional safety networks: IEC 61508 (standards for functional safety) [19] and ISO 26262 (standards that adapt the IEC 61508 safety requirements to road vehicles) [20]
• Related to high-availability networks: IEC 62439 (standards for high availability of a network) [21]
• Related to real-time networks: IEC 61158 (standards for real-time communication)
• OPC UA (OLE for Process Control Unified Architecture) standards: OPC is a standard interface that enables various applications to send and receive data from various types of process control equipment (DCS, PLC, etc.); see the brief client sketch at the end of this section
• Digital factory (IEC 62832) standards: these allow a factory's physical production system to be modeled as a virtual production system in the information system, and they cover technologies used for various purposes, such as preliminary analysis of the physical production system, analysis of problems during production, and post-analysis of improvements
Real-time automation networks, commonly referred to as Fieldbus, are used in the industrial environment, and IEC TC65/SC65C develops the following network standards for these:
• Industrial communication standard technologies are developed in the real-time (MT9, IEC 61158), profile (MT9, IEC 61784-2), functional safety communication (WG12, IEC 61784-3), and high-availability communication (WG15, IEC 62439) fields
• Initially, IEC TC65/SC65C tried to integrate them in a single Fieldbus technology, but considering various factory environments, Fieldbus and real-time industrial Ethernet standards are defined by region and company
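Since OPC UA, mentioned in the list above, is the interface through which much of this process data is read in practice, a minimal client-side read is sketched below. It assumes the community python-opcua package and an OPC UA server reachable at the given endpoint; the endpoint URL and node identifier are placeholders, not values from any particular product or from the IEC standards themselves.

```python
# Minimal OPC UA read, assuming the community python-opcua package
# (pip install opcua) and a reachable server; URL and node id are placeholders.
from opcua import Client

ENDPOINT = "opc.tcp://192.168.0.10:4840"  # hypothetical PLC endpoint
NODE_ID = "ns=2;i=2"                      # hypothetical process-value node

client = Client(ENDPOINT)
client.connect()
try:
    node = client.get_node(NODE_ID)
    value = node.get_value()              # read the current process value
    print(f"current value of {NODE_ID}: {value}")
finally:
    client.disconnect()
```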
5 oneM2M
An IoT platform is an essential element of the digital twin technology that implements the physical objects and data existing in the real world identically in the virtual world. The data collected from machines or sensors in the major domains of digital twins, such as smart manufacturing, smart factories, smart cities, and smart health, are managed and stored through IoT platforms. In particular, a standards-based IoT platform is one of the most basic elements for successful digital twins in an environment where devices produced by different manufacturers are interconnected. oneM2M [22] is an international standardization organization founded to address the fragmented IoT market with horizontal, globally applicable services, and it develops standards for the common functions of IoT service-layer platforms. It was co-founded by eight SDOs around the world: ETSI in Europe, TTA in South Korea, TIA and ATIS in the US, CCSA in China, ARIB/TTC in Japan, and TSDSI in India. About 200 member companies around the world, including telecommunication companies, equipment manufacturers, and service companies, are actively involved, along with the eight Partner Type 1 SDOs. Since the first meeting in August 2012, they have been developing standards for the basic functions of IoT platforms, the interworking functions between the IoT protocols and other sectors' communication protocol standards, management functions for data and things, etc. In the case of oneM2M Release 3 (the Release 3 specifications are available at https://www.onem2m.org/technical/published-specifications/release-3), standards were developed for interoperability between the oneM2M platforms and the OSGi, OPC-UA, Modbus, and Web of Things technologies used in various industrial domains; edge computing that supports real-time mission-oriented services; and testing and certification. The industrial protocols that provide interoperability in oneM2M are also standards used in digital twins. Furthermore, the values of data produced by devices that operate in the real world using various protocols can be accurately displayed in the virtual space provided by the digital twins. Therefore, this section examines the interoperability with various industrial protocols developed by oneM2M.
Proximity IoT Interoperability
Due to the characteristics of a platform that supports the RESTful architecture, oneM2M develops standards for methods of interworking devices that were developed with various network protocols to a oneM2M platform and converting them into resources. For example, if Z-Wave and Zigbee devices used in a smart manufacturing digital twin exist, oneM2M provides the resources and APIs that represent those devices in the oneM2M platform to facilitate the use of the data collected from
those devices and enable the control when necessary. The proximity IoT interoperability technology defines not only the method of using non-oneM2M devices and services in the oneM2M systems but also the method of using oneM2M devices and services in non-oneM2M devices. Based on this, the digital twin services and applications can manage the data of various sensors and devices, obtain the location information, and use real-time updated information through the oneM2M platform. 3GPP Rel-13/14 Network Interoperability In digital twins, a network is an important medium that connects the real world and the virtual world. Particularly, in the case of digital twins for things with mobility, a mobile communication network such as 3GPP should be used. The 3GPP network interoperability function provided by oneM2M has been continually updated after its development early during the development of oneM2M standards. oneM2M Release 2 defined the device triggering and network traffic configuration functions that interwork with 3GPP Rel-12. In Rel-3, standards were developed for additional functions that control mobile communication devices used in various digital twins, such as device triggering recall, UE monitoring, background data transfer, and node schedule management. Interoperability with a Variety of Industrial Domain Technologies The industrial sectors closely related to digital twins have requirements that are stricter than those of other domains (e.g., smart home and smart office). For example, requirements such as error handling, assurance of real-time characteristics, and time data processing must be included in the smart services needed in industries such as smart factories, smart vehicles, and energy management. Therefore, the industries developed the services using network protocols suitable for the relevant existing domains. Zigbee, Modbus, and Data Distribution Service (DDS) are popular network protocols widely used in the industries. oneM2M provides a binding function to interwork with devices that use such separate standard protocols and device platforms. Furthermore, the compatibility is provided by having a data conversion table that allows conversion between the pertinent protocol and oneM2M message at a proxy node located in the middle. As a side note, oneM2M provides oneM2M primitive, a standard message format that defines all actions. Here, for the major protocols, such as HTTP, CoAP, and MQTT, the binding is supported by defining how the oneM2M primitive is matched to the message format of the pertinent protocol.
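As a hedged illustration of the resource-oriented style described above, the sketch below pushes a single sensor reading into a oneM2M CSE over the HTTP protocol binding using the Python requests package. The CSE address, originator name, and container path are placeholders; the header and payload shapes (X-M2M-Origin, X-M2M-RI, resource type ty=4 for a contentInstance) follow common oneM2M HTTP binding conventions, but a real deployment should be checked against its CSE's documentation.

```python
# Hedged sketch: pushing one sensor reading into a oneM2M CSE over the HTTP
# protocol binding. The CSE address, originator and container path below are
# placeholders; ty=4 marks a <contentInstance> resource.
import json
import uuid
import requests

# Hypothetical container under an application entity registered at the CSE
CSE_URL = "http://127.0.0.1:8080/cse-in/factory-ae/temperature"

headers = {
    "X-M2M-Origin": "CtwinApp",               # originator (AE) identifier
    "X-M2M-RI": str(uuid.uuid4()),            # unique request identifier
    "Content-Type": "application/json;ty=4",  # resource type 4 = contentInstance
}
payload = {"m2m:cin": {"con": json.dumps({"temperature_c": 72.5})}}

response = requests.post(CSE_URL, headers=headers, data=json.dumps(payload))
print(response.status_code, response.text)
```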
6 ETSI NGSI-LD
ETSI's Industry Specification Group (ISG) for cross-cutting Context Information Management (CIM; https://www.etsi.org/committee/cim) has developed the NGSI-LD API (GS CIM 004 and GS CIM 009). The main objective of ISG CIM is to create technical specifications and reports
to enable multiple organizations to develop interoperable software implementations of a cross-cutting Context Information Management (CIM) layer. The standards work is designed to bridge the gap between abstract standards and concrete implementations [23]. The ISG CIM has developed an API called NGSI-LD (see https://en.wikipedia.org/wiki/NGSI-LD) to enable applications to update, manage, and access context information from many different sources, as well as to publish that information through interoperable data publication platforms. It provides advanced geo-temporal queries, and it includes subscription mechanisms so that content consumers can be notified when content matching some constraints becomes available. The API is designed to be agnostic to the architecture (central, distributed, federated, or combinations thereof) of the applications that share information. NGSI-LD uses the Linked Data paradigm to represent information in a property graph made up of nodes (vertices) connected by directed links, where both nodes and arcs may have multiple optional attached properties (i.e., attributes). It allows the definition of a cross-domain information model through which information can be ingested and exchanged across heterogeneous domain-specific models. In 2020, ETSI ISG CIM initiated a work item aimed at identifying the various (historical) definitions, types, and characteristics of Digital Twins (e.g., in areas of representing human actions, in health/biological areas, for smart manufacturing, etc.) to consider the usage of the NGSI-LD information model and API for realising such systems. The document shows to what extent various Digital Twin types can be realized or facilitated by NGSI-LD and identifies new features for NGSI-LD that would make it more useful for such areas of usage. It advocates the use of NGSI-LD property graphs as holistic digital twins, maintaining multi-level and multi-scale descriptions of complete IoT/CPS environments, such as cities, buildings, or factories. The nodes of the proposed multilevel graph stand for connected devices and non-connected things, physical entities and systems of all kinds that make up these environments, at different scales. The arcs of the graph represent relationships between these entities, which capture the physical structure of a system. They can capture top-down and bottom-up system composition relationships, or transversal connectors (like cables, pipes, etc.) in a distributed network-like system. A further level of description captures distributed or loosely coupled "systems of systems" as "graphs of graphs", i.e., graphs whose "hypernodes" encapsulate other graphs. The NGSI-LD specification is one of the few in use within the open-source ecosystem for Digital Twins.
13 https://en.wikipedia.org/wiki/NGSI-LD
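To make the property-graph model concrete, the following Python sketch creates an NGSI-LD entity with one Property and one Relationship through the standardized /ngsi-ld/v1/entities endpoint and then queries it back. The broker URL, entity identifiers, and attribute names are illustrative placeholders rather than values mandated by the specification.

```python
import requests

BROKER = "http://broker.example.com:1026"   # hypothetical NGSI-LD context broker

entity = {
    "id": "urn:ngsi-ld:Bus:bus-017",                      # node of the property graph
    "type": "Bus",
    "currentSpeed": {"type": "Property", "value": 37.2, "unitCode": "KMH"},
    "operatedBy": {                                       # directed arc to another entity
        "type": "Relationship",
        "object": "urn:ngsi-ld:TransportOperator:op-01",
    },
    "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld",
}

# Create the entity.
resp = requests.post(
    f"{BROKER}/ngsi-ld/v1/entities",
    json=entity,
    headers={"Content-Type": "application/ld+json"},
    timeout=10,
)
print(resp.status_code)

# A consumer could later retrieve all Bus entities faster than 30 km/h.
query = requests.get(
    f"{BROKER}/ngsi-ld/v1/entities",
    params={"type": "Bus", "q": "currentSpeed>30"},
    headers={"Accept": "application/ld+json"},
    timeout=10,
)
print(query.json())
```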
7 Open-Source Activities

Digital twin–related organizations such as IIC, Platform Industrie 4.0, and the Open Industry 4.0 Alliance are striving to create digital twin ecosystems through open-source and promotional activities as well as the development of standards. These digital twin–related open-source activities include the Open Manufacturing Platform (OMP),14 the Digital Twin Consortium (DTC),15 the Smart Manufacturing Innovation Institute (CESMII),16 and the Industrial Digital Twin Association (IDTA).17 Recently, these open-source communities have been developing not only open-source code but also the corresponding open-source standard specifications. In other words, problems identified during development, along with new functions, are reflected in the standards documents of the pertinent open-source projects, and the new functions reflected in those open-source standards are in turn applied to the open source to accelerate commercialization. In this way, the ecosystems are created. This section examines these digital twin–related open-source organizations and their activities.

• Open Manufacturing Platform (OMP)
The OMP is a project of the Joint Development Foundation of the Linux Foundation, established by Microsoft and BMW in 2019. In the manufacturing platform sector, it is the only initiative set up as an open-source project from the beginning, and its goal is to "accelerate innovation of industrial IoT to shorten time-to-value". The OMP aims both at disclosing best practices and at encouraging open-code development. Its open industrial IoT platform is an open manufacturing process platform created on the basis of the Microsoft Azure Industrial IoT cloud platform. It provides community members with industrial standards, common data models, and reference architectures that have open-source components based on open data models and open manufacturing standards. It is designed to eliminate complex and proprietary manufacturing processes and to support the acceleration of innovation at scale through access to new technologies, inter-industry collaboration, and knowledge- and data-sharing between companies. Furthermore, standardization can be achieved beyond the data models managed by conventional proprietary systems by allowing machines to learn on their own, which is expected to contribute to solving problems at various manufacturing sites by shortening manufacturing processes and improving production efficiency. The manufacturing sector has been especially focused on the development of digital twins. Automation and robotic systems are widely used in production lines, and many factories have adopted IoT technology and used digital information to optimize production performance.
14 Open Manufacturing Platform (OMP). https://open-manufacturing.org/
15 Digital Twin Consortium (DTC). https://www.digitaltwinconsortium.org/
16 Smart Manufacturing Innovation Institute (CESMII). https://cesmii.org/
17 Industrial Digital Twin Association (IDTA). https://idtwin.org/
The OMP does not only focus on digital twin technology; it also runs a working group (WG) on semantic data architecture. This WG addresses "the need to share, combine, and reuse heterogeneous data in the manufacturing sector by providing common semantics to various stakeholders through comprehensive semantic data homogenization and delivering manufacturing data along with contextual information". The WG is developing semantic data models that help in understanding IoT connections and machine data and in explaining the relationships and dependencies of machine data. Such a semantic data architecture improves the total value chain and enables AI-based business models at scale. Digital twins are primarily based on the sensor IoT data collected from actual physical systems. OMP technology based on industrial IoT data management is also used to synchronize interoperable digital twins through a cloud-based approach for data collection and integration. The OMP GitHub repository that provides public publications, specifications, and source code is: https://github.com/OpenManufacturingPlatform

• Microsoft Digital Twin Description Language (DTDL)
Microsoft has proposed the Digital Twin Description Language (DTDL),18 which is based on JSON-LD and is programming-language independent. DTDL is made up of a set of metamodel classes that are used to define the behavior of all digital twins (including devices). There are six metamodel classes that describe these behaviors: Interface, Telemetry, Property, Command, Relationship, and Component. In addition, DTDL provides a data description language that is compatible with many popular serialization formats, including JSON and binary serialization formats. DTDL provides semantic type annotations of behaviors, so that analytics, machine learning, UIs, and other computation can reason about the semantics of the data, not just its schema. In cooperation with Open and Agile Smart Cities (OASC), a mapping of DTDL to ETSI NGSI-LD has been defined for the smart-city context. It allows Digital Twin data to be consumed through the ETSI standardized interface. This is illustrated through city-relevant use cases such as air quality monitoring, understanding the noise level in a district, crowd flow in a road segment, traffic flow in a road segment, monitoring on-street parking spots, availability of EV charging, and monitoring streetlights to reduce energy consumption. A minimal DTDL interface sketch is shown after the footnote below.
18 https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md
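As a minimal sketch of what such an interface looks like, the following Python snippet builds a DTDL v2 interface as a dictionary and serializes it to JSON. The DTMI identifiers and element names are invented for illustration; only the metamodel keywords (Interface, Telemetry, Property, Command, Relationship) come from the DTDL v2 language description linked in the footnote above.

```python
import json

# Illustrative DTDL v2 interface for a simple room twin (identifiers are made up).
room_interface = {
    "@context": "dtmi:dtdl:context;2",
    "@id": "dtmi:example:Room;1",
    "@type": "Interface",
    "displayName": "Room",
    "contents": [
        {"@type": "Telemetry", "name": "temperature", "schema": "double"},
        {"@type": "Property", "name": "maxTemperature", "schema": "double", "writable": True},
        {"@type": "Command", "name": "setTargetTemperature"},
        {
            "@type": "Relationship",
            "name": "isPartOf",
            "target": "dtmi:example:Building;1",
        },
    ],
}

print(json.dumps(room_interface, indent=2))
```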
• Digital Twin Consortium (DTC)
The Digital Twin Consortium (DTC), launched in May 2020, is an organization co-founded by Microsoft, one of the world's leading digital twin solution companies, in cooperation with other companies, including Dell, ANSYS, and Lendlease, based on open partnerships. The DTC was established under the Object Management Group (OMG), a standards organization that supports the development of open standards. Although the DTC is not itself a standards organization, it provides guidelines for managing inconsistent developments in the digital twin–related markets, interoperability problems, and difficulties arising when implementing digital twins in industrial and commercial environments. The DTC's industry working groups focus on vertical markets and currently consist of Manufacturing, Infrastructure, Healthcare & Life Science, Aviation & Defense, and Natural Resources groups. The DTC's horizontal working group covers the focus areas of marketing & technology, terminology, and taxonomy (3T). The 3T working group has additional sub-groups that focus on the open-source and platform stack, as well as security and trustworthiness. The main focuses of the DTC are the life cycle of digital twins and the various requirements of digital twins evolving through this process. The life cycle of digital twins includes digital threads and the physical entities or assets that they support. To accelerate the adoption and application of digital twin technologies by suppliers and end users, the DTC provides an open-source repository to which they can contribute various elements. The contributions of the open-source community include open-source code implementations, guide and tutorial development collaboration documents, open-source models, and other assets that are valuable to the digital twin community. By maintaining openness between the industry-focused groups of different industries, the DTC provides an opportunity to collaborate on architecture, tools, and open-source contributions across industries.

• FIWARE
FIWARE was created with the ultimate goal of creating an open, sustainable ecosystem around public, royalty-free, and implementation-driven software platform standards that ease the development of smart solutions and support organizations in their transition into smart organizations. From a technical perspective, FIWARE brings a curated framework of open-source software components which can be assembled together and combined with other third-party platform components to build platforms easing the development of smart solutions and smart organizations in multiple application domains: cities, manufacturing, utilities, agri-food, etc. FIWARE is among the founding members of ETSI ISG CIM and uses NGSI-LD as the main interoperability interface across its components. FIWARE is a member of the Digital Twin Consortium, and any software architecture "powered by FIWARE" gravitates around the management of a Digital Twin data representation of the real world. It also shares with ETSI a working group on Digital Twins aimed at supporting implementation-driven evolutions of the NGSI-LD specification for Digital Twins. This Digital Twin data representation is built from information gathered from many different sources, including sensors, cameras, information systems, social networks, and end users through mobile devices.19

19 "FIWARE for Digital Twins", Position Paper, v1.0, June 2021, https://www.fiware.org/wp-content/uploads/FF_PositionPaper_FIWARE4DigitalTwins.pdf
It is constantly maintained and accessible in near real time ("right-time" is also often used, reflecting that the interval between the instants at which data is gathered and made accessible is short enough to allow a proper reaction). Applications constantly process and analyze this data (not only current values but also the history generated over time) in order to automate certain tasks or support smart decisions by end users. The collection of all Digital Twins modelling the real world that is managed is also referred to as Context, and the data associated with attributes of Digital Twins is referred to as context information. In FIWARE, a Digital Twin is an entity which digitally represents a real-world physical asset (e.g., a bus in a city, a milling machine in a factory) or a concept (e.g., a weather forecast, a product order). Each Digital Twin:
• is universally identified with a URI (Universal Resource Identifier), and belongs to a well-known type (e.g., the Bus type or the Room type) that is also universally identified by a URI, and
• is characterized by several attributes, which in turn are classified as:
–– properties holding data (e.g., the "current speed" of a Bus, or the "max temperature" in a Room), or
–– relationships, each holding a URI identifying another Digital Twin entity the given entity is linked to (e.g., the concrete Building where a concrete Room is located).

• Eclipse Ditto
Eclipse Ditto is an open-source software solution that implements the "digital twin" pattern for IoT development. Eclipse Ditto builds connection points between IoT devices and digital twins and provides various APIs for interaction with IoT devices, including WebSocket and HTTP APIs. Recently, connection functions to other systems based on MQTT, AMQP, and Apache Kafka have been added. Ditto supports the following three capabilities:
–– Device-as-a-service: a digital twin system must support interaction with heterogeneous IoT devices, various protocols, and communication patterns.
–– Status management for digital twins: the status related to the attributes and configuration of the IoT devices that compose the digital twin should be managed. The status of the digital twin includes the most recently transmitted information, the desired attribute values of the target device, and the current status.
–– Organize your set of digital twins: there are interactions among many devices in digital twins. To find and control these devices again in the future, metadata such as manufacturer, model number, and serial number are added to the basic information of the devices and managed.
Bosch IoT Things is an example of a commercial product based on Eclipse Ditto (GitHub webpage: https://github.com/eclipse/ditto). Bosch IoT Things is a fully managed cloud service built on the open-source core of the Eclipse Ditto project. The Things service is part of the Bosch IoT Suite portfolio and supports applications that manage digital twins of IoT devices.
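The following Python sketch shows what creating and updating such a twin ("thing") could look like against Ditto's HTTP API v2. The host, credentials, thing ID, and feature names are hypothetical, and a real deployment would also require a suitable policy and authentication setup.

```python
import requests

DITTO = "http://ditto.example.com:8080"          # hypothetical Ditto endpoint
AUTH = ("ditto", "ditto")                        # placeholder credentials
THING_ID = "org.example:milling-machine-01"

thing = {
    "attributes": {"manufacturer": "ExampleCorp", "serialNumber": "SN-0042"},
    "features": {
        "spindle": {"properties": {"rpm": 0, "temperature": 21.0}},
    },
}

# Create (or replace) the digital twin of the device.
r = requests.put(f"{DITTO}/api/2/things/{THING_ID}", json=thing, auth=AUTH, timeout=10)
print(r.status_code)

# Later, report a new telemetry value for one feature property.
r = requests.put(
    f"{DITTO}/api/2/things/{THING_ID}/features/spindle/properties/rpm",
    json=1450,
    auth=AUTH,
    timeout=10,
)
print(r.status_code)
```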
Fig. 6 Ditto Digital Twin architecture: within an IoT solution, Ditto manages digital twin states and exposes devices as a service, alongside components for device modeling (schemas, repository, tooling), device communication (protocols, credentials, messaging), device/gateway management, software provisioning, historical data, (stream) analytics, an identity provider, an integration framework, and custom applications
Ditto is especially useful in large IoT environments. In these environments, other important aspects of IoT solutions, such as device communication and data analysis, can be addressed by individual components working alongside Ditto. Figure 6 shows the architectural configuration of IoT solutions that include the functions provided by Ditto.

• iTwin.js20
An iModel is a specialized information container for exchanging data associated with the lifecycle of infrastructure assets. iModels are self-describing, geometrically precise, open, portable, and secure. iModels were created to facilitate the sharing and distribution of information regardless of its source and format. iModels encapsulate component information, including business properties, geometry, graphics, and relationships, in an open format, providing standard interfaces for business, engineering, construction, and operations applications from multiple vendors. iModels are an essential part of the digital twin world, but a digital twin means a lot more than just an iModel: it means a vast ocean of federated data sources, one of which will be an iModel. The iTwin.js open-source library provides a comprehensive set of APIs that can be used anywhere an iModel may be relevant. It is designed to be modular and extensible, with the expectation that iTwin.js will be used in environments with many other JavaScript frameworks. iTwin.js strives to be as consistent as possible with established JavaScript conventions.
20 https://www.itwinjs.org
• Industrial Digital Twin Association (IDTA)
The Industrial Digital Twin Association (IDTA) was founded in September 2020 by the Verband Deutscher Maschinen- und Anlagenbau (VDMA, the largest association in the mechanical engineering industry in Europe), the Zentralverband Elektrotechnik- und Elektronikindustrie (ZVEI, an association of electric and electronic product manufacturers in Germany), Bitkom (a digital association in Germany), and 20 companies, including ABB, Asentics, Bitkom, Bosch, Bosch Rexroth, Danfoss, Endress+Hauser, Festo, Homag, KUKA, Lenze, Pepperl+Fuchs, Phoenix Contact, SAP, Schneider Electric, Schunk, Siemens, Trumpf, Turck, Volkswagen, and Wittenstein. The goal of the IDTA is to take digital twins to the next level, prepare companies to use open-source development models, and provide a one-stop shop for the industrial digital twin, known in Industrie 4.0 as the Asset Administration Shell (AAS). Ultimately, it focuses on five areas to enable customers to operate the platform more efficiently: open technology, integration, quality management, training, and marketing. Let us take a closer look at these five areas. In open technology, the specifications needed to implement digital twins are provided, and an active open-source community is built to help users get started with AAS technology. In integration, long-term plans for harmonized sub-models for digital twins are established, and sub-models from the manufacturing and process industries are integrated. In quality management, a pre-competitive training curriculum is developed, and existing know-how for overall management and experts is secured. In training, a pre-competitive training curriculum is also developed, and existing know-how is provided in a way suitable for the target group. Finally, in marketing, AAS is disseminated as the core technology of digital twins, and international visibility of the IDTA and the digital twin is ensured. The existing open-source activities around AAS are supported under the IDTA, and building an open-source community based on these activities is one of the IDTA's important goals. The open-source projects that have implemented AAS are as follows:
–– admin-shell-io (https://github.com/admin-shell-io): This project hosts a viewer/editor for AAS (AASX Package Explorer), an AAS server, AAS specifications and schemas (aas-specs), training screencasts, and frequently asked questions (FAQ).
–– BaSyx (https://projects.eclipse.org/projects/technology.basyx): This project provides SDKs for AAS in addition to other modules.
–– PyI40AAS (https://git.rwth-aachen.de/acplt/pyi40aas): This project hosts Python modules for manipulating and validating AAS.
–– SAP AAS Service (https://github.com/SAP/i40-aas): This project hosts systems based on Docker images that implement the reference architecture of RAMI 4.0, including AAS.
–– NOVAAS (https://gitlab.com/gidouninova/novaas): This project implements the concept of AAS using JavaScript and a low-code development platform (LCDP), Node-RED.
• GAIA-X
The GAIA-X project was announced in the fall of 2019 and officially established in 2020. Led by Germany, GAIA-X was presented as a key execution strategy of the EU data strategy published in February 2020, with the goal of applying it not only in Germany but throughout Europe. A high level of interoperability and reliability is expected to be ensured by identifying and developing common requirements for European data infrastructures; on this basis, the use of data can be maximized and data ecosystems can be built for companies. There are seven basic principles of GAIA-X (see Fig. 7): European data protection, openness, trust, digital sovereignty, data value creation, interoperability, and user-friendliness. GAIA-X provides the infrastructure needed for the integration of data spaces, ensures secure data exchange, and facilitates seamless use of data across domains. In particular, the use of the digital twin in the health sector is specified as one of the use cases. In conventional systems, it is difficult to collect patient data because each hospital uses its own database. Furthermore, even when the data have been collected, analysis is time-consuming because the data have different formats and attributes. However, when digital twins are constructed using an infrastructure defined by GAIA-X, a patient's information is managed as an object in the virtual environment. As a result, data are easily collected to manage patients, and pharmaceutical companies and medical technology and manufacturing companies can receive analyses of the effectiveness of their products.
Fig. 7 GAIA-X use case
• CESMII
The Clean Energy Smart Manufacturing Innovation Institute (CESMII), a non-profit organization supported by the U.S. government, was founded in 2016 to advance smart manufacturing. CESMII has been branded as a research institute for clean energy, smart manufacturing, and innovation, with the goal of driving digital innovation to increase performance and quality, reduce costs, and save energy. CESMII is engaged in various activities to implement interoperability, such as collaboration, standardization, and open-source work, to increase the innovative business value of manufacturers. CESMII has implemented the Smart Manufacturing Innovation Platform (SMIP) to support interoperability between small and medium-sized enterprises and large manufacturers. This platform supports data collection, data contextualization, data management, and data integration, and through its marketplace it supports the smooth interoperability of manufacturing-oriented technologies, including detection, control, modeling, and analysis, irrespective of supplier. The core technology that enables the interoperability of the SMIP is the SM Profile. By adopting or replicating the SM Profile technology, suppliers can provide products and services to various customers through the CESMII SM Marketplace. The digital twin plays an important role in the SM Profile: it supports virtual models of actual products, processes, or services, which can monitor, analyze, and improve the platform's performance. Furthermore, it is being developed for a variety of uses, including comparable 3D software, to track the parameters required in the SM Profile and to process and visualize data collected from sensors and manufacturing models. The goal of the SM Profile is to provide open data models through a globally hosted cloud database. These data models can be used to implement digital twins, and code samples for the APIs are provided as open source through the CESMII GitHub (https://github.com/cesmii) so that anyone can use them (Fig. 8).
Fig. 8 CESMII SM Profile: a standards-based approach for industrial asset information models, enabling interoperability. Profiles are defined at the device level (sensor, driver, robot, ...), machine level (a collection of devices), and process level (a collection of machines, such as a line or supply chain); each captures the key attributes of a device, asset, or process in a structured manner, provides an abstraction and programming interface enabling the flow of data from disparate systems, is developed collaboratively by subject-matter experts with deep familiarity with the device, machine, or process, is captured in an open OPC standard accessible from the CESMII SM Marketplace, and provides a mechanism for software to automatically learn of changes and take appropriate actions
8 Best Practices Around the World (Table 4)

In this section, various cases from around the world that apply digital twin technology are examined.
8.1 General Electric (Fig. 9)

General Electric (GE) has built a remarkable factory to implement its "20/20 Vision", which aims to improve efficiency by 20% in the product development cycle and by 20% in the manufacturing/supply network, with the goal of establishing more than 500 smart factories around the world. It is a high-tech manufacturing concept for improving productivity by utilizing the latest technologies of the Fourth Industrial Revolution in various fields, such as aviation, railways, petrochemicals, and healthcare. A cloud platform called Predix21 was developed to integrate all software and facilities of the GE Group with the industrial Internet [24, 25]. On top of it, applications are built and GE's digital twins are implemented for efficient management of facilities, spanning from the hardware used for control to a variety of software that can collect and analyze data.

Table 4 Summary of digital twin technology application cases

Number | Title | Year | Domain | Country | Description
1 | General Electric | 2020 | Smart factory | USA | Data collection and analysis and efficient facility management are facilitated by applying digital twin technology to improve the factory's productivity.
2 | DHL & Tetra Pak | 2019 | Logistics | Germany & Sweden | The dependency on manual work is reduced, and the warehouse status and inventory system are managed digitally by systemizing data.
3 | Virtual Terminal | 2013 | Terminal | South Korea | Port container facility control without human intervention is digitized.
4 | FIWARE4Water | 2020 | Water distribution network | Europe | ETSI NGSI-LD is used as a data model and API to integrate the EPANET water distribution network simulator within a digital twin for operation management and forecasting.

21 GE Predix IoT platform. https://www.ge.com/digital/iiot-platform
Fig. 9 GE Predix IoT platform architecture
Predix is mainly linked with major applications for machine/equipment status, reliability management, maintenance and repair optimization, etc. GE Power's world-class Monitoring and Diagnostics (M&D) Center performs real-time monitoring of 5000 gas turbines, generators, and other equipment installed in about 900 power plants, which supply power to 350 million people in about 60 countries around the world. In 2017, an engineer detected an abnormality while monitoring the 200 billion data points sent from 1 million sensors to the cloud and edge computers every day, requested an on-site inspection, and found a problem with a turbine bearing before it failed. Detecting the abnormality in advance prevented a loss of about US$2 million. In particular, predictive power was enhanced by applying an asset performance management solution that notifies managers with advance warnings, as early as possible, of problems that might occur due to service disconnection.
8.2 DHL & Tetra Pak

In August 2019, DHL and the Swedish paper container manufacturer Tetra Pak jointly established a warehouse in Singapore that applies digital twin technology. The Singapore warehouse was one of Tetra Pak's largest warehouses in the world and the first warehouse implemented using digital twin technology by DHL in the Asia-Pacific region [26]. Tetra Pak adopted an IoT-based digital twin warehouse and constructed a system that monitors relevant information in real time (see Fig. 10). The digital twin warehouse enabled Tetra Pak to predict field situations and respond immediately, facilitating optimal decision-making. As a result, abnormal or unusual situations in restricted areas were identified automatically, and product storage security was strengthened. The temperature sensors were linked to the Internet, and temperature management efficiency in the cold-chain logistics was improved. In particular, operational efficiency was expected to increase by about 16% if the logistics equipment–related information is identified in real time and appropriate decisions are made for each piece of equipment. Until now, warehouses were not easy to control optimally because the dependency on manual work
Digital Twin Standards, Open Source, and Best Practices
525
Fig. 10 Smart warehouse from DHL & Tetra Pak. (Source: DHL)
was high and data were not systemized. However, this problem can be solved using the digital twin, and the warehouses incorporating related technologies can significantly improve corporate competitiveness.
8.3 Virtual Terminal

In South Korea, a high-efficiency port that minimizes human intervention was developed using digital twin technology. Virtual Terminal provides a real-time monitoring and control system for the port container terminal. It is an example of a digital twin application in the logistics sector, led by the South Korean company Rockwon IT, and it received a New Software Grand Award from the Ministry of Science and ICT in 2016. Virtual Terminal is a port control monitoring system that provides real-time visualization of the location and status information of a variety of equipment and vehicles on a 3D geographic information system (GIS) map. Virtual Terminal is a distinctive system that combines a 3D terminal and a CCTV solution to display the on-site situation in the port on 3D and CCTV screens simultaneously. It aims for an automated port terminal that minimizes human intervention while checking container and crane locations in real time.
Starting with the installation at the Busan New Port in 2013, Virtual Terminal has been evolving by continually adding technologies related to digital twins. Since then, it has been installed and operated at King Abdullah Port in Saudi Arabia and at Dubai's Jebel Ali Port Terminal 3, operated by DP World, in the United Arab Emirates, and it has been reported that port productivity improved by 65% as a result of its application.

8.4 Fiware4Water

Fiware4Water is a project funded by the European Commission which intends to link the water sector to the FIWARE ecosystem by demonstrating its capabilities and the potential of its interoperable and standardised interfaces for both water-sector end users (cities, water utilities, water authorities, citizens, and consumers) and solution providers (private utilities, SMEs, developers). By integrating OWA-EPANET 2.2, a simulator for pressurized water networks, with FIWARE, the project demonstrated a way of linking a tool for hydraulic and quality simulations of water distribution systems (WDS) with real-time data from multiple sources in a standardized manner; this removed challenges associated with the interoperability of different tools and data management systems and enabled an up-to-date digital twin (DT) to be maintained. Provision of a DT of a WDS then yields multiple benefits, including, for example, the ability to run simulations to determine pressure, flow and quality throughout the system in both forecast and hindcast scenarios to support decision making. A Water Network Management NGSI-LD data model has been developed to reflect the structure of an EPANET model,22 with individual entities including junctions, tanks, reservoirs, pipes, valves and pumps. In addition to these physical objects, entities are also defined for time patterns (used to vary demands, quality source strength and pump speed settings at fixed time intervals) and data curves (used to describe relationships between two quantities, such as head and flow for pumps, or volume and water level for tanks). Every entity has a unique Uniform Resource Identifier (URI), enabling it to be referenced. Each of these entities has specified properties and/or relationships, some of which are mandatory when defining an instance. For valves, for example, it is mandatory to provide an ID, valve type, diameter, setting, minor loss coefficient, initial status, start node and end node. These specifications broadly reflect the required and optional properties of each network component when defined in an EPANET model. Two example entities from the Water Network Management data model are illustrated in Fig. 11. Figure 11a shows a 'Junction' entity, complete with its associated attributes, as an example of a network node; 'Tanks' and 'Reservoirs' are defined similarly, with additional properties as required (e.g. tank diameter) and properties
22 https://github.com/smart-data-models/dataModel.WaterDistribution/tree/master
Fig. 11 Illustrative example of the properties, relationships and values used to define (a) a junction and (b) a pipe in the Water Network Management data model
such as ‘location’ and ‘elevation’ being common to all. Fig. 11b shows a ‘Pipe’ entity and its associated attributes, as an example of a network link. Included in this are the ‘startsAt’ and ‘endsAt’ relationships, which much connect to a ‘Junction’, Tank’ or ‘Reservoir’ entity, and indicate how the network components are configured and the direction in which flow measurements are reported. ‘Valve’ and ‘Pump’ entities are modelled similarly, with the same relationships, and additional properties as required to capture their characteristics.
9 Conclusions and Remarks

In this chapter, we examined the international standards activities that develop and establish standards for digital twins operating in conjunction with various fields and technologies. The standards are developed differently according to the goals and fields of each standards organization. Nevertheless, their ultimate goal is to ensure the interoperability of heterogeneous devices and to promote wider use of digital twin technologies by developing standards for the digital twin's reference models, devices, communication protocols, and data. In particular, this chapter examined the following standards organizations:
• ISO/TC 184 for industrial automation systems and integration
• ISO/IEEE for smart health
• IEC TC65 for industrial automation, including RAMI 4.0
• oneM2M for common service functions for Digital Twins
In recent years, rather than just working on the development of standards documents, which is the main work, the international standards organizations have been supporting or directly operating open-source communities so that developers can
use their standards more easily, in order to secure the ecosystem of the technology each is developing. Therefore, this chapter also examined open-source activities related to digital twins. Finally, actual application cases of various digital twins being developed around the world were examined.
References

1. Fuller, A., Fan, Z., Day, C., & Barlow, C. (2020). Digital twin: Enabling technologies, challenges and open research. IEEE Access, 8, 108952–108971. https://doi.org/10.1109/ACCESS.2020.2998358
2. He, B., & Bai, K. J. (2020). Digital twin-based sustainable intelligent manufacturing: A review. Advances in Manufacturing, 9(1), 1–21. https://doi.org/10.1007/s40436-020-00302-5
3. Sharma, A., Kosasih, E., Zhang, J., Brintrup, A., & Calinescu, A. (2020). Digital twins: State of the art theory and practice, challenges, and open research questions. arXiv.org.
4. Gartner. (2019). Top 10 strategic technology trends for 2019: Digital Twins. https://www.gartner.com/en/documents/3904569/top-10-strategic-technology-trends-for-2019-digital-twin
5. Lee, B. N., Pei, E., & Um, J. (2019). An overview of information technology standardization activities related to additive manufacturing. Progress in Additive Manufacturing, 4, 345–354. https://doi.org/10.1007/s40964-019-00087-5
6. Shao, G., & Helu, M. (2021). Framework for a digital twin in manufacturing: Scope and requirements. Manufacturing Letters, 24, 105–107.
7. ISO/DIS 23247-1. Automation systems and integration – Digital Twin framework for manufacturing – Part 1: Overview and general principles.
8. ISO/DIS 23247-2. Automation systems and integration – Digital Twin framework for manufacturing – Part 2: Reference architecture.
9. ISO/DIS 23247-3. Automation systems and integration – Digital Twin framework for manufacturing – Part 3: Digital representation of physical manufacturing elements.
10. ISO/DIS 23247-4. Automation systems and integration – Digital Twin framework for manufacturing – Part 4: Information exchange.
11. Laamarti, F., Badawi, H. F., Ding, Y., Arafsha, F., Hafidh, B., & Saddik, A. E. (2020). An ISO/IEEE 11073 standardized Digital Twin framework for health and well-being in smart cities. IEEE Access, 8, 105950–105961.
12. Gámez Díaz, R., Yu, Q., Ding, Y., Laamarti, F., & El Saddik, A. (2020). Digital twin coaching for physical activities: A survey. Sensors, 20, 5936.
13. Badawi, H. F., Laamarti, F., & El Saddik, A. (2019). ISO/IEEE 11073 personal health device (X73-PHD) standards compliant systems: A systematic literature review. IEEE Access, 7, 3062–3073.
14. Santos, D. F. S., Almeida, H. O., & Perkusich, A. (2015). A personal connected health system for the Internet of Things based on the constrained application protocol. Computers and Electrical Engineering, 44, 122–136.
15. Talaminos, A., Naranjo, D., Barbarov, G., Roa, L. M., & Reina-Tosina, J. (2017). Design and implementation of a standardised framework for the management of a wireless body network in a mobile health environment. Healthcare Technology Letters, 4(3), 88–92.
16. Villagrán, N. V., Estevez, E., Pesado, P., & Marquez, J. D. J. (2019). Standardization: A key factor of Industry 4.0. In 2019 Sixth International Conference on eDemocracy & eGovernment (ICEDEG) (pp. 350–354). https://doi.org/10.1109/ICEDEG.2019.8734339
17. Schweichhart, K. An introduction (Vol. 40).
18. Han, S. (2020). A review of smart manufacturing reference models based on the skeleton meta-model. Journal of Computational Design and Engineering, 7(3), 323–336.
19. IEC 61508, Parts 1–7. Functional safety of electrical/electronic/programmable electronic safety-related systems.
20. ISO. (2011). ISO 26262 Road vehicles – Functional safety. ISO Standard.
21. IEC. (2011). IEC 62439 High availability automation networks. IEC Standard (parts updated in 2011).
22. Swetina, J., Lu, G., Jacobs, P., Ennesser, F., & Song, J. (2014). Toward a standardized common M2M service layer platform: Introduction to oneM2M. IEEE Wireless Communications, 21(3), 20–26. https://doi.org/10.1109/MWC.2014.6845045
23. Viola, F., Antoniazzi, F., Aguzzi, C., Kamienski, C., & Roffia, L. (2019). Mapping the NGSI-LD context model on top of a SPARQL event processing architecture: Implementation guidelines. In 2019 24th Conference of Open Innovations Association (FRUCT) (pp. 493–501). https://doi.org/10.23919/FRUCT.2019.8711888
24. Lohr, S. (2016). G.E., the 124-year-old software start-up. New York Times. Retrieved from http://www.nytimes.com/2016/08/28/technology/ge-the-124-year-old-software-start-up.html
25. Morris, D. H., et al. (2014). A software platform for operational technology innovation. International Data Corporation, 1–17.
26. DHL. DHL supply chain partners Tetra Pak to implement its first Digital Twin warehouse in Asia Pacific. https://www.dhl.com/global-en/home/press/press-archive/2019/dhl-supply-chain-partners-tetra-pak-to-implement-its-first-digital-twin-warehouse-in-asia-pacific.html

JaeSeung Song is a professor leading the Software Engineering and Security group (SESlab) in the Computer and Information Security Department at Sejong University. He received a PhD from the Department of Computing at Imperial College London, United Kingdom, and holds BS and MS degrees in Computer Science from Sogang University. He holds the position of oneM2M Technical Plenary Vice Chair and is a director on IoT platforms at KICS. Prior to his current position, he worked for NEC Europe Ltd. between 2012 and 2013 as a leading standards senior researcher. At that time, he actively participated in IoT-related R&D projects such as the Building as a Service (BaaS) FP7 project and IoT/M2M standardization (i.e., ETSI TC M2M and oneM2M). From 2002 to 2008, he worked for LG Electronics as a senior researcher leading a 3GPP SA standards team. He has occupied leadership positions in the 3GPP and oneM2M standards groups as a rapporteur and active contributor. He is the co-author and co-inventor of over 100 international publications, conference papers, and patent applications. He also holds leadership roles (series editor and TPC co-chair) in several journals and conferences, serving as an associate editor of the IEEE Internet of Things Journal, IoT series editor of IEEE Communications Standards Magazine, and TPC Co-Chair of the IEEE Conference on Standards for Communications and Networking.
Franck Le Gall, PhD Eng. in Physics and Telecommunications, is CEO at EGM, an innovative SME focused on the integration and validation of emerging technologies. He drives company development from IoT sensor development up to the development and application of data platforms in vertical domains. Previously, he participated in large R&D projects within major industry players (Orange, Alcatel, Thomson) and spent 9 years as a director within an innovation management company. He has directed more than 10 large-scale projects and studies related to the evaluation and monitoring of innovation and technical programs, as well as research projects. He now participates in several EU research projects in domains such as water management, cities, aquaculture, and agriculture, providing technical knowledge on the whole sensors-to-applications data chain. He has authored many scientific papers as well as patents. He is vice-chairman of the ETSI ISG CIM standardization committee, chairman of the FIWARE-ETSI digital twins working group, and an elected member of the FIWARE technical steering committee.
Open Source Practice and Implementation for the Digital Twin Stephen R. Walli, David McKee, and Said Tabet
Abstract This chapter provides an overview of Open Source, beginning with its origins and specific references that trace the history of Open Source code projects, including successes and failures. This approach gives the reader an understanding of early best practices, including the roles of the teams involved in the process and critical considerations in the engineering economics of project consumption. Considerations spanning the development phase through the lifecycle of Open Source projects, covering both the partners and the projects needed to expand the ecosystem, are explored. The chapter concludes with a description of the Digital Twin Consortium's Open Source Collaboration Community reference design platform. The primary components of the reference architecture platform stack are further detailed through representative Open Source use cases. Keywords Best practices · Digital twin · Implementation · Open source · Open source economics · Open source license · Open source practice · Open source projects
S. R. Walli Microsoft Azure Office of the CTO, Redmond, WA, USA D. McKee (*) Slingshot Simulations and Co-Chair at the Digital Twin Consortium, Leeds, UK e-mail: [email protected] S. Tabet Distinguished Engineer, Dell Technologies, Austin, TX, USA © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_19
1 Introduction

Using an open-source license to share software is a powerful way to share innovation. Building a community around such projects creates an economic engine to capture innovation back into the project. This chapter considers:
• The definition of open source.
• How to think about the engineering economics of project consumption.
• The attributes of healthy project communities.
• How to think about the economics of producing open-source projects, and the business considerations when companies produce such projects.
• The role of non-profit organizations.
It then finishes with considerations of how the DTC can enable a collection of partners and projects for growth.
2 What Is Open Source

At its simplest, Open Source software is software that is shared using a license approved by the Open Source Initiative (OSI) as meeting the Open Source Definition (OSD). It is a statement of outbound sharing, with no expectation in return. The definition is now more than 20 years old. The U.S. Congress applied copyright law to computer software in 1980. For the previous decades, software had been freely shared by collaborating developers. The IBM conference that began in the 1950s was called SHARE. DECUS conferences thrived from the 1960s through the 1980s around DEC computing equipment, and a regular aspect of the conference was the acquisition of tapes full of liberally shared software. USENIX began in the 1970s around UNIX systems research and development. In 1980, sharing suddenly required a copyright license. This provoked 18 years of experimentation with licenses to allow developers to continue to collaborate. Early work at MIT on Project Athena saw DEC, HP, IBM, and others collaborating on X11 and Kerberos, leading to the MIT license. UC Berkeley's CSRG was the hub for all the BSD UNIX work leading to the BSD license that would become the heart of Sun Microsystems' SunOS and DEC Ultrix. Stallman articulated software freedom, and we saw the rise of the GNU projects and the experiments and evolution of the GNU General Public License. The Perl world was where system administrators traded code under the Artistic License. Netscape created the first attempt at a corporate collaboration license with the first Mozilla license. The Apache world thrived in the wild under an Apache-named hack of the BSD license. In 1998, all these experiments were brought together under the Open Source Definition (OSD), based on the Debian Free Software Guidelines. The Open Source Initiative (OSI) was formed as caretaker of the definition, and one of its primary
functions is to host the open discussions that apply the Open Source Definition to licenses to determine whether a license qualifies. Anyone can participate in such discussions, but the important takeaway is that the participants collectively span the breadth and depth of the past 40 years of collaboration since copyright was applied to computer software. Collectively, this broad, diverse, evolving group of engineers, lawyers, business leaders, and collaborators understands more about software collaboration than any simple idea of what 'open source' should be. What makes the Open Source Definition important isn't that it was 'new' 20 years ago, but rather that it was the framework observed to drive collaboration best after almost 20 years of experimentation, and it has stood the test of the 20 years since. Said differently, the Open Source Definition is 20 years old, but software collaboration liberally licensed under copyright is 40+ years old, and software collaboration itself is basically 70 years old. The OSD is the set of ten tenets that best drives a particular broad system of software collaboration. Change the definition and it will balkanize the very engineering economics on which we collectively succeed.
3 The Engineering Economics of Open Source Software (Component Consumption)

Every product, project, and development manager understands the traditional build-versus-buy decision. With well-crafted, neutrally owned, OSI-licensed software components, one extends the build-versus-buy decision to include "borrow and share". The engineering economics of consuming open-source-licensed project components is compelling. Here is an example to demonstrate the idea. Interix was a product in the late 1990s that provided a UNIX-like face on Windows NT. It encompassed ~300 software packages covered by 25 licenses, plus a derivative of the Microsoft POSIX subsystem, plus the Interix team's own code. This was before the open-source definition. The product team started with the 4.4BSD-Lite distribution because that was what the AT&T/USL lawyers said was free of their software. The gcc compiler suite would provide critical support for the Interix tool chain (beyond the Microsoft compilers), as well as an SDK to enable customers to port their UNIX application base to Windows NT. It took a senior compiler developer on the order of 6–8 months to port gcc into the Interix environment. It was a little more work once testing, integration, etc. were included, so the engineering investment was on the order of $100K. The gcc suite was about 750K lines of code in those days, which a COCOMO calculation suggests was worth $10M–$20M depending on how much developers were earning. That is roughly two orders of magnitude in cost savings compared with writing a compiler suite from scratch, and this was a well-maintained, robust, hardened compiler suite, not a new creation built in a vacuum. That is the benefit of using open-source-licensed component projects.
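As a rough check on the figure quoted above, the basic (organic-mode) COCOMO model estimates effort as 2.4 × KLOC^1.05 person-months. The sketch below applies it to the ~750K lines of gcc; the salary assumptions are illustrative and not taken from the original Interix analysis.

```python
def cocomo_organic_effort(kloc: float) -> float:
    """Basic COCOMO effort estimate (organic mode), in person-months."""
    return 2.4 * (kloc ** 1.05)

kloc = 750  # ~750K lines of code in the gcc suite of that era
effort_pm = cocomo_organic_effort(kloc)          # roughly 2,500 person-months
print(f"Estimated effort: {effort_pm:,.0f} person-months")

# The dollar value depends on what developers were earning; two illustrative salaries.
for annual_salary in (55_000, 90_000):
    value = effort_pm * annual_salary / 12
    print(f"At ${annual_salary:,}/year: ~${value / 1e6:,.1f}M")   # ~$11M to ~$19M
```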
One can see a similar net return in the 10% year-on-year investment Red Hat makes in its Linux kernel contributions as it delivers Fedora and RHEL. Once the initial work was completed, however, the Interix team was living on a fork of the main gcc project. This meant they were drifting further away from the functionality and fixes on the mainline of gcc development. A back-of-the-envelope estimate suggested that every new major revision of gcc would cost the Interix product team another 6+ months to re-integrate, but if the Interix changes could be contributed back upstream and integrated into the mainline code base, the team was looking at a month of integration testing instead. From approximately $100K, the engineering costs dropped toward $10K–$20K, i.e., another order of magnitude cheaper by not living on a fork. The team approached the preeminent gcc engineering consulting company (employing several gcc committers). The price to integrate quoted to the Interix team was approximately $120K, but the consultants were so successfully oversubscribed with other gcc work that they couldn't begin for 14 months. Another gcc consulting company would charge only around $40K and could begin the following month. Understand that part of the challenge was that there were five project communities hiding under the gcc umbrella. While some projects respected the quality of engineering from the Interix team, others were hostile to the fact that Interix ran on Microsoft products. Hiring available consultants who were gcc project maintainers was an expedient way to get the work upstream, shortening the time on the fork. This wasn't contributing back out of altruism; it was engineering economics. It was the right thing to do, it contributed back to the hardening of the compiler suite that the Interix team was itself using, and it gained the additional break in the cost of engineering and managing forks. It is what makes well-run open-source projects work. There are critical requirements on the components being consumed:
1. The OSI-licensed project community needs to be healthy with respect to how it makes decisions, how it arrives at consensus, and the software engineering processes in place to manage growth and software change.
2. The OSI-licensed project intellectual property needs to be neutrally held. A project owned by a company with most of the contribution flow coming from that company's engineers can change the license at any time. A project owned by a company with a healthy contribution flow from external developers can still change licensing, but it is much harder. A neutrally held project (e.g., one whose intellectual property is held by a non-profit) can still be controlled by a company if most of the developers in positions of authority in the project have a single employer, but it is harder still to manipulate.
Consuming OSI-licensed component projects into products and services, while participating in the project community and contributing changes back into the project, is the essential economic engine of success for well-run open-source projects.
4 Healthy Open Source Licensed Projects

An open-source licensed project is the functional unit of sharing. Building a healthy community around such a project is the basic unit of work. A healthy flow of contributions into a project community is the lifeblood of the project. Let's look at what makes a healthy OSI-licensed community project from a structural perspective. Using inbound contributions rather than outbound use is a deliberate choice. If one thinks about outbound uses and users, they can be difficult to count.
• Did someone star a GitHub repo because they are a simple end user, because they follow the developer, or because they are a developer using the source?
• Did someone clone/fork a repo and use it, abandon the fork, or simply review and learn from it?
• Counting downloads is prone to similar problems.
Looking at the inbound contribution flow, regardless of what one counts as a contribution, one can have some confidence that the contributor is an active user. Projects have simple role definitions:
• Maintainer: a primary author of a project with full privileges to write to the project directory tree. There are typically only a few core maintainers in a project.
• User: any person that is using a software project for its intended purpose.
• Developer: any person using a software project but furthermore also modifying the source to their own selfish needs.
• Contributor: any person offering a direct artifact back to the project, including source code patches, bug reports, configuration, and documentation.
A maintainer has a responsibility to the project. The creator or creators of a project are the authors and maintainers. They agreed on the initial license as their outbound social contract. They published the initial software, sharing it outbound using a liberal (OSI-approved) license in the hope that others would find it useful. They set the direction and roadmap for the project. They share the responsibility for the outcomes. Users enjoy the fruits of the labor of the project maintainers and contributors. They still identify with the tribe around the project and act as advocates for the project. Developers are users that have taken the further step of downloading the project source to extend the project to some personal end. It could be to fix a bug. It could be to add substantial new functionality. Contributors give back to the project. In the earlier discussion, we saw how living on a fork of a project as a developer can be costly over time. One isn't keeping up with the new functionality or bug fixes that may be flowing back into the project. Not having one's extensions and fixes accepted back into the main project tree means the cost of each new integration is higher. It pays to contribute back any project extensions. But it also pays to contribute back other artifacts.
• Contributing a bug report still helps the project. The bug indicates a new test path or use. While the reporter may not know how to fix the bug, others might.
• Contributing new configuration information broadens the user base.
• Contributing documentation (answering forum questions, creating tutorials, etc.) broadens the user and developer base.
• Forum time answering questions broadens the user and developer base.
• Translations (of program text strings, e.g., error messages, as well as documentation) broaden the user base.
From an open-source project community perspective, maintainers need to drive three on-ramps into a project to build a healthy flow of contributions:
1. How to encourage people to use the project? (That is where project maintainers will find bug reporters and developers.)
2. How to encourage developers selfishly to experiment? (These are the future potential contributors.)
3. How to encourage developers to share their work back to the project? (One can't rely on developers contributing unless it is easy.)
Another way to cast these questions is:
1. How to make it easy to install, configure, and use the software project?
2. How to make it easy for a developer without prior knowledge to build and test the software project to a known state?
3. How to make it easy to contribute changes back to the project maintainers?
4.1 Project Activities

There are two groups of activities, from the perspective of the maintainers, needed to support the on-ramps:
• Software construction activities.
• Community development activities.
The Open Source community practices can thus be classified into two distinct groups: (1) those that pertain to the development of the community, and (2) those that pertain to the actual development and construction of code (Fig. 1).
4.2 Software Construction Activities

These are the block-and-tackle activities for the maintainers that put in place the infrastructure to make it easy for users to adopt the project and for developers to consume the project, and hopefully turn those developers and users into contributors.
Fig. 1 Open source community practices, arranged along two axes: community development (project license, forums and email, FAQs and how-tos, mission statement, communications platform, code of conduct, contribution guidelines, governance, events) and software construction maturity (published executables, automated install, complete published source, bug tracking, automated build I/II, automated test I/II, basic architecture description)
This list is not meant to be perfect nor comprehensive, but it gives a sense of how certain activities should be considered and their intent.
1. Project executables are built and available on known platforms.
2. Project executable software has an automated installer for known platforms.
3. Bug tracking or issue tracking is available.
4. Complete source is published and easy to download.
5. Project build is automated or scripted for known platforms.
6. Project can be tested to a known state for known platforms.
7. Project build is sophisticated enough to support easy contributions.
8. Project testing is sophisticated enough to support easy contributions.
9. Architectural documentation exists as well as roadmap discussions.
4.3 Community Development Activities

These are the building-block activities for the maintainers that begin to create a sense of community for users, developers, and contributors, and that hopefully turn those users and developers into contributors. Again, this list is not meant to be perfect or comprehensive, but it gives a sense of how certain activities should be considered and their intent.
1. The project license is easy to find.
2. There is easy on-boarding documentation, such as a Frequently Asked Questions (FAQ) list, how-to documents, or startup tutorials.
3. There is an easy engagement mechanism, such as an IRC channel, email distribution list, or forum.
4. There is a clear mission statement.
5. There is a Code of Conduct.
6. There is a well-organized communications 'platform' with clear direction on which channels to use for which purpose.
7. There are contribution guidelines.
8. The project governance is well documented.
9. There are real-world events such as conference Birds-of-a-Feather (BoF) gatherings or meet-ups.
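A maintainer can get a quick read on how many of these community on-ramps exist by checking for the usual artifacts in the repository. The sketch below is illustrative only; the file names are common conventions rather than requirements, and presence of a file is of course no guarantee of its quality.

```python
from pathlib import Path

# Illustrative file-name conventions; projects vary in naming.
EXPECTED = {
    "project license": ["LICENSE", "LICENSE.md", "COPYING"],
    "on-boarding docs": ["README.md", "FAQ.md", "docs/tutorial.md"],
    "code of conduct": ["CODE_OF_CONDUCT.md"],
    "contribution guidelines": ["CONTRIBUTING.md"],
    "governance": ["GOVERNANCE.md"],
}

def audit_community_files(repo_root: str) -> dict:
    """Report which community-development artifacts are present in a repo.

    Absence is a reliable signal that an on-ramp is missing, even though
    presence alone says nothing about how good the document is.
    """
    root = Path(repo_root)
    return {
        item: any((root / candidate).exists() for candidate in candidates)
        for item, candidates in EXPECTED.items()
    }

if __name__ == "__main__":
    for item, present in audit_community_files(".").items():
        print(f"{'OK     ' if present else 'MISSING'}  {item}")
```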
Fig. 2 Open source community patterns
The activities in these two collections (software construction and community building) also group nicely around which group of people one is looking to engage from an on-ramp perspective (Fig. 2). The figure groups the activities by level of involvement and participation and by type of stakeholder (User, Contributor, or Core Developer). All of these are essential, normal activities when delivering software at scale, as stated at the start of the section.
5 The Engineering Economics of Open Source Software (Project Production)

There are two separate cases to understand for producing a project, as opposed to consuming (and contributing to) one. The first and simplest is an individual owner sharing a software project outwardly under an OSI license, which we will refer to as an "in the wild" project. The second is a company producing a project and publishing it under an OSI-approved license.
5.1 "In the Wild" Projects

The first, "in the wild" case is interesting. Why would people share their inventions and hard work for no apparent monetary gain? As it happens, this is a well-researched space. Many innovators and inventors don't necessarily want to create a company, license their technology, or hire lawyers to protect their inventions, and are happy to share. They often become the expert identified with the work and are happy (and well rewarded) to consult and train others in the space.

A lot of this was originally documented in the 1980s by Eric von Hippel [1], studying sailboarding. Many happy innovators were hacking their boards, sails, masts, and booms on the beach, and sharing their ideas with one another. Sailboarding companies (e.g., Windsurfer) didn't want to invest heavily in R&D at the extreme edge but would happily support these extreme inventors, and the inventors were the cutting edge of the field. They were sponsored, trained others, and consulted. These experts never lacked for work or sponsorship, and they got to do what they loved.

This works in the software space as well. Perl is an easy example. Larry Wall created a scripting language in the UNIX system administration domain in the 1980s [2]. He happily shared it and continued to maintain the language project.

Healthy projects have a solid base of software engineering practices to ensure the project can scale with the number of users, developers, and contributors, and with the complexity of the software itself as it evolves. These practices fall into the two broad collections of activities already described: software construction discipline (e.g., version control, test and build automation, packaging, and release management) and community development activities (e.g., how-to tutorials, bug and feature tracking, OSI licensing clarity). Projects can grow well with this simple structure, making it easy for the original inventor-maintainer to grow a small core of maintainers to support a growing community.

Eventually, however, the project's growth hits a ceiling defined by liability and risk. A maintainer's personal risk for hosting a meetup or mini-conference, chasing and defending a project trademark, or holding the "bank account" for simple expenses can become too great. Likewise, if the project is gaining corporate/commercial followers that want to use the project in products and services, the
corporate risk and liability also increases when the project is held closely by an individual (or a very small group of individuals).

This is the time when a non-profit legal structure can be created to support the project. Non-profits remove the personal risk and liability from core maintainers and will be discussed further in the next section. They also become a neutral home for the intellectual property of the project, allowing commercial interests to feel more confident in using the project in products and services, and in contributing back to the project for its long-term economic value. Increasing the steady pool of contributors to the project also establishes a broader ecosystem around it. Recognize that the project's growth is still defined by the engineering practices and community ethic of the project participants.

Publishing software projects with an OSI license is sharing innovation. Organizing a community (accepting contributions) captures innovation back to the project. That contribution flow is the lifeblood of a healthy OSI-licensed project and requires the work of building on-ramps for users, developers, and contributors. A well-run community can capture twice the value of the software from outside the "core" team. The GeoNode project supported by the UN [3] shows this level of innovation value capture. This assumes an open community, not simply OSI-licensed publishing and free executables from a for-profit company.
5.2 For-Profit Companies Producing OSI-Licensed Projects

This is the second, more complex and nuanced type of production of an open-source project. A company essentially produces three types of software:
• Direct support for a company's core value proposition to customers. (The software embodies the solution the customer buys.)
• Complement value-add to the core value proposition. (Few products stand alone. Building a 'complete' solution makes the core value proposition more valuable and 'sticky'.)
• Simple context that doesn't drive core value. (It is the tooling one develops to support the development of the business.)
5.3 Context Projects

Sharing a software project under an OSI license in a context space, and building a community around it, has several benefits for a company:
• Validates the approach to a problem with an external test bed of users.
• Captures valuable innovation from outside sources.
• Demonstrates expertise that can be used in recruitment.
• Improves the quality of recruitment candidates by training and pre-qualifying them.
• Demonstrates committed values to collaboration amongst developers that further recruitment (and partnership) goals.

The community building activities can remain simple because this isn't a revenue-producing engagement with customers or partners. A non-profit isn't necessary because the company's legal structure can absorb any liability risk. Providing a neutral non-profit home isn't necessary for growth because the project isn't contributing to bottom-line growth; the project isn't strategic. The core maintainer(s) of the project dictate direction and accept contributions.

Easy examples of such context projects are many of the OSI-licensed projects from companies like Netflix (e.g., Chaos Monkey, Spinnaker) [4]. There is enough business value to be captured in outbound sharing of the context project that it is worth publishing and maintaining a simple community. There is nothing strategic for growth of the primary business (video streaming to all your devices, in the Netflix example) that would need a neutral non-profit home for partner coordination.
5.4 Complement Projects

Building an OSI-licensed project community in complement value-add spaces is an interesting step. All the benefits of sharing projects and building community in context spaces accrue in this type of project, but there are business benefits that are also more strategic because they read on the core value of the solution to customers:
• Creates stickiness/inertia for the core value proposition to customers.
• Creates a more "whole product" experience in conjunction with the core value proposition.
• Creates experts, advocates, and evangelists around the technology.
• Hardens the complements with new configurations and contributions.
• Captures direct value to the complements (and indirect value to the core).
• Is possibly disruptive to competitors.

To a certain extent, this is a variation on the "give away razors to sell razor blades" playbook. A company is making a complement freely available. It needs to be willing to absorb the entire cost of development of both the software and its community, but if the investment in community building is done well, the software engineering costs of developing the project component can be halved, as can the cost of community development.

A non-profit isn't necessary to remove liability risk, but it is a very useful mechanism to signal neutrality to partners collaborating around an OSI-licensed project that is strategic and complementary to a set of partners' businesses. Depending on the cost structure of the non-profit, the organization around the project can be used to defray community building costs. One can see the difference in two examples:
• VS Code from Microsoft is a strong complement supporting Microsoft's core businesses and a belief in supporting developers, but doesn't require a non-profit for partner collaboration.
• Google created the Kubernetes project, strategically moving the industry conversation around cloud-based application deployment from virtual machine images and AWS AMIs to containers on Kubernetes. Google worked to create the Cloud Native Computing Foundation with strategic partners as the center of gravity for messaging around Kubernetes and a growing ecosystem of OSI-licensed projects [5]. While Google gave up brand control (IP control) of Kubernetes, it maintained project control through direct participation.

Fifty percent of the code and innovation in VS Code, TypeScript, and .NET Core from Microsoft, and in Kubernetes from Google, comes from the community.
5.5 Core Value Proposition Projects

Publishing your core value proposition under an open-source license needs careful consideration. If you're a company whose core competency is the development and distribution of software in a particular problem domain, then publishing your core value proposition under an open-source license:
• Makes a potential partner into a competitor, because other companies may also have the resources to turn your project into competing products. (They need not compete directly; they may simply cover a similar solution space from a customer perspective.)
• Allows users with an advanced understanding of their IT environment (and how they invest in it) to solve their problems with the project without paying you for your product/service/solution. They may make a very different buy-versus-build decision from the one you want them to make, using 'borrow and share' with your open-source licensed project.
• Creates market confusion for your sales and marketing team from similar products built from your project. While you can claim copyright, idea ownership, and expertise, you're still losing the center of gravity in messaging.

Many companies try to share the software that enables their core value proposition to customers using an OSI-approved license and then struggle with business model design problems and branding problems. One needs to run a business. The OSI license is a statement of outbound sharing with no expectations in return. A company's business model is grounded in the idea that customers pay for a solution that solves their problems. Most of the complaining about the inability to build businesses using the current OSI-approved licenses seems to come from companies creating open-source projects without thinking about the business model design problem, or rather incorrectly believing that they can convert a community member into a customer.
This thinking around core, complement, and context has been captured in multiple ways by Geoffrey Moore, all the way back in 'Crossing the Chasm' (1991) [6], where he discusses core value propositions and complements, and later core competency (enabling core value propositions) versus context in 'Dealing with Darwin' (2005) [7]. The idea of value moving around a product/service network over time is further supported by Christensen in 'The Innovator's Dilemma' (1997) [8].

Early in 'The Innovator's Solution' (2003) [9], Christensen offered the following tests for a new disruptive business, to be answered affirmatively for each of the competitive groups listed in the business plan.

For a new-market disruption, at least one (and generally both) of the following questions needs to be true:
1. Is there a large group of people who historically have not had the money, equipment, or skill to do this thing for themselves, and as a result have gone without it altogether or have needed to pay someone with more expertise to do it for them?
2. To use the product or service, do customers need to go to an inconvenient, centralized location?

For a low-end market disruption, the following two conditions must be true:
1. Are there customers at the low end of the market who would be happy to purchase a product with less (but good enough) performance if they could get it at a low price?
2. Can we create a business model that enables us to earn attractive profits at the discount prices required to win the business of these overserved customers at the low end?

Then, once you know whether you have a new-market or a low-end market disruption: is the innovation disruptive to ALL the significant incumbent firms in the industry? If it appears to be a sustaining innovation to one or more significant players in an industry, then the odds are stacked in that firm's favor, and the new entrant is unlikely to win.

If you're a company whose core competency is the development and distribution of software in a problem domain, we have seen that building a community around an OSI-licensed project in complement value spaces can provide real engagement and value. But by publishing your core value proposition, you have failed all of Christensen's tests that would have given your new-market or low-end market business the advantages it needs to succeed. You have given incumbent firms the tools to compete. You have given potential customers the tools they need to ignore your value proposition, and the agency in community to build their own solution.

If you invest in building a community around the project that sits on your core value proposition:
• You create tension when competitors, [potential] partners, and advanced IT users contribute value they want into your core value proposition. The power of innovation capture in a community around a complement becomes the problem
of innovation dilution in your core value proposition. (This may have software engineering implications, as the "simple" solution is being expanded into places the design wasn't necessarily created to accommodate. It may also have social implications in the community, as managing the lines of communication gets messier: the cost of Brooks' Law.)
• You are accelerating the creation of a community of early-adopting users that aren't interested in paying for your software, instead of creating Moore's early-adopting customers that understood your product solution sufficiently to validate it by giving you money.

This can perhaps best be summed up by an observation from Mårten Mickos, CEO of MySQL Inc., in 2006 [10]: community members have time and no money; customers have money and no time. There is no direct conversion path (or ratio) from community member to customer. While community members may evolve into consumers over time, based on being adept at the technology, a complement-space buyer of a complementary product is much more likely than a direct community member conversion.
5.6 Projects, Products, and Branding Challenges

Brand management is an important skill for a company. It isn't an open-source skill. In the '80s, Relational Technology Inc. (RTI) rebranded the company to Ingres Inc. because that was the product the customer bought; RTI wasn't as meaningful. Red Hat rebranded Red Hat Linux, their first-generation product, into Red Hat Enterprise Linux, their second-generation product, and created Fedora as a nicely linked brand for a fast-moving, innovative, community-focused open-source project.

This is perhaps the corollary to the Mickos observation: projects aren't products. Projects are the community's focus, and building community in complement spaces is a powerful developer-relations way of creating sticky community members for your technologies. Products and services are what customers buy. The brands need to be different. Parking your identity brand on any open-source project you own, instead of on the product/solution your customers buy, creates confusion in your message to customers.
5.7 Evolving Non-Profit Engagements

As discussed, two problems will arise in a project's growth, regardless of project ownership, to constrain that growth.
• Companies wanting to use the project in their solutions to customers often want clear IP practices and neutral or clarified ownership, or they will not participate.
• Project maintainers cannot sustain the personal liability for the project, for activities like events, or for the costs of the infrastructure out of their own pockets.

Non-profit organizations solve these problems, allowing new corporate users to participate and the project message to grow. Bringing a project to an existing non-profit (or creating a new non-profit) benefits the project's growth (Fig. 3):
• The non-profit adds stability to a project. The non-profit can:
  – Provide direct services out of the budget (website, infrastructure, communications, etc.).
  – Provide mentorship and experience (e.g., tooling, processes, governance, Code of Conduct, etc.).
  – Guide, aid, and support capturing attention (events, meetups, collateral).
  – Provide a legal backstop to protect the project.
• The non-profit removes liability from the project maintainers (holding the bank account, signing the contract for meetups, etc.).
• The non-profit is a center of gravity in the industry to attract relevant attention to the project from new participants, members, and the industry broadly (an opportunity to attract users, then developers, then contributors to the project).
• The non-profit, as a holder of IP rights, provides a more neutral space and known IP governance, encouraging further use of projects in product and service portfolios and project growth from other companies.
• The non-profit provides a community focus for discussions and collaborative innovations across related projects.

An interesting, simple demonstration was an early non-profit in the late 1990s. The Apache httpd project had been growing well for a few years.
Fig. 3 Non-profits enabling the open source evolution
Companies wanted to include the project as a component in their product offerings to customers (e.g., IBM wanted to use Apache httpd in WebSphere, its line of products oriented to building websites). The IP controls in the Apache project were non-existent, and the liability on the maintainers would have been alarming if the software project were used in IBM products. IBM and others helped fund the Apache Software Foundation into existence, organizing the intellectual property management better, re-writing the Apache Software License 1.0 (a cribbed version of the BSD license) into the more commercially understood Apache 2.0 license, and removing liability from the maintainers, who were now legally protected by the non-profit corporation [11]. The growth in the code base from pent-up modifications was startling: the code base tripled in the first quarter after the formation of the non-profit (Fig. 4).

The idea of growth holds more generally. In 2010 an engineer performed an enormous data crunch across a data set of open source projects and demonstrated that the nine largest, most vibrant project collections were all protected inside non-profit foundations; the tenth largest was inside a company.

Companies that own and control an open-source project have a slightly different set of things to consider. While the maintainers are employees and have no personal liability risk, other companies are seldom interested in using and contributing to a project owned and controlled by one company: it puts their software supply chain of open-source licensed components into products at risk. Again, putting the open-source licensed project into a non-profit creates a more neutral environment for collaboration.

These non-profit organizations are corporations. They are typically structured in one of two ways:
Fig. 4 Evolution of open source as seen through the growth of the code base of the Apache Software Foundation (from 27,623 lines of code in May 1999 to 87,571 by August 1999, following the foundation's formation in June 1999)
Fig. 5 Open source community patterns with commercial participation
• For the common good (i.e., a charitable organization providing services for the good of society). Many of the original non-profits were organized this way (e.g., the Free Software Foundation, the Perl Foundation, the Python Software Foundation, the Apache Software Foundation).
• For technical corporate members (i.e., companies are members instead of individuals and can collaborate, protected from anti-trust rules). Most modern non-profits are organized this way (e.g., the Eclipse Foundation, the Linux Foundation, the OpenStack Foundation, the Digital Twin Consortium).

Just as there is a set of activities that a project's maintainers work on to grow a healthy community and ensure scaled software delivery, there are additional activities (performed in the context of the non-profit) that enable companies to participate more easily and remove liability from the maintainers as individuals (Fig. 5). This essentially creates a fourth on-ramp, for commercial participation, that enables the next wave of growth.
6 The Digital Twin Consortium Open Source Collaboration Community Bringing everything together, one can see that the Digital Twin Consortium (DTC) acts as a center of gravity for a set of partner members to work together on growing a collection of OSI-licensed projects to share innovation broadly to everyone’s benefit, and to build community together to capture innovation back to the projects.
The following section covers the DTC Reference Architecture Stack-up, part of the work product from the Platform Stack and Open Source Group within the DTC's Technology, Terminology and Taxonomy Working Group. This information is referenced as part of the DTC Glossary [12], a project of the DTC Open Source Collaboration Community:
• The primary components of a digital twin system are abstracted into four key layers (bottom-up): IT/OT platform, virtual representation, service interfaces, and applications & services. Key points:
  –– The virtual representation is the core, with service interfaces for integration and interoperability, including elements of synchronization.
  –– The value to all the different stakeholders is realized through applications & services.
  –– Digital twin systems run on IT/OT platforms.
A minimal code sketch of this layered stack follows.
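The sketch below is purely illustrative and not from the DTC Glossary: the class and field names are assumptions introduced for this example, chosen only to show how the four abstracted layers relate to one another.

```python
from dataclasses import dataclass, field

# Illustrative encoding of the four abstracted layers (bottom-up).
# Names and fields are assumptions for the sketch, not DTC definitions.

@dataclass
class ITOTPlatform:
    compute: str
    storage: str
    networking: str

@dataclass
class VirtualRepresentation:
    stored_representations: list = field(default_factory=list)
    computational_representations: list = field(default_factory=list)

@dataclass
class ServiceInterface:
    endpoint: str   # digitally addressable endpoint
    protocol: str   # protocol other systems use to interact

@dataclass
class DigitalTwinSystem:
    platform: ITOTPlatform
    representation: VirtualRepresentation
    interfaces: list                     # ServiceInterface instances
    applications_and_services: list      # use-case-driven consumers

if __name__ == "__main__":
    twin = DigitalTwinSystem(
        platform=ITOTPlatform("cloud", "object store", "5G/edge"),
        representation=VirtualRepresentation(["BIM model"], ["thermal simulation"]),
        interfaces=[ServiceInterface("https://twin.example/api", "REST")],
        applications_and_services=["visualization dashboard", "analysis service"],
    )
    print(twin)
```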
6.1 Components of a Digital Twin System (Fig. 6)

Each of the designated layers in the abstract stack is presented in the following sections, as excerpted from the recently published DTC Glossary (glossary (digitaltwinconsortium.org)) that resides within the Digital Twin Consortium Open Collaboration Community GitHub repository: Digital Twin Consortium (github.com).
Fig. 6 Abstract stack of a digital twin system outlining 4 key layers of components (bottom-up): IT/OT platform, virtual representation, service interfaces, applications & services
Fig. 7 IT/OT platform layered components
6.1.1 The IT/OT Platform

The IT/OT platform is the set of Information Technology and Operational Technology infrastructure and services on which the subsystems of a digital twin system are implemented. Subsystems of the IT/OT platform include (Fig. 7):
• A software platform and tooling stack.
• Platform APIs.
• Orchestration of low-level infrastructure.
• Compute, storage, and networking infrastructure.

The compute, storage, and networking infrastructure, as well as the orchestration middleware, underpins all aspects of the digital twin system. For the purposes of this reference architecture, we do not explore how this supports the cyber-physical elements of the system, i.e., the "real world entities and processes".
Any of these could be cloud-based, on-premises, embedded, mobile, distributed, or a hybrid of the above. Compared to the approaches listed under Comparable Approaches, this relates specifically to the communication layer from RAMI 4.0 and the communication and physical layers from the Industry 4.0 approach, and it incorporates the IIC communication and networking frameworks. It also correlates to the IoT stack within the IBM architecture and to the data platform and hybrid infrastructure layers of the AWS stack.

6.1.2 Virtual Representation

A virtual representation is a complex, cohesive digital representation comprised of stored representations, computational representations, and supporting data which collectively provide an information-rich "virtual" experience of their subject matter. The integration representation/function of a digital twin system "virtually" joins information of various kinds together into the cohesive, multi-faceted representation of reality that we call a "virtual representation" (Fig. 8).
Fig. 8 Virtual representation layered components
Fig. 9 Service interfaces for synchronization and data interoperability
6.1.3 Service Interfaces

A system's service interface is a digitally addressable endpoint that implements a protocol through which other systems and services may interact with the system.
• Each subsystem may have its own service interface, e.g., a virtual representation service interface, a service interface for a particular kind of stored representation, or a visualization service interface, etc. (Fig. 9).

For context, the service interfaces are shown interfacing to the real world and to data, with the requisite synchronization and data interoperability.

6.1.4 Applications and Services

Digital twin applications are software applications that leverage digital twin services and/or service interfaces. These applications may be considered to be "using" a digital twin system or to be "part of" such a system, and they are driven by use-cases.
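To make the relationship between a service interface and an application that leverages it concrete, the following minimal sketch is offered. It is not part of the DTC glossary; the names, the dotted query path, and the alerting logic are assumptions introduced for illustration.

```python
from dataclasses import dataclass

@dataclass
class VirtualRepresentationService:
    """A digitally addressable endpoint other systems interact with."""
    state: dict

    def query(self, path: str):
        # Resolve a dotted path such as "pump1.temperature" against
        # the stored representation.
        node = self.state
        for key in path.split("."):
            node = node[key]
        return node

def overheat_alert_app(service: VirtualRepresentationService, limit: float) -> bool:
    """A digital twin application: it only uses the service interface,
    never the underlying stored representation directly."""
    return service.query("pump1.temperature") > limit

if __name__ == "__main__":
    svc = VirtualRepresentationService(state={"pump1": {"temperature": 78.3}})
    print("alert:", overheat_alert_app(svc, limit=75.0))
```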
Digital twin services are functional subsystems of a digital twin system that provide value by leveraging the digital twin. Examples of digital twin services include:
• Visualization services for web, mobile, desktop, Virtual Reality (VR), and Augmented Reality (AR) devices.
• Analysis services of various kinds.
• Many other kinds of services that will be enumerated later.

Most digital twin services provide a service interface for use by other services and digital twin applications (Fig. 10).

Though not indicated in the four higher-level layers, the critical area of trustworthiness must be considered to ensure the security, trust, and governance of the digital twin system (Fig. 11). In-depth detail is provided in the Security Framework document from the comprehensive work of the Industry IoT Consortium (Security Framework (iiconsortium.org), IIC:PUB:G4:V1.0:PB:20160926) [13].

Figure 12 combines the abstracted layers above with the trustworthiness aspect to provide an overall view of the DTC reference architecture platform stack. For additional detail, the reader is referred to the Digital Twin Open Collaboration Community GitHub repository: Digital Twin Consortium (github.com) [14].

Figure 13a highlights the correlation between the reference architecture and the "Buildings as batteries" use case, where the objective is to provide an approach to the decentralization of the power grid that is commercially viable at scale.
Fig. 10 Application and services layered components
Fig. 11 Convergence of IT and OT trustworthiness
Fig. 12 Digital Twin Consortium (DTC) reference architecture platform stack
Fig. 13a Digital Twin Consortium (DTC) reference architecture platform stack and an example use case – “Buildings as batteries”
Figure 13b highlights the correlation between the reference architecture and the Emergency Communications System use case, where the objective is to allow for real-time interactions of operators and of emergency operations. Figure 13c highlights the correlation between the reference architecture and the EColCafe use case, where the objective of the EColCafe is to showcase an open source training platform that unifies manufacturing and distribution processes through a shared language using open source.
Fig. 13b Digital Twin Consortium (DTC) reference architecture platform stack and an example use case – emergency response system
Fig. 13c Digital Twin Consortium (DTC) reference architecture platform stack and an example use case – EColCafe
Figure 13d highlights the correlation between the reference architecture and a quality-control-focused, high-speed, cobot-based system. This use case objective is focused on manufacturing flexibility, with a virtual copy of the manufacturing cell including VR manipulation and inspection.

Further description of the preceding figures is available for review as part of the Technology Showcase on the DTC public site. The use case examples shown in Figs. 13c and 13d are part of the Digital Twin Consortium Open Collaboration
Fig. 13d Digital Twin Consortium (DTC) reference architecture platform stack and an example use case – high speed pick and place cobot based system for quality control
Community and can be accessed through the DTC GitHub site: Digital Twin Consortium (github.com). The GitHub process and tooling are used to enable revision requests or to build new projects using existing open source projects. The DTC Glossary provides further description of the terminology used to describe the layers of the platform stack. The process to submit and incorporate additional content and/or terminology is outlined on the DTC GitHub site: digitaltwinconsortium/dtc-glossary: A glossary of terms related to digital twins and digital twin technology (github.com).

In the spirit of open source and the DTC Open Source Collaboration Community, readers can submit their own projects and contributions for approval using the Project Application Form, with its guidelines and assessment criteria, accessed on the Digital Twin Consortium Open Source site (Open Source | Digital Twin Consortium®), or access the DTC repository of Open Source Projects for reference and for adding their own contributions to a wide variety of digital twin projects: Digital Twin Consortium (github.com).

Summary

This chapter has covered many open source fundamentals, including historical and current examples, while opening the door to future open source project proposals, in the hope of enabling interested parties to participate in the Open Source Collaboration Community with an open source contribution. In addition to the fundamentals, this chapter described the composition of a "healthy team" for an open source project, along with the business and economic considerations associated with open source project contributions for both non-profit and for-profit organizations. It also described the Digital Twin Consortium Open Source Collaboration Community as a means of illustrating not only a platform for assistance but also an open source project that serves as the Digital Twin Consortium reference architecture platform stack-up.
In the concluding section, an open source use case illustrated a real-world application based on the DTC reference architecture, showing a collection of partners and projects for future growth.
References

1. Eric von Hippel, on user innovation in sailboarding.
2. Larry Wall, creator of the Perl scripting language for UNIX system administration in the 1980s.
3. The GeoNode project, supported by the UN.
4. OSI-licensed projects published by Netflix (e.g., Chaos Monkey, Spinnaker).
5. The Cloud Native Computing Foundation, created with strategic partners as the center of gravity for messaging around Kubernetes and a growing ecosystem of OSI-licensed projects.
6. Geoffrey Moore, Crossing the Chasm (1991).
7. Geoffrey Moore, Dealing with Darwin (2005).
8. Clayton Christensen, The Innovator's Dilemma (1997).
9. Clayton Christensen, The Innovator's Solution (2003).
10. Mårten Mickos, CEO of MySQL Inc., 2006.
11. The Apache 2.0 license.
12. The DTC Glossary, a project of the DTC Open Source Collaboration Community.
13. Security Framework (iiconsortium.org), IIC:PUB:G4:V1.0:PB:20160926.
14. Digital Twin Open Collaboration Community GitHub repository: Digital Twin Consortium (github.com).

Stephen R. Walli is a principal program manager in the Azure Office of the CTO and adjunct faculty at Johns Hopkins University. He has worked with open source software in the product space for more than 30 years. Stephen has been a technical executive, a founder and consultant, a writer and author, a systems developer, a software construction geek, and a standards diplomat. He loves to build teams, and products that make customers ecstatic. Stephen has worked in the IT industry since 1980 as both customer and vendor and was a Distinguished Technologist at Hewlett Packard Enterprise. The development of the international software market remains an area of deep interest for Stephen. He has had the privilege of working with software start-ups in Finland, Spain, and China.
Dr. David McKee is the CTO and founder at Slingshot Simulations, who specialise in building digital twin solutions for cities to support their net zero journeys. He is also the co-chair of the OMG Digital Twin Consortium Capability and Technology working group and has led its work on defining the best practice reference architecture. David is also one of the leading members of the Responsible Computing initiative. He is a Royal Academy of Engineering enterprise fellow and an alumnus of the University of Oxford Saïd Business School Creative Destruction Lab. David is also a guest lecturer on digital twins at the universities of Leeds and Oxford.
Dr. Said Tabet is the Lead Technologist for IoT and Artificial Intelligence Strategy at Dell EMC. Said is a member of the Object Management Group Board of Directors and the principal Dell Technologies representative to the Industrial Internet Consortium and a member of its Steering Committee. Said is Chair of the INCITS Secure Cloud Computing Ad-Hoc Group and ISO Editor of the Cloud Security SLA project. Said is leading the Deep Learning testbed activity within the IIC. Dr. Tabet is also a member of the Cloud Security Alliance International Standardization Council. With over two and a half decades in the industry, Said has played the role of a technology advisor to a number of large multinational companies. Said contributes to technology innovation forums, guides startups through mentorship and coaching, is himself an entrepreneur, and is a supporter of industry efforts encouraging new technology adoption in multi-disciplinary environments. Said currently focuses on Artificial Intelligence, Industrial IoT, and Deep Learning, exploring challenges in smart facilities, manufacturing, utilities/energy, connected vehicles, IoT security, big data analytics, model-based engineering, and future technology innovation. Dr. Tabet is a regular speaker and panelist at industry conferences and international standards meetings, a co-founder of RuleML, an Artificial Intelligence and IoT expert, as well as author and editor of several book series and articles. Said worked closely with W3C founder Sir Tim Berners-Lee and with other globally recognized technology leaders.
Part III
The Digital Twin in Operation
Welcome to the Complex Systems Age: Digital Twins in Action Joseph J. Salvo
Abstract Digital Twins are an essential tool of Digital Transformation and are one of the enablers that allow us to deal with complex problems and complex systems. The technical foundations for Digital Twins start with deep domain knowledge in the area of application and fundamentally rely on information and operation technologies. A key ingredient is the unrelenting trend to recast stand-alone mechanical systems as digital analogs whose operations are data driven and managed through sophisticated digital control systems. The consequence is a remarkable explosion of performance improvements and new functionality. The enablers are the emergence of computing power that is inherently distributed and widely available, the virtualization of previously mechanical functions and processes through software, and the hyperconnectivity of organizations, the products and devices they provide, and consumers. What is new and exciting is the use of Digital Twins to understand the dynamics and flows in complex systems by creating a Metaverse where it is possible to explore actions and decisions faster than real time. The successful implementation strategy for Digital Twins relies just as much on human and organizational factors as it does on the technology. The important aspects of this are committed leadership; development of human capabilities, tools, and infrastructure within the enterprise; increased reliance on collaboration within a greater ecosystem; and the realignment of corporate structures to take advantage of new business models and new capabilities. The chapter illustrates these points in four different areas: supply chains, healthcare, manufacturing, and transportation. Because missteps are more than possible, the chapter also addresses the precautions to take in implementing Digital Twins. To be competitive and to thrive now and in
tomorrow's world, Digital Twins are an important concept for organizations to internalize and an imperative to put into practice now.

Keywords Business cases · Complex systems · Complexity · Digital Twins · Digital Twin use cases · Governance · Implementation · Leadership · Management · Manufacturing · Multiverse · Organization · Resources · Strategy · Supply chains · Systems
1 Introduction

1.1 Are Digital Twins a Fad?

Every few years a new idea comes along that quickly becomes the darling of marketing organizations. But how can we tell if there is real substance and staying power in a new concept? Can we see a logical reason why it might be an inevitable outcome of a fusion between technology, business affairs, and cultural acceptance? Although there are many attempts to go beyond the hype cycle and get past the peak of heightened expectations, we realize few initiatives will change the world like the internet and access to computing power have. Taking advantage of the commoditization of technology that was once rationed and expensive is a clear path to future opportunity. Digital Twins can now be easily created and utilized as a general feature of modern business management (Fig. 1).
1.2 A Computational Divide

The twentieth century witnessed the explosion of distributed computing power and its intimate integration into all aspects of our society. Initially, the introduction of computers was centralized and controlled by large IT departments, with raised white floors and security systems designed to isolate people from computing resources and data stores. Access was rationed and required layers of authorization and written documentation. Computing was a specialized discipline carried out behind thick glass partitions, with unusual languages and by eccentric practitioners.

The mass production of integrated chips and the realization of Moore's law changed all of that and resulted in a multi-trillion-dollar [1] commercial space created out of digital bits. As the effective cost of computing hardware plummeted and processing power skyrocketed on an exponential curve, the notion of "computing systems" necessarily evolved. Rather than highly controlled, centralized computing resources rationed for use by an elite few, the introduction of the personal computer democratized the access to and exploitation of business information across all the
Fig. 1 The interaction between the physical world and a Digital Twin involves much more than a digital version of the physical – it includes simultaneous interactions in many dimensions
individuals in an enterprise. Rather than a loosely coupled organization of departments with separate cultures and incentives, modern business became a fluid and dynamic system that had the potential to incorporate more and more real-time data and business intelligence into daily decisions. In contrast to the traditional organizational structures where upper management sat at the pinnacle of the information pyramid, modern organizations increasingly became flat [2] and efficient, with information and computing resources made available to all the individuals at the front lines on a need-to-know basis.

The business transformation was swift and relentless. Those that understood how the shift in thinking about computing resources and data would morph into ubiquitous and commoditized business tools quickly moved to reorganize their organizations to take advantage of the seismic economic and cultural shifts destined to follow. But the world had seen this kind of transformation before, in another era and physical form. The nineteenth century and the dawn of the industrial age made the mass production of sophisticated tooling accessible to the general population, and business responded by creating organizational structures with access to capital and moderately skilled labor, rather than relying on highly skilled tradesmen who were in short supply.

The industrial foundries of yesterday that shaped physical materials into a new world of standard products have been replaced in large part by the digital foundries and software houses of today. They shape silicon into processing power and information into bits, forming an ever-expanding universe of mass-customized software and hardware products with a natural place in virtual space. The quintessential
"organization man" [3], who once ran the 1950s industrial giants, was unceremoniously replaced by the networked software developers and chip engineers of Silicon Valley. What might a global transformation of the 21st century look like?
2 Leadership Matters: A Digital Divide No More

With pervasive hyperconnectivity between machines, computing devices, individuals, and organizations comes an exponential increase in the complexity of interactions and decisions. The democratization of computing pushed decision-making capabilities far down into the organization, and this yielded a rapid improvement in the speed and accuracy of decisions that relied on local information. The spreadsheet became a de facto decision support tool that may have lacked accuracy and controllership, but it was economical, readily available, and provided far superior outcomes to rule-of-thumb guidelines. Simple mathematical equations on laptops would now inform many types of decisions, from scheduling to purchasing and supply chain management. But now, with the potential of terabytes of real-time data at the fingertips of individuals, how can we take advantage of this unprecedented opportunity? Spreadsheets and traditional software programs for planning, scheduling, and optimization will remain, but they have no hope of providing a competitive advantage in the face of exponentially increasing data flows and business network complexity.

Once again, we can learn from strategies proven in a prior transformational era of the physical world. In 1957 the launch of Sputnik marked the beginning of more than a space race between competing superpowers. The follow-on aspirational goal announced by President Kennedy in 1962 [4] of sending a man to the moon in less than a decade was breathtaking and bold. Leadership was shaken and exhilarated at the same time. It would require assembling the most complex physical and organizational machinery ever created by mankind. The engineers were confident they could build the physical equipment and means necessary to design and launch the most powerful spacecraft ever, but they were less sure of how to manage the reality of complex systems, whether composed of physical systems in the real world or, more frightening yet, a combination of machines, computing hardware, and software systems that were willed to create a specific deterministic outcome in a provably non-deterministic world [5].
2.1 The Dawn of the Complex Systems Age: A New Paradigm for a Complex World

Ten years ago, complex system models combined with real-time data inputs were first given the moniker "Digital Twins" by NASA's John Vickers, but the concept was formally described much earlier in the context of product lifecycle management
by Michael Grieves [6]. In the face of computational solution spaces of unimaginable complexity, and facing a non-deterministic future unpredictable in the extreme, efforts were made to create twins of complex systems and environments that could be equipped with real-time data inputs to help foresee possible outcomes when critical control parameters were changed. This approach allows rapid analysis, exploration, and adjustment of system behavior at every level without putting the real-world instantiation at risk. It really paid off when Apollo 13 experienced a near-catastrophic explosion of an oxygen tank while on its mission to the moon. A complex Digital Twin on earth that combined a duplicate physical system with streaming real-time data allowed engineers and astronauts to brainstorm solutions that proved invaluable in getting the crew safely home [7] (Fig. 2).

However, a forgotten analog version of this notion of a "Digital Twin" was realized long before, when the Industrial Age first created electrical power grid networks of complexity beyond the existing capacity of mathematical calculation. Engineers often knew the limits of their complex system models and were keenly aware of how their assumptions change with time. Complex physical twins of electrical grids were built with small-scale components to allow real-time exploration of solutions when unpredicted states appeared due to the non-deterministic nature of time-specific equipment failures, operator errors, and weather events [8]. However, as the complexity of ad hoc built electrical grids expanded rapidly with time, and access to routine real-time information was lacking, analog computing itself was unable to manage the exponential growth in the problem. Digital computers and complex simulations have so far tried to step up to the challenge, but with the advent of millions of new electric vehicles and smart devices on the horizon,
Fig. 2 Long Island Lighting Co. engineers using a GE 480 cycle AC Network Analyzer April 25, 1941, at the GE Schenectady Works
improved access to real-time data inputs and modern Digital Twins will be critical to ensuring grid stability in the future.
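A toy sketch can make the pattern concrete: a twin is kept in sync with telemetry and then run forward faster than real time under a proposed control change, whether the asset is an oxygen tank or a grid-connected storage element. The numbers and the simple linear rate model below are invented purely for illustration and do not come from the examples above.

```python
from dataclasses import dataclass

@dataclass
class StorageTwin:
    """Toy digital twin of a depleting resource: a simple rate model kept
    in sync with telemetry, used to explore what-if scenarios offline."""
    level: float          # current measured level
    drain_rate: float     # estimated consumption per hour

    def ingest(self, measured_level: float, hours_elapsed: float) -> None:
        # Refresh the twin from a real-time reading and re-estimate the rate.
        if hours_elapsed > 0:
            self.drain_rate = (self.level - measured_level) / hours_elapsed
        self.level = measured_level

    def hours_remaining(self, proposed_rate_change: float = 0.0) -> float:
        # Run the model forward faster than real time under a proposed
        # change to the control parameter (e.g., powering down equipment).
        rate = max(self.drain_rate + proposed_rate_change, 1e-9)
        return self.level / rate

if __name__ == "__main__":
    twin = StorageTwin(level=100.0, drain_rate=2.0)
    twin.ingest(measured_level=90.0, hours_elapsed=4.0)   # telemetry update
    print("baseline hours remaining:", round(twin.hours_remaining(), 1))
    print("with power-down (-1.0/h):", round(twin.hours_remaining(-1.0), 1))
```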
3 Digital Twins: In the Coming Metaverse Reality

Digital Twins have already moved beyond the peak of the Gartner emergent technology hype cycle of 2018 [9]. But the question remains: how will they be implemented at scale, across industries and organizations? The deployment of capital is one of the key responsibilities of leaders and upper management, but how will decisions get made when faced with a high degree of uncertainty and a world that seems to evolve at a pace faster than ever before? The ability not only to predict future outcomes but to bias what path the future takes will undoubtedly be a strong focus for Digital Twin practitioners.

Digital Twins are much more than a tool for complex systems modelers and risk mitigation departments. Properly designed Digital Twin systems will fuse the past, present, and future(s) into a hybrid reality where time can be run forwards and backwards at will in virtual realities. In world(s) of infinite complexity, Digital Twins will enable a deeper understanding of a universe of potential outcomes and opportunities. It has recently been shown that models of individual biases can be used to create data streams that ultimately affect human moods, decisions, and beliefs [10]. Digital Twins that model such behavior may guard against unintended consequences from exposure to unbalanced information sources.

Over the last generation, the routine acquisition of knowledge in the world at large has shifted from print and broadcast media to a hybrid of physical and virtual constructs. People spend more time with sophisticated computer processors (cell phones) and real-time data streams than ever before. The digital native is at ease from morning to night blending a new reality that is synthesized from a near-infinite supply of information and competing models of reality. The business world will also take direct advantage of an ability to create new realities according to the careful design of optimization and risk mitigation strategies. The concept of a metaverse [11] is emerging whereby the physical world and virtual worlds are combined and explored in augmented reality. Digital Twins will likely extend the metaverse to all imaginable forms of dynamic business endeavors.
4 Digital Twin Implementations: Where to Start?

Large bureaucracies often struggle with execution speed and reaction time to external factors seemingly beyond their control. The recent pandemic has highlighted the brittleness of just-in-time manufacturing, six sigma, and lean manufacturing paradigms. The leaders of yesterday championed speed and simplicity; tomorrow's trend setters will harness system complexity in the wild. Complex systems are
inherently unpredictable and highly susceptible to damage from cascading failure modes. What starts out as a seemingly insignificant event can lead to a totally unforeseen and rapid system failure that unfolds along an exponential timeline. In the absence of a Digital Twin infrastructure, decision makers often resort to simplistic rules of thumb that can paralyze operations, or worse. We have seen that what ostensibly began as a few cases of an aggressive coronavirus rapidly turned into a global disaster affecting billions of people. Healthcare, education, manufacturing, entertainment, and government institutions now all struggle with a once-in-100-year viral pandemic event that had been foreseen for decades but was unstoppable by traditional damage control and analytic methodologies.

Harnessing complex systems at scale undoubtedly requires new approaches to be explored. We must embrace complexity because that is the essence of our world at large. Digital Twins and simulations can be used to mitigate the damage from a specific future not predicted or expected by central planning and beyond the capability of local groups to manage when left to their own devices in isolation. A huge opportunity exists to take advantage of our hyperconnected world and the unprecedented distribution of computing power.
5 The Organizational Resources Necessary for Implementing a Digital Twins Strategy

The true sustainable value of any business is not to be found in its isolated technology portfolio or detailed financial systems, but instead in the myriad of processes used to execute all the requirements of the business. The definition of these processes and their interactions is extremely complex and has been a true challenge for anyone who has tried to create, implement, or manage an "expert system" to "optimize" some of these activities. The failure points have been documented and include (1) the inability to acquire and manage the vast amount of data required to build a useful knowledge repository, and (2) the difficulty in producing an inference engine, which often relies on a vast amount of "common knowledge" plus specific domain knowledge mastered only by seasoned experts in the field. IBM's Watson system highlighted both the power of vast knowledge stores and the significant technical limitations and the necessity of interfacing with common knowledge [12].

Digital Twin technology is cross-functional and depends on clear and dedicated management support. The Human Resource function in many businesses has evolved from a service organization handling the straightforward administration of tasks such as payroll and benefit plans (which have by and large been automated through self-service software systems) to a more strategic function that focuses on business culture, talent development, and corporate "initiatives". However, the highly technical nature of the global business environment has rendered the generalist HR professional of yesterday ineffective in building complex cross-domain teams that require talent with business knowledge and technical experience. Recent
analysis of persistent and extremely poor HR performance ratings across business domains highlights the mismatch between skills and organizational needs in an era of ubiquitous connectivity, data and mobility [13]. Today’s business needs “knowledge resource managers”. The formulation of a successful Digital Twin strategy and implementation will require a more sophisticated executive that interacts with an HR team that is more like a business consultancy and advocate for the development and acquisition of many types of knowledge resources. Digital Twins will augment the existing knowledge base that enables a modern business to function.
5.1 Clear Business Benefits Lead to Digital Twin Success

Executive support of Digital Twin technology must be clear and sustained over multiple management cycles, which are increasingly seen to be shorter than the time needed to create, implement and benefit from a Digital Twin strategy. Compelling business benefits must be defined, understood in great detail and communicated prior to any new Digital Twin activity. The best way to ensure the future success of Digital Twins is to have a clear and up-to-date plan that defines costs and benefits in measurable terms. Cost reductions, increased production, higher quality, fewer warranty claims, better service outcomes and more successful product launches are just a few of the targets that can be identified and quantified. As the complexity of digital twins increases to describe a system or a system-of-systems network, an organization can begin to balance short-term profitability with long-term robustness and vitality. At the heart of every digital twin is a group of definitions that relate all potential inputs and outputs to performance metrics. These definitions must be created by domain experts with experience and technical depth. A clear definition of desired business outcomes must then be mapped from the top down to connect with the digital twin system interface. Because of the absolute requirement for judgement and real-time trade-off analysis, human decision makers must ultimately manage the operation and own the responsibility for the digital twin infrastructure. No amount of Digital Twin technology, complex simulations and Monte Carlo analysis can substitute for a management team that understands and respects the underlying technologies in question. Complex systems are by definition highly technical in nature and must be managed by technically competent experts. Executives may set strategy and business targets and allocate capital, while digital twins are there to better describe the present and possible future outcomes, not to insulate anyone from personal responsibility. Understanding the risks and rewards associated with specific courses of action in a complex non-deterministic world requires hands-on, technically competent management (Fig. 3).
Fig. 3 The most complex flying machine ever created by mankind, consisting of over 10 million components, began operations on April 12th, 1981. On January 28, 1986, on its 10th flight and the 25th for the Shuttle fleet, the Space Shuttle Challenger suffered a disaster caused by an O-ring failure. Despite sophisticated modelling systems, the incident illustrated how complex systems cannot be protected from inappropriate management structures and inadequate decision hierarchies. (Courtesy NASA)
5.2 How to Plan for a Successful Digital Twin Strategy

Digital Twins are a unifying concept that can help analyze the past, manage the present and plan for the future. To ensure a higher probability of success, upper management must embrace the power of Digital Twins to guide the operations that define every enterprise, including design, production, supply chain, service, and customer engagement. The business infrastructure must allow for data to be acquired in near real time to refresh the inputs of models that describe the key aspects of business activities and product performance. Just as "top-down" management commitment is necessary for a successful Digital Twin strategy and framework, a "bottom-up" approach is required to create and maintain the Digital Twin technology. Pervasive sensors and data harvesting systems must automatically populate Digital Twins and update the performance of products, services, and operations on a regular basis. Inadequate IT infrastructure for discrete manufacturing facilities can severely hamper the implementation of digital twin methodology. A clear distinction can be made between Digital Twins that represent physical products and those that represent the behaviors of complex processes. More and more emphasis is being placed on understanding the complex and emergent behavior of systems-of-systems that have come to the forefront of business competitiveness as globalization links suppliers, manufacturers, and customers on a truly global scale. Lean manufacturing and just-in-time manufacturing principles that optimized
material flows and order placement have failed to perform well during non-equilibrium stress events that have pushed many enterprises to the brink of collapse. Digital Twins have the capacity to allow for rapid analysis of critical "what if" scenarios and can take advantage of stochastic models that probe the behavior of complex systems by exploring millions of potential decision trees (Fig. 4).

Fig. 4 Low-level Digital Twins model individual components of complex systems, drawing on data sources such as cash flow, customers, manufacturing, materials, personnel, product design, products, product testing, repair, returns, revenue, sensors, services, standards, suppliers, and transportation. Multi-dimensional networks of Digital Twins can be used to simulate complex and emergent behaviors that evolve through time and are not obvious from any local vantage point; as system complexity grows, the number of twins required to describe the system rises from single Digital Twins through compound and complex system twins toward the limits of classical computation and an n-dimensional data multiverse
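The flavor of the stochastic "what if" analysis described above can be conveyed with a minimal sketch. The weekly demand figures, disruption probability, and replenishment policy below are invented assumptions for illustration only, not data from any real enterprise; a production Digital Twin would draw these inputs from live data streams rather than hard-coded constants.

```python
import random

def simulate_quarter(safety_stock, disruption_prob=0.05, weeks=13):
    """Simulate one 13-week quarter for a single part.

    Weekly demand, supplier disruptions, and replenishment are illustrative
    assumptions; returns True if a stockout occurs in this scenario."""
    rng = random.Random()
    on_hand = safety_stock + 100              # assumed opening inventory
    for _ in range(weeks):
        demand = rng.gauss(100, 20)           # assumed weekly demand distribution
        disrupted = rng.random() < disruption_prob
        replenishment = 0 if disrupted else 100   # supplier ships nominal volume unless disrupted
        on_hand += replenishment - demand
        if on_hand < 0:
            return True
    return False

def stockout_probability(safety_stock, trials=10_000):
    """Monte Carlo estimate of the probability of at least one stockout."""
    hits = sum(simulate_quarter(safety_stock) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    for stock in (0, 100, 200, 400):
        print(f"safety stock {stock:>4}: P(stockout) ~ {stockout_probability(stock):.3f}")
```

Repeating thousands of such randomized runs is what lets a decision maker compare the resilience of competing policies before a non-equilibrium event arrives, rather than after.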
5.3 Responsive Supply Chains for the Real World

Supply chains are a target-rich area for Digital Twin implementations [14]. The globalization phenomenon of the past few decades resulted in networks of suppliers and consumers connected by standardized containerized cargo shipping (Fig. 5). Physical distances are often extreme, and timelines and planning systems must necessarily look months into the future. Unfortunately, real-time information is often in short supply and unreliable at best. Those industries that fused high-quality real-time data streams and clear communications with flexible operations greatly expanded their operations at the expense of traditional competitors. For example, Walmart and Amazon have for years made huge investments in real-time supply chain management, where Digital Twin concepts can inform key decisions. These systems facilitated the delivery of unprecedented amounts of goods and services during the pandemic. Beginning in 2016, all of Walmart's trailer fleet in North America was equipped with telematics devices [15] that relay real-time information and status so that dynamic scheduling systems can be utilized. Capital allocation toward this type of goal can sometimes be viewed as controversial and may lead to profit
deferral in the short term, but such management teams prepared for the future with the resilience of a Digital Twin philosophy. In contrast, we see that many traditional retailers are now teetering near collapse.

Fig. 5 Supply chains are global and depend on multi-modal transportation systems and a hierarchy of distribution facilities. They should be resilient, reliable, efficient, and transparent
5.4 Healthcare Stretched to the Breaking Point

In the face of relentless cost increases over the past several decades, healthcare providers and insurers turned to traditional financial management methods to reduce costs and eliminate redundant operations. Due to necessarily strict privacy rules and the perceived high cost of technology upgrades, many legacy institutions and service providers have poor real-time information access and/or cross-platform information systems. Communication in a crisis is critical, and the need to model complex systems that were evolving minute by minute was beyond the capacity of traditional scheduling and procurement departments. Resources of all kinds were difficult or impossible to find and schedule, while ad hoc systems were necessarily put into place. The introduction of Digital Twin technology to create a flexible and visible real-time decision and control system in health care is a rich area that needs significant future investment. For example, medical resources can be dynamically monitored and modelled in real time to drastically reduce consumable inventories and to track the small physical assets that are often misplaced.
5.5 Manufacturing in the Complex Systems Age

Manufacturing capacity built the foundations of many economies of the past and present. However, as globalization shifted its center of gravity to the east, an increasingly brittle global system-of-systems was created. Undoubtedly the percentage of GDP contributed by manufacturing exhibits a steady decline [16], but any disruption in critical supplies (microprocessors, raw materials, health equipment, etc.) nonetheless leads to system disturbances that are so complex and far-reaching that the political ramifications are substantial and cannot be ignored. Digital Twin methodology can be used to make a "plan for every part" [17], where materials, manufacturing capacity, transportation systems and demand scenarios can be monitored in real time and tuned for resiliency in uncertain times. Technology infrastructure needs to penetrate the depths of manufacturing organizations with state-of-the-art security and authentication processes. The data needed to assemble Digital Twins also provides the information necessary to establish unprecedented levels of product authenticity, provenance, and proof of ownership. In a world where data is cheap and identities can be fabricated, new trust metrics will be a requirement for both success and survival (Fig. 6).
Fig. 6 Today's complex discrete manufacturing is the integration of modular components that are built and tested in facilities around the world and assembled in separate facilities
5.6 Transportation in a Hyperconnected World

It is natural to think of transportation in the simple context of moving real people, finished goods and raw materials. The railroads, highway systems, trucking and air carriers have sustained the growth of a society that is more globally aware than ever before. We are on the edge of an era when artificial intelligence will begin to augment the responsibilities and activities of human participants. For example, post-pandemic, the need for business travel "as usual" will be routinely challenged [18], and the arrival of AI-enhanced transportation systems will undoubtedly interact with the Digital Twins of their human customers and/or counterparts. Detailed customer profiles will be (and are being) assembled and grouped by online retailers and social media mavens. These travel and preference profiles will become an integral part of the Digital Twin metaverse. Where to go, when to go, and how to go, whether in traditional reality or in a virtual or hybrid space, will become a subject of debate and economic analysis. Scheduling and route selection in transportation is a uniquely difficult problem, traditionally used as a test of the capabilities of new digital simulation systems. Solving such a problem with static brute-force methods is often intractable at this time. New systems of Digital Twins that optimize their behavior as they try to maximize their own efficiency under changing traffic conditions are already in place. Digital Twins supported by emerging quantum computing resources promise to improve the execution of this class of problems even further [19].
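To see why brute-force scheduling becomes intractable so quickly, and why heuristic, continuously re-optimizing twins are attractive, the toy sketch below contrasts exhaustive enumeration with a greedy nearest-neighbour heuristic on a five-stop route. The coordinates are invented for illustration; a real system would use live travel times rather than straight-line distances.

```python
import itertools
import math

# Illustrative stop coordinates (assumed); exhaustive search is O(n!) in the number of stops.
STOPS = {"Depot": (0, 0), "A": (2, 7), "B": (5, 3), "C": (9, 6), "D": (4, 9)}

def dist(a, b):
    (x1, y1), (x2, y2) = STOPS[a], STOPS[b]
    return math.hypot(x2 - x1, y2 - y1)

def route_length(route):
    legs = zip(route, route[1:] + route[:1])          # close the loop back to the start
    return sum(dist(a, b) for a, b in legs)

def brute_force():
    """Exact answer by enumerating every ordering of the non-depot stops."""
    others = [s for s in STOPS if s != "Depot"]
    best = min(itertools.permutations(others), key=lambda p: route_length(["Depot", *p]))
    return ["Depot", *best]

def nearest_neighbour():
    """Greedy heuristic: always drive to the closest unvisited stop."""
    route, remaining = ["Depot"], set(STOPS) - {"Depot"}
    while remaining:
        nxt = min(remaining, key=lambda s: dist(route[-1], s))
        route.append(nxt)
        remaining.remove(nxt)
    return route

if __name__ == "__main__":
    for name, route in (("exact", brute_force()), ("greedy", nearest_neighbour())):
        print(f"{name:>6}: {' -> '.join(route)}  length={route_length(route):.1f}")
```

Five stops can still be enumerated exactly; a few dozen cannot, which is why fleet-scale systems lean on heuristics, stochastic search, and, prospectively, quantum acceleration.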
6 Privacy, Digital Twins, and Security

6.1 What Could Happen?

The famous bank robber Willie Sutton was once asked by a reporter why he robbed banks. His simple reply: "that's where the money is" [20]. In a hyper-networked future filled with valuable Digital Twins, the concept and protection of identity will be paramount. Nothing else will be more important and valuable than the ability to mathematically prove the identity, provenance, and performance history of everything. The reputations of individuals, corporations and even classes of assets will be at stake. But who will control and regulate this capability? Digital Twins are necessarily connected to the real world and can be designed to interact with people, things, and systems in a prescriptive, informative, or flexible manner. Just as trust is at the foundation of human interaction, we will need a provable trust fabric for the distribution of Digital Twins in our business systems and the world at large. As a Digital Twin of your home or business is being created by a robotic custodian, would you care about the accuracy of the outcome, where it is stored and modified, and what other Digital Twins may interact with it [21]? But of course, you must. As
the Digital Twins of today and tomorrow are infused with machine learning, they will become more and more valuable while becoming targets of theft, disruption, or sabotage. The reputation of Digital Twins will be as important as their capabilities. Can you really trust a Digital Twin in your enterprise when it is making real-time decisions more complicated than any single human being can comprehend? Has it been certified? Unlike human experts, Digital Twins could have a lifespan of unlimited extent, and their reputations will grow in value over time and need protection. Today, safety protocols and software standards for mission-critical systems that control assets such as airplanes and nuclear power plants require extensive formal-methods analysis [22] to explore the boundaries of foreseen possible scenarios. But Digital Twins, when deployed at scale, will be even more important to understand in detail because they will inhabit the realm of the non-deterministic future. Newly developed blockchain technologies may help provide iron-clad methods to track the creation and custody of Digital Twins through time, but methods to provide the quality and assurance ratings associated with the nature of their intent and performance will be much more complicated.
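One way such a custody trail could be built is sketched below: each event in a Digital Twin's history is chained to its predecessor by a cryptographic hash, so any later tampering with an earlier record is detectable. The record fields and the use of SHA-256 are illustrative choices only, not a description of any particular blockchain platform or commercial product.

```python
import hashlib
import json
import time

def _digest(record: dict) -> str:
    """Deterministic SHA-256 digest of a custody record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_event(chain: list, twin_id: str, action: str, actor: str) -> None:
    """Append a custody event linked to the hash of the previous event."""
    record = {
        "twin_id": twin_id,          # illustrative field names, not a standard schema
        "action": action,
        "actor": actor,
        "timestamp": time.time(),
        "prev_hash": chain[-1]["hash"] if chain else None,
    }
    record["hash"] = _digest({k: v for k, v in record.items() if k != "hash"})
    chain.append(record)

def verify(chain: list) -> bool:
    """Recompute every hash and link; any edit to an earlier record breaks the chain."""
    for i, record in enumerate(chain):
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["hash"] != _digest(body):
            return False
        if i > 0 and record["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

if __name__ == "__main__":
    chain = []
    append_event(chain, "pump-07-twin", "created", "robotic custodian")
    append_event(chain, "pump-07-twin", "model updated", "ML retraining service")
    print("intact:", verify(chain))
    chain[0]["actor"] = "unknown"     # simulate tampering with the earliest record
    print("after tampering:", verify(chain))
```

A distributed ledger adds replication and consensus on top of this basic idea; the harder, still-open problem the text points to is rating the quality and intent of the twin itself, not merely its custody.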
7 Digital Twins and the Multiverse: A Word of Caution

Traditionally, a business focused much energy on responding to the customer faster and more efficiently than the competition. Digital Twins are well positioned to deliver on the promise of customer alignment and care, but they also offer an opportunity to influence the trajectory of the future to align with business operations and customer satisfaction. A multiverse describes a theoretically infinite set of possible futures that exist simultaneously in different dimensions (Fig. 7). Does a business passively respond to the unwinding of future events, or can it help bias what the future may bring? Marketing functions have traditionally served to create new and expanding markets for goods and services. Digital Twins of complex systems can help define a reality that is not obvious or likely in the eyes of the competition. Proactive actions can then be taken to "nudge" present and future events into alignment [23]. The advent of pervasive social media and advanced computing platforms at the edge of high-speed networks has revealed the nascent advantages and risks of these technologies and exchanges. We have all seen how powerful forces can be summoned at will to meet critical social objectives (GoFundMe), seemingly trivial behaviors (the Pokémon craze), and even shocking political ends. The implementation of automated Digital Twins that describe and potentially manage many aspects of daily life will require careful regulation and management that engages a broad representative base of stakeholders. The evolution of Environmental, Social, and Governance ("ESG") metrics, which will impact trillions of dollars of future investments [24], may be seen as a
precursor to potential regulatory frameworks that will blunt the unintended consequences of complex systems behavior. The concept of the decentralized autonomous organization (DAO) [25], whereby a business can be created with no central office and no dedicated employees while operating according to software algorithms and Digital Twins, points to a potential future very unlike any we have known.

Fig. 7 The multiverse provides individual realities that businesses can address, tailoring products and services to individual needs
8 Getting Started: There Is No Time to Waste

Managing complex behavior with Digital Twins requires access to real-time data and computing models. We have seen that Digital Twins can be used to help manage processes, such as those built from physical inventories and suppliers that lead to supply chain decisions, as well as separate physical systems built from individual components or even systems-of-systems. Most organizations already have the technical people required to create Digital Twins if they are given access to real-time data and the computing resources necessary to simulate the system of interest. What is often lacking is the determination by leadership and management to build in resiliency and flexibility at the cost of short-term profitability. In the end, it always becomes a question of capital allocation and management philosophy. Digital Twins are set to become more and more a part of our world even as they seem to simultaneously retreat behind the scenes and merge with the general knowledge fabric supporting our businesses and society at large. Complex systems will require complex Digital Twins to keep them in check, while new supervisory technologies will be needed to keep the Digital Twins in check. There is no time to waste. They have already read this paper [26].
References

1. Alcantara, C., Schaul, K., De Vynck, G., & Albergotti, R. (2021, April 21). How Big Tech got so big: Hundreds of acquisitions. Washington Post.
2. Ingram, D. (2019, February 5). The advantages of flat organizational structure. Chron.com. https://smallbusiness.chron.com/advantages-flat-organizational-structure-3797.html
3. Whyte, W. H. (2002). The Organization Man (Revised ed.). University of Pennsylvania Press.
4. John F. Kennedy Moon Speech – Rice Stadium. https://er.jsc.nasa.gov/seh/ricetalk.htm
5. The challenges of testing in a non-deterministic world. https://insights.sei.cmu.edu/blog/the-challenges-of-testing-in-a-non-deterministic-world/
6. Origins of the Digital Twin concept. https://www.researchgate.net/publication/307509727_Origins_of_the_Digital_Twin_Concept
7. https://en.wikipedia.org/wiki/Apollo_13
8. Hughes, T. P. (1993). Networks of power: Electrification in Western society, 1880–1930. JHU Press.
9. https://www.gartner.com/en/newsroom/press-releases/2018-08-20-gartner-identifies-five-emerging-technology-trends-that-will-blur-the-lines-between-human-and-machine
10. https://www.theguardian.com/technology/2014/jun/29/facebook-users-emotions-news-feeds
11. https://en.wikipedia.org/wiki/Metaverse
12. https://www.huffpost.com/entry/watson-final-jeopardy_n_823795
13. Mazor, A. H., Stephan, M., Walsh, B., Schmahl, H., & Valenzuela, J. (2015). Reinventing HR. Deloitte Insights. https://www2.deloitte.com/us/en/insights/focus/human-capital-trends/2015/reinventing-hr-human-resources-human-capital-trends-2015.html
14. The brave new worlds of global supply chain optimization. (2017, June). The Manufacturing Leadership Journal, 56–64.
15. Wal-Mart picks GE's telematics for trailer fleet. (2006, March). https://www.truckinginfo.com/97779/wal-mart-picks-ges-telematics-for-trailer-fleet – "The Trailer Fleet Services division of GE Equipment Services announced it will provide Wal-Mart Stores with trailer tracking technology for its fleet of 46,000 over-the-road trailers."
16. https://data.worldbank.org/indicator/NV.IND.MANF.ZS
17. Kerber, B., & Dreckshage, B. J. (2011). Lean Supply Chain Management Essentials: A Framework for Materials Managers (1st ed.). CRC Press.
18. https://www.economist.com/business/2020/08/01/air-travels-sudden-collapse-will-reshape-a-trillion-dollar-industry
19. Moylett, D. J., Linden, N., & Montanaro, A. (2017, March 22). Quantum speedup of the traveling-salesman problem for bounded-degree graphs. Physical Review A, 95, 032323.
20. Yoder, R. M. (1951, January 20). Someday they'll get Slick Willie Sutton. The Saturday Evening Post, 223(30).
21. https://www.cnet.com/home/smart-home/irobot-wants-to-share-roomba-generated-maps-of-your-home/
22. O'Regan, G. (2017). Concise Guide to Formal Methods: Theory, Fundamentals and Industry Applications. Springer Verlag.
23. Thaler, R. H., & Sunstein, C. R. (2021). Nudge: The Final Edition. Penguin Books.
24. Fink, L. The power of capitalism. https://www.blackrock.com/corporate/investor-relations/larry-fink-ceo-letter
25. https://ethereum.org/en/dao/
26. https://blogs.microsoft.com/ai/microsoft-creates-ai-can-read-document-answer-questions-well-person/
Dr. Joseph J. Salvo was the Founder and GE Director of the Industrial Internet Consortium (IIC). He was responsible for setting the overall direction and prioritization of IIC activities for the organization, which currently has members from over 25 countries and 250 organizations. He also managed the Complex Systems Engineering Laboratory at GE Global Research and led an AI laboratory at the GE China Tech Center in Shanghai. For the past 20 years, Dr. Salvo and his laboratories developed a series of large-scale internet-based sensing arrays to manage and oversee business systems on a global scale and deliver value-added services. Digital manufacturing, supply chain management, telematics services, and energy management were among the targets for continual improvement through the judicious use of big data and AI algorithms. Commercial releases of complex decision platforms such as GE Veriwise™, GE Railwise, Global Vendor Managed Inventory, Ener.GE™, and E-Materials Management delivered near real-time customer value through system transparency and knowledge-based computational algorithms. More than 1 trillion dollars of assets and goods have been tracked and system-optimized with technology that is deployed on four continents. Blockchain, 5G communications, digital manufacturing, personal privacy and quantum-secure data transfer are among his latest interests.
Physics in a Digital Twin World

Jason Rios and Nathan Bolander
Abstract This chapter explores the use of physics-based modeling & simulation technologies as part of Digital Twin solutions. It describes how an understanding of physics-based phenomena at the microscale can be the basis for avoiding failures at a systems level, not only to improve safety, but also to reduce operating costs dramatically. The use of physics in the Digital Twin ecosystem enables the capability to see beyond the limitations of sensors and extends the ability to reasonably predict the future health state of complex mechanical systems. In mechanical systems under operation, components break when materials fail. Fortunately, the progression to failure can be modeled and anticipated using a physics-based approach to analyze the accumulation of damage due to microstructure-level stresses and the eventual initiation of component failure. This capability is critical to forecasting the ability of a mechanical design to withstand operational loading conditions and for predicting component stresses that will ultimately determine the remaining useful life of the system. It is also important for establishing inspection and remediation procedures to avoid failures. Integration of physics-based modeling offers a complementary prognostics capability that, combined with advanced sensor data analytics, can be used to predict the probability of failure of key components at any point in the lifecycle – even before failure initiation. The chapter further describes a hierarchical multi-level framework that prescribes how detailed physics-based modeling can be extended to account for behaviors at a systems level. The approach is illustrated through a use case involving aerospace – specifically, helicopter drive systems – and addresses how a multiscale framework approach that combines physics and data science can maximize health state awareness in specific applications.

Keywords Component failure · Complex mechanical systems · Damage analysis · Data science · Digital Twins · Digital Twins for aerospace · Digital Twin ecosystem · Failure prediction · Health state awareness · Helicopter use case · Initiation of failure · Inspection · Material failure · Material stresses · Mechanical systems · Microstructure stresses · Physics-based modeling · Physics-based phenomena · Prognostics · Remediation · Sensor limitations · Sensors · Simulation · System health

J. Rios (*) · N. Bolander
Sentient Science Corporation, Buffalo, NY, USA
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_21
1 Introduction

On 26 April 2016, thirteen people were killed in a helicopter crash off the coast of Norway due to a mechanical failure that was both catastrophic and impossible to detect. According to the final report from the investigation:

…the accident was a result of a fatigue fracture in one of the eight second stage planet gears in the epicyclic module of the main rotor gearbox (MGB). The fatigue fracture initiated from a surface micro-pit in the upper outer race of the bearing, propagating subsurface while producing a limited quantity of particles from spalling, before turning towards the gear teeth and fracturing the rim of the gear without being detected. The investigation has shown that the combination of material properties, surface treatment, design, operational loading environment and debris gave rise to a failure mode which was not previously anticipated or assessed. There are no connections between the crew handling and the accident. Nor is there any evidence indicating that maintenance actions by the helicopter operator have contributed to this accident. The failure developed in a manner which was unlikely to be detected by the maintenance procedures and the monitoring systems fitted to LN-OJF at the time of the accident [1].
Let that last paragraph sink in for a minute. This accident involved a failure mode that had never occurred before and that was unavoidable by the flight crew, unrelated to maintenance actions, and completely undetectable by the helicopter's embedded safety monitoring systems, which are specifically designed to uncover such emerging failures. The incident highlights the importance of developing advanced capabilities to predict the future health state of critical mechanical systems. One such approach involves the application of digital twin technologies to analyze design and usage data using sophisticated techniques to forecast the probability of failures in highly stressed components. This chapter explores the use of physics-based modeling & simulation technologies as part of digital twin solutions, to not only improve safety but also dramatically reduce operating costs. The inclusion of physics into the digital twin ecosystem enables a capability to look beyond the limitations of sensors and extend visibility into the future health state of mechanical systems. This offers a complementary prognostics capability that, combined with advanced data analytics, can be used to predict the probability of failure of key components at any point in the lifecycle, even before failure initiation. Later in this chapter, we will look at a specific use case involving aerospace – specifically, helicopter drive systems – and discuss how a multiscale framework approach that combines physics and data science can maximize health state awareness in certain applications.
2 The Importance of "Why"

Prognostics can generally be defined as a field of technical expertise focused on predicting the time at which a system or a component will no longer perform its intended function. From that perspective, the objective of prognostics is to answer the question of "when". To be sure, the timing of failures is very important – but oftentimes the more relevant question is "why". Understanding the root cause of something undesirable (like a component failure) can be the key to delaying or avoiding it entirely. To illustrate this concept, let's look at an example that has nothing to do with industrial applications:

A 30-year-old man with a family history of heart disease has committed himself to avoiding the heart attack that may be looming in his future. He uses the latest wearable technologies that enable him to closely monitor the vital signs that are often early indicators of heart disease (such as pulse, blood pressure, O2 levels, etc.). He also gets annual physical exams and blood work – to check his cholesterol and other relevant indicators of heart health. And… he could realistically do this for the next 25 years and not learn anything more about his likelihood of having a heart attack – because he is looking for symptoms. He is only thinking about 'when', and there is no indication whether he is getting close to that fateful day – until his body starts to show signs that some of the risk factors are emerging. By then, it might be too late to act. But… what if he visits a cardiologist, learns more about the factors that contribute to heart disease, and adjusts his behaviors to affect them? He exercises, adopts a low sodium diet, avoids smoking/drinking/drugs, and adopts other healthy lifestyle habits. In this way, he is directly addressing the most likely causes that can affect his probability of having a heart attack – and he is delaying, or perhaps altogether avoiding, the very event that he is most concerned about. He has focused on the 'why', and that has allowed him to take preventive action long before he even entered the timeframe where a heart attack would presumably have been more likely.
In this example, the ability to understand ‘why’ has provided the insight necessary to act well before any symptoms begin to develop. That is the same principle that operators and maintainers should adopt as they try to not only figure out ‘when’ their systems will fail, but also to find ways to mitigate the risk of failure through preventive action. This is the power of understanding ‘why’ – it provides the time and opportunity to act long before issues emerge that will affect the future health of a component or system. One of the most powerful tools available to understand both the ‘when’ and the ‘why’ is the Digital Twin.
3 What Is a Digital Twin?

The concept of the digital twin has been evolving for years, since the term was first coined by Dr. Michael Grieves in the early 2000s. There seem to be almost as many opinions about the explicit definition of the term 'digital twin' as there are
companies developing technologies to support it. For this discussion, the following definition seems most appropriate: A digital twin is a virtual instance of a physical system (twin) that is continually updated with the latter’s performance, maintenance, and health status data throughout the physical system’s life cycle [2].
The significance of digital twins in a prognostic environment is that they unlock the ability to replicate, analyze, and predict the performance and behaviors of physical assets in a virtual environment. For those charged with sustaining and maintaining complex high-value assets, this capability can be invaluable.
4 The Value of the Digital Twin

Digital twins exist to predict the future. For mechanical systems, this usually means the ability to forecast the performance and/or life expectancy of critical components. There has been enormous interest over the past 30–40 years in developing technologies that enable advanced predictive and prognostic capabilities in support of that most elusive of objectives: predicting remaining useful life (RUL).

Remaining Useful Life: The period extending from the current time to the point at which a component loses the ability to perform its intended function.

In the industrial world, few pieces of information carry as much value as an accurate prediction of the RUL of an asset. A 2011 white paper on the subject captures the role and importance of prognostics: "It is the task of prognostics to estimate the time that it takes from the current time to the failed state, conditional on anticipated future usage. This would give operators access to information that has significant implications on system safety or cost of operations." [3] If a wind farm operator knows when a specific turbine will break, they can ensure that parts and maintenance resources are available at precisely the right time to get maximum usage from the asset before taking it offline for repairs. If an aviation service provider knows when a helicopter transmission will fail, they can avoid the risk to life and property that would result from an unexpected meltdown of that critical system. As described in a 2020 position paper published by the American Institute of Aeronautics and Astronautics (AIAA): "The uncertainties in the current status or the external variables affecting the physical asset result in propagating uncertainties into its current or future performance. The Digital Twin harnesses information collected throughout the lifecycle to update and better inform the analysis and decision-making process of the physical asset." [4] It is the aggregated uncertainty in RUL that drives enormous investment in inventory and preventive maintenance, as operators try to mitigate the risk of unforeseen
failures. In theory, perfect knowledge of when to expect the end of the useful life of an asset would remove that uncertainty for the operators that maintain those systems. Even a modest reduction in RUL uncertainty can impact the financial performance of the sustainment enterprise by shrinking the budget required to ensure safe and reliable operations. This is what drives industries like wind energy and aerospace to spend so much on R&D, in pursuit of a better mousetrap to predict remaining useful life. Historically, much of the focus has been on analyzing the vast amounts of information that flow through enterprise-level data streams and using advanced algorithms to discover trends related to previously observed failure modes. This has proven particularly effective in applications with robust, reliable, and relevant data streams associated with assets that are operated in a mostly consistent manner – where historical data can provide useful indicators of future state. Industries that operate high-value assets that are susceptible to mechanical failures – such as aerospace – have been intensely focused on such technologies. While these investments have resulted in significant data collection and analytical capabilities that have dramatically increased the ability to respond to emerging mechanical failures, there are limitations on the potential of such solutions to predict RUL. The sensor-based infrastructure that generates the system data that drive the predictive capabilities of big data solutions is only capable of observing the current state of the system – which is of limited usefulness during the first 80–90% of the life of a component, when there are few (if any) indications of pending failure. This highlights the importance of incorporating capabilities designed to assess the likelihood of future failures based on usage and maintenance history. One often underutilized technology that is especially well suited for this purpose is physics-based modeling and simulation. Mechanical failures are physics-based phenomena, and the ability to fully understand their root cause is the key to predicting or avoiding them. One very effective way to gain that insight would be through a physics-based approach that fully explores what will cause the breakdown of materials that defines a particular failure mode. This methodology leverages multiple enabling technologies, but the ability to predict a failure mechanism ultimately depends on an understanding of the physics of the system and its underlying materials. That is the key to remaining useful life, and that is the power of 'why'.
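As a concrete, if simplified, illustration of how RUL uncertainty translates into operating decisions, the sketch below computes a conservative replacement interval (the 10th-percentile, or "B10", life) from an assumed Weibull life distribution and shows how a tighter distribution extends the interval that can safely be run between replacements. The shape and scale values are invented for illustration and are not drawn from any real fleet.

```python
import math

def weibull_percentile(p: float, shape: float, scale: float) -> float:
    """Time by which a fraction p of components is expected to have failed,
    for a Weibull life distribution with CDF = 1 - exp(-(t/scale)**shape)."""
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

if __name__ == "__main__":
    # Both cases share the same assumed characteristic life (scale, in hours);
    # the second assumes reduced uncertainty, i.e. a steeper Weibull shape.
    scale_hours = 10_000.0
    cases = {"wide uncertainty (shape=1.5)": 1.5, "reduced uncertainty (shape=4.0)": 4.0}
    for label, shape in cases.items():
        b10 = weibull_percentile(0.10, shape, scale_hours)
        print(f"{label}: B10 life ~ {b10:,.0f} h -> conservative replacement interval")
```

With these illustrative numbers, tightening the distribution more than doubles the interval a conservative operator can plan around, even though the characteristic life is unchanged; that difference is inventory, downtime, and budget.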
5 The Role of Physics

There is a simple concept that governs the health of mechanical systems: Components break when their materials fail. On the surface this seems so basic that it would be hardly worth mentioning, but the ability of a component's material microstructure to withstand stress over time is an essential consideration when trying to predict the future health state of anything mechanical. Even with the best monitoring technology, it can be incredibly difficult to forecast the demise of any part – whether it is the chain on a bicycle or the transmission of a sports car.
Fortunately, this behavior can be modeled using a physics-based approach to analyze the accumulation of damage due to microstructure-level stresses and the initiation of component failure. This capability is critical to forecasting the ability of a mechanical design to withstand operational loading conditions and predicting the component stresses that will ultimately determine the remaining useful life of the system. How does a physics-based approach help bridge the gap to remaining useful life? With a detailed knowledge of the physics of a system, including microstructure-level insights into the underlying materials for key components, it is possible to simulate a wide range of future loading conditions and forecast the impact of this usage on life expectancy. For mechanical systems that are exhibiting little or no indication of failure in the current state (as observed by sensors), the ability to predict the accumulation of damage due to future operations enables a more realistic forecast of remaining useful life. It is important to recognize that defects below a certain size are unobservable to sensors, and remaining useful life predictions that rely on sensor data are unable to recognize health state shifts during that period of operation. Sensor-based solutions are inherently limited by a 'detectability threshold', which represents the earliest point in time at which a sensor-based system can observe a defect or change in health state. It is, therefore, useful to incorporate a capability such as physics-based prognostics that can provide predictive insights into the impact of operational loading conditions on future health state prior to actual initiation of detectable damage.

Detectability Threshold: The earliest point in time at which a sensor-based system can observe a defect or change in health state.

Figure 1 features a graphic depiction of the concept of the detectability threshold. Conventional wisdom suggests that for most mechanical components, such as gears and bearings, there is roughly 10–20% useful life remaining after a defect is detected that is significant enough to generate an observable vibration or heat signature. If that is truly the case, roughly 80–90% of the component life is spent prior to failure initiation. Once the earliest material defects (microcracks, pits, etc.) begin to develop, there is still a period during which the failure is not yet able to be detected with existing sensor technology. Eventually the failure progresses to the point that it changes the system behavior in a way that can be sensed – and that moment is what we define as the detectability threshold. This is depicted in Fig. 1 by the dashed vertical line between failure initiation and failure propagation.
Fig. 1 The detectability threshold

The importance of the detectability threshold is that it determines the limitations of sensor-based solutions to identify a change in health state, because anything below the threshold is essentially invisible to monitoring techniques. A lack of detected faults does not mean that all is well and that all components are perfectly healthy. It is entirely possible that there are microstructure-level changes occurring that could soon have a dramatic impact on the future health state of key components in the system. This makes it extremely important to find ways to extend the health awareness horizon beyond the detectability threshold by employing predictive capabilities that are not reliant on the ability of sensors to detect emerging defects. Physics-based solutions are focused on the capacity of the component to withstand repeated stresses, based on an understanding of the mechanics of the design and the microstructural characteristics of the materials in use. This enables analytical capabilities that are not just complementary to sensor-based solutions but also capable of anticipating the initiation of failures without relying on sensor detection, allowing these solutions to effectively 'see' beyond the detectability threshold. The integration of sensor-based and physics-based solutions creates the potential for comprehensive health state awareness that not only captures failures as they manifest themselves in real time, but also enables users to anticipate the initiation of failures well before they emerge.

Figure 2 depicts the concept of failure progression in a mechanical system as an iceberg, partially visible above the waterline but with most of the mass below the surface. The section above the waterline is only a fraction of the total considerations that need to be accounted for, and there are important details below the surface.

Fig. 2 Detectable vs undetectable faults

If the waterline represents the detectability threshold, then the iceberg section that rises above it can be classified as the detectable faults. Detectable faults result from a material failure that has progressed to the point that it triggers a change in the behavior of the system that can be observed – vibrations, temperature increases, etc. The emergence of detectable faults indicates that there has been an irreversible change in the condition of one or more components, and it is no longer possible to completely avoid the failure – it is simply a matter of time. Undetectable faults fall in the region between the initiation of failure at the microstructure level and the detectability threshold. These are defects that have not yet caused a measurable change to the system, and they are considered unobservable. Improvements to sensing technologies can reduce the size of this region, but there will always be some practical limitations on the ability to sense failure initiation. There may be an opportunity to manage undetectable faults to delay their progression, but it requires insight into the root cause of the expected failure and an understanding of the physics of the design. It may seem counterintuitive to consider the possibility of managing faults that are unobservable, but that is exactly where physics-based solutions are most valuable. It is possible to apply insights gained from physics-based models in a simulation environment to predict the probability of failure initiation due to actual and forecasted usage. Users can establish a threshold for probability of failure that reflects their risk tolerance and, when the physics-based models forecast that expected asset usage will result in an unacceptable likelihood of defect initiation, there is still an opportunity to take actions to delay or avoid the onset of the predicted failure.

Prior to the initiation of failure, there is a period of damage accumulation. This begins with the first cycle of operation of a mechanical system, as the components are subjected to loads that translate to stresses at the material microstructure level. All materials have a finite capacity to withstand stresses, and the useful life of a component is shaped by a combination of applied loads and material properties. Over time, each loading event contributes to the damage accumulation as the component begins to consume its useful life. Eventually, this period ends with failure initiation. Modern sensor-based solutions are quite good at recognizing the detectable faults and their expected impact on the remaining useful life of a component or system. However, the limitations created by the detectability threshold make it risky to rely solely on this approach – since roughly 90% of a component's useful life will be spent in a condition where its health status is essentially invisible to sensors. It is useful, therefore, to apply physics-based solutions to track the component's accumulation of damage through its operational history and offer predictive insights into the development of undetectable faults. Not only does this assist with early identification of potential failures, but it can also offer incredibly valuable information to inform logisticians and supply chain experts responsible for forecasting parts. A particularly compelling use case for these capabilities can be found in the aerospace industry – helicopter drive systems.
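To make the distinction between consumed life and detectable damage more tangible, the following sketch accumulates damage block by block in the spirit of a simple Miner's-rule sum. The S-N constants, the load spectrum, and the point at which damage is assumed to become sensor-detectable (the last ~10% of life) are all invented, illustrative assumptions; this is only a toy analogue of the idea that a physics-based model can track health long before a sensor sees anything, not the microstructure-level modeling described in the sections that follow.

```python
# Minimal Miner's-rule-style damage accumulation sketch (all constants assumed).
# Assumed S-N relationship: cycles to failure N_f = C / (stress ** M).
C, M = 1.0e13, 3.0
DETECTABILITY = 0.90    # assume sensors only "see" roughly the last 10% of life

def cycles_to_failure(stress_mpa: float) -> float:
    return C / (stress_mpa ** M)

def accumulate(load_blocks):
    """load_blocks: iterable of (stress_mpa, cycles). Returns damage-fraction history."""
    damage, history = 0.0, []
    for stress, cycles in load_blocks:
        damage += cycles / cycles_to_failure(stress)   # linear (Miner) damage sum
        history.append(damage)
    return history

if __name__ == "__main__":
    # Assumed duty cycle: every third block is severe usage, the rest are mild.
    spectrum = [(500, 10_000) if i % 3 == 0 else (350, 10_000) for i in range(30)]
    for i, d in enumerate(accumulate(spectrum), start=1):
        flag = "detectable" if d >= DETECTABILITY else "invisible to sensors"
        print(f"block {i:2d}: consumed life {min(d, 1.0):5.1%} ({flag})")
        if d >= 1.0:
            print("predicted failure initiation")
            break
```

In the toy run, most of the component's life is consumed while the "sensor" reports nothing at all; only a usage-driven model of this kind can warn the operator during that long, silent period.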
6 Sample Use Case: Helicopters

There are few instances where predicting component failure is more important than in the aerospace industry. The safety-critical nature of key components, combined with potentially catastrophic consequences of failure, makes this sector an obvious candidate for advanced predictive and prognostic capabilities. Within this space, the notoriously maintenance-intensive helicopter is perhaps the best example of an application that benefits tremendously from technologies that can accurately predict failures. The US Army Research Lab describes this need very succinctly: "Helicopters rely on the integrity of their drive trains for their airworthiness. Drive trains rely on the condition of their gears for their integrity and function." [5] This section will explore the specific use case of helicopter drive systems and describe how physics-based solutions can enable critical capabilities to anticipate the impact of future operations on complex systems, such as the Apache helicopter shown in Fig. 3. The transmission in a rotorcraft drive system is the heart of the helicopter, so it is one of a select few systems where failure virtually guarantees the loss of the aircraft.
Fig. 3 Military Helicopter, with transmission highlighted
Helicopter pilots exhaustively train to handle a wide variety of mechanical and electrical failures, and there are emergency procedures to safely land the aircraft for nearly all of them – including the loss of engine power. There is, however, no effective pilot response to ensure the safe landing of a helicopter with a failed transmission, because the rotors will simply lock in place and stop providing lift. Pilots describe the resulting aerodynamic performance as being remarkably like flying a refrigerator dropped from an airplane. There is simply no hope of recovering from such a condition. Given the importance of avoiding this catastrophic failure, helicopter manufacturers place enormous emphasis on the design and manufacturing specifications of the life-critical components of the gearbox. The precision required to meet the performance and fatigue-life requirements for these parts is costly and time consuming, and as a result these components often create significant challenges for those responsible for maintaining and sustaining these systems. Given the limited number of qualified suppliers that can support the highly controlled manufacturing processes, some of the most critical components (such as gears) are often in short supply. It is not unusual for delivery lead times for a new helicopter transmission to exceed 12 months. Couple that with a catalog price that can approach $500,000, and it can be quite difficult for sustainment planners to ensure that they have sufficient inventory available to support the fleet. It is not a realistic option to maintain "just-in-case" replacement components for such expensive parts, so planners are faced with difficult choices that reflect a trade-off between inventory risk and budget constraints. That would be a daunting task under ordinary circumstances – but when you add the no-fail safety requirements that are also inherent in the expectations for the drive system, it does not take much imagination to see how quickly this can become overwhelming for sustainment managers. To address issues like this, the aerospace industry has embraced the emergence of the digital twin. Digital twin solutions that feature physics-based models can predict failure rates months or years in advance – an invaluable capability for users of expensive, long lead-time components (such as those in aerospace). Imagine the value of accurately forecasting the 18-month demand for key parts, using solutions that predict the probability of future failures driven by expected usage. As useful as this capability is, it is not a trivial task to predict fatigue life using physics-based models – but there are technologies being developed that can greatly assist.
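A small sketch makes the planning value concrete: if a physics-based model supplies, for each aircraft, the probability that its transmission will need replacement within the planning horizon, then expected demand and a buffer for uncertainty follow directly. The fleet size, probabilities, and service level below are invented for illustration; the arithmetic, not the numbers, is the point.

```python
import math

def forecast_parts(failure_probs, service_level_z=1.28):
    """Expected demand for a long-lead part over a planning horizon, given
    per-aircraft failure probabilities from a (hypothetical) physics-based model.

    Treats each aircraft as an independent Bernoulli trial and adds a
    normal-approximation buffer for the chosen service level (z=1.28 ~ 90%)."""
    expected = sum(failure_probs)
    std_dev = math.sqrt(sum(p * (1 - p) for p in failure_probs))
    return expected, math.ceil(expected + service_level_z * std_dev)

if __name__ == "__main__":
    # Illustrative 40-aircraft fleet: most airframes are low risk, while a few
    # heavily used tails carry a much higher predicted probability of failure.
    fleet = [0.02] * 30 + [0.10] * 7 + [0.35] * 3
    expected, order_qty = forecast_parts(fleet)
    print(f"expected replacements over the horizon: {expected:.1f}")
    print(f"suggested stock at ~90% service level: {order_qty}")
```

The same calculation run without usage-specific probabilities (every aircraft treated as average) hides exactly the high-risk tails that drive shortages, which is why usage-driven failure probabilities are so valuable to sustainment planners.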
7 Applying Physics to Predict Life: The DigitalClone® and the Prognostic Integration Architecture

Critical to the effectiveness of the physics-based approach is the ability to account for a wide array of considerations that can affect the life expectancy of the components in a system. For example, the component life of geared systems, such as rotorcraft drive systems, is strongly impacted by Hertzian contact stresses and the effectiveness of the lubrication layer. It is therefore expected that physics-based
solutions for geared systems can account for these factors, and expanded analytical capabilities that consider additional factors will make these solutions even more effective. Sentient Science Corporation has developed a patented microstructure-based approach for simulating the fatigue life of components, specifically bearings and gears. The modeling process uses a damage mechanics approach to forecast the state of stress in the material at the microstructure level, where fatigue cracks first initiate. The polycrystalline nature of the material is simulated explicitly, allowing the solution to consider factors such as grain size distribution, orientation, inclusions, void density, etc. The company has combined these high-fidelity analysis steps (surface traction, grain-level stress analysis, fatigue crack initiation) into a comprehensive modeling tool called DigitalClone®. The DigitalClone process is based on the patented 6-step approach illustrated below in Fig. 4.

Fig. 4 Example of physics-based approach (DigitalClone® 6-step process)

• Step #1 – Determine Component Hot Spot: Macro-level loads are virtually applied to a component design using a combination of finite element and multi-body dynamics modeling, and system loads are analyzed to determine high-stress regions of the component (called 'hot spots'). The analysis accounts for load intensity, contact pressures, relative velocities, and curvatures. The hot spots define the critical locations for the detailed material microstructure modeling that follows, since it would be too computationally intensive to model the entire component at the microstructure level.
• Step #2 – Build Material Microstructure Models: Representative volume elements (RVE) are created to model the microstructure of the component based on detailed material characterization information. The RVE model accounts for microstructural material characteristics and manufacturing processes such as heat treatments, coatings, and surface finishing.
• Step #3 – Build Surface Traction Models: A mixed elastohydrodynamic lubrication (EHL) model is applied to understand the tractions and stresses on material surfaces. Detailed analysis of surface roughness and micro-asperity interactions ensures full consideration of the impacts that micro-level surface tractions can have on surface distress and fatigue cracking of critical components.
• Step #4 – Material Microstructure Response: A finite-element solver determines the stresses acting on individual grains in reaction to the applied surface traction. Bulk stresses and surface tractions are applied to determine material
microstructure responses to damage accumulation and crack nucleation/propagation.
• Step #5 – Calculate Time to Mechanical Failure: A failure mode is determined as an outcome of the loading conditions (applied loading, surface tractions and internal loads). Physics-based algorithms are used to predict the probability of component failure due to fatigue, pitting, subsurface-initiated spalling, through-cracking, fretting fatigue, or bending fatigue.
• Step #6 – Predict Fatigue Life Distribution: Uncertainty is present in almost every step of the above-described process. For example, the material microstructure is randomly generated. Material properties are sampled from a distribution. Surface roughness profiles are randomly generated. There may be uncertainty in component loading, or lubricant viscosity. To account for these uncertainties, the DigitalClone simulation must be run many times to build a useful failure distribution. Due to the computationally intensive nature of this step, high performance computing (HPC) resources are required to generate the virtual population of data points that will define the fatigue life distribution. Typically, a Weibull distribution is used to summarize the results.

The full high-fidelity damage accumulation model from the DigitalClone process is very computationally intensive due to the iterative use of finite element analysis to determine the local stress fields. However, the behavior of the model can be captured by a response surface describing the rate of damage accumulation as a function of load and speed. This reduced order model (ROM) can then be used within the model updating/uncertainty management software – Sentient's Prognostic Integration Architecture, or PIA. The Prognostic Integration Architecture is a stochastic framework and set of general-purpose algorithms for fusion of diagnostic state indications with damage progression models. It provides an automated prediction of current state and remaining life with accurate and optimal uncertainty bounds. The PIA is a mature, generalized architecture applicable to a wide range of diagnostics and prognostic models at the component level (single failure mode). The PIA is based on a Bayesian fusion framework. Stochastic parameters are used to represent the difference between the "average" component and a particular component. Based on the damage progression model and the uncertainty of the parameters, a group of particle values is sampled from the parameter distributions; each sampled value is used to generate a candidate damage trajectory (particle model). The Bayesian fusion framework guarantees that the particle trajectory that is closest to the true damage progression trajectory in the sense of Kullback-Leibler distance will have the dominant trajectory probability, and that the probability will converge exponentially to one as diagnostic values accumulate. Therefore, this architecture is scalable and has a measurable convergence rate for the probability of the damage progression trajectories. The resulting PIA framework is very flexible, easily accommodating application-specific damage progression functions. There are two particle parameters representing the model uncertainty: (1) a slope modifier and (2) the initial damage time. The slope modifier parameter accounts for
the irreducible stochastic effects (aleatoric uncertainty) that are present in the determination of the damage progression trajectory. Using the example of bearing spall propagation, this describes the variance in material parameters, microgeometry, and other unmeasurable probabilistic effects that determine the damage progression behavior of a particular bearing. Spall growth for a particular bearing is well behaved, a characteristic that enables prognostic modeling of the damage progression. However, the next bearing to be tested will likely not follow the exact same trajectory due to the stochastic effects mentioned above – that is, tests for an additional bearing will be similarly well behaved but will likely progress at a different rate (slope) than the first bearing. The challenge then is to determine which, among a family of trajectories as defined by the slope parameter, best represents the specific bearing under consideration. This is accomplished by incorporating additional information obtained through the incoming diagnostic data. A Bayesian approach is used to weight the particle trajectories based on how well they fit the incoming and past diagnostic data. These fitness values, or weights, are applied to a (Gaussian) kernel function for each particle, which are then combined in a Gaussian mixture density to provide a probability distribution for the current state of damage. This current state distribution is then propagated out into the future past the failure threshold to determine the Remaining Useful Life (RUL) distribution (Fig. 5). As discussed previously, diagnostic values are subject to uncertainty, which must be accounted for during the particle weighting procedure. Diagnostic uncertainty is included in the Bayesian scheme through the likelihood term. If the error distribution for the diagnostics is known and well defined, it can be used directly in the determination of the measurement likelihood; if it is not well defined, the
Fig. 5 Conceptual illustration of Sentient's Prognostic Integration Architecture (PIA): damage state versus time, showing the historic and prognostic regions, the current state PDF, the failure threshold, and the RUL PDF
architecture assumes a normally distributed error. Thus, the PIA, through the Bayesian weighting scheme, can be seen to account directly for two of the primary sources of uncertainty in the prognostic model (model uncertainty and diagnostic measurement uncertainty).
Prognosis for mechanical systems requires three basic components: (1) damage propagation models, (2) diagnostics, and (3) model updating and uncertainty management. Accurate prediction of the remaining useful life is dependent upon the quality and level of integration of these components.
1. Damage Propagation Model. The function of a damage propagation model is to determine the rate at which the component/system will degrade with time and usage. Proponents of physics-based solutions argue that the best prognostic technologies result from a thorough understanding of the mechanisms behind component degradation and failure. Physics-based damage models have distinct advantages over diagnostic trending for components where faults generally progress according to known physical laws (e.g., fatigue failure in metals).
2. Diagnostics. The general diagnostic process begins with data from one or more sensors, which is converted to an estimate of component health state by diagnostic algorithms. All real-world sensors and diagnostics are imperfect and can be sensitive to changing operating conditions and other non-health-related factors. This can be modeled as measurement error/noise, and it is typically non-Gaussian and time varying. Development of diagnostic algorithms is oftentimes as challenging as developing damage progression models. Sensors most commonly record the response of the system to the damage, rather than measuring damage severity directly (e.g., vibration data). The mapping between the system response and the true damage state is often system dependent; therefore, generalization of diagnostic algorithms is quite difficult. Here again, diagnostic algorithms derived from an understanding of the physics that drive the measured quantity can both accelerate and enhance the generalization process.
3. Model Updating and Uncertainty Management. Model updating refers to the process of utilizing diagnostic data as a source of additional knowledge to reduce uncertainty in the remaining useful life prediction. Proper model updating approaches view the model as a general description of fault progression characteristics, and the sensor-based diagnostics as a noisy indication of current state. An improved estimate of state can therefore be obtained by combining the sensor-based state estimates with a fault or damage progression (prognosis) model. Approaches such as the Kalman filter are widely used for this purpose in similar applications. While the Kalman filter is highly efficient, it is limited to linear models and Gaussian measurement error; for the general case, sequential Monte Carlo methods (such as the particle filter) are widely recognized as the most flexible and robust. In health monitoring applications, the state of interest is typically component degradation/fault severity. Sentient has developed sequential Monte Carlo methods to indirectly estimate state by employing them in a parameter identification mode. The parameter(s) to be identified are initially unknown constants that describe the differences in damage propagation behavior between individual components.
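A minimal numerical sketch of this particle-based parameter identification is shown below. It assumes a generic linear damage-progression model with the two particle parameters described above (a slope modifier and an initial damage time), Gaussian diagnostic error, and invented prior distributions and observations; it is illustrative only and does not represent Sentient's PIA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def damage(t, slope, t0):
    """Illustrative damage-progression model: no damage before t0,
    then growth at a particle-specific rate (slope modifier)."""
    return np.clip(slope * np.maximum(t - t0, 0.0), 0.0, 1.0)

# Sample particles from assumed priors over the two model parameters.
n = 5000
slopes = rng.lognormal(mean=np.log(0.02), sigma=0.4, size=n)   # slope modifier
t0s    = rng.normal(loc=20.0, scale=5.0, size=n)               # initial damage time
log_w  = np.zeros(n)                                           # log particle weights

sigma_diag = 0.05        # assumed diagnostic (measurement) error std dev
threshold  = 1.0         # failure threshold on the damage state

# Bayesian weighting: each incoming diagnostic value re-weights the particles
# according to how well their trajectories explain the observation.
for t_obs, d_obs in [(30, 0.10), (40, 0.28), (50, 0.55)]:      # made-up data
    residual = d_obs - damage(t_obs, slopes, t0s)
    log_w += -0.5 * (residual / sigma_diag) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Current damage state: weighted combination of particle predictions at t = 50.
d_now = damage(50.0, slopes, t0s)
print("current damage (weighted mean):", np.sum(w * d_now))

# RUL distribution: propagate each particle forward to the failure threshold,
# then resample by weight to summarize the posterior.
rul = (threshold - d_now) / slopes
idx = rng.choice(n, size=2000, p=w)
print("RUL percentiles [10, 50, 90]:", np.percentile(rul[idx], [10, 50, 90]))
```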
The objective of the model updating scheme is to reduce uncertainty in both current state estimates and forward predictions by learning the characteristics of an individual component as it degrades. This model updating scheme is flexible, powerful, and applicable to a large class of problems in health management and prognostics.
Uncertainty management must be an integral part of the model updating architecture. The remaining useful life estimate must be provided with some measure of the prediction confidence, which can be traced back to the uncertainties present throughout the prognostic schema. At the component level (the current state of the art), there are three primary sources of uncertainty: (1) model uncertainty, (2) diagnostic uncertainty, and (3) operating condition/environmental uncertainty. Each of these sources of uncertainty will affect the confidence bounds placed on the RUL estimate; it is therefore critical that they be included within the prognostic scheme.
Most aircraft components are subjected to variable amplitude (spectrum) loading conditions, containing periodic overload and/or underload cycles. Fatigue damage accumulation must be modeled on a cycle-by-cycle basis that accurately simulates the service loading on a component in order to produce accurate crack initiation, propagation, and total life analyses. Figure 6 shows four loading profiles to which an aircraft component has been subjected under various service conditions. The variability in component usage and material leads to scatter in crack growth and fatigue life. The sequence of cycles appearing in a service spectrum defines fatigue damage accumulation and propagation. For instance, high load cycles followed by lower ones produce delays in fatigue crack propagation. This delay is due to compressive residual stresses at the crack tip, caused by the plastic deformation that occurred at the high load level. Physics-based modeling technologies such as DigitalClone can account for the influence of loading sequence, overloads (OLs), and underloads (ULs) on defect formation and fatigue lives of drive system components – enabling comprehensive predictions of future health state that consider a wide array of usage spectra.
Fig. 6 Example of predicting fatigue life across a spectrum of loading conditions (load versus cycles for four representative missions, and the resulting crack size versus fatigue life expended)
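To make the load-sequence effect concrete, the sketch below applies a Paris-law crack-growth increment cycle by cycle and slows growth while the crack is inside the overload plastic zone, using a simple Wheeler-style retardation factor. All constants, loads, and crack sizes are hypothetical illustrations, not DigitalClone parameters or calibrated material data, and zero-to-maximum loading (R = 0) is assumed so that the stress intensity range equals its maximum value.

```python
import numpy as np

# Illustrative constants (not calibrated material data).
C, m = 1e-11, 3.0            # Paris law: da/dN = C * K_max**m, K in MPa*sqrt(m)
Y = 1.12                     # crack geometry factor
sigma_y = 900.0              # yield strength, MPa
p = 1.5                      # Wheeler retardation exponent

def plastic_zone(K_max):
    """Irwin-style plane-stress plastic zone size estimate (m)."""
    return (K_max / sigma_y) ** 2 / (2.0 * np.pi)

# Simple service spectrum: blocks of baseline cycles with a periodic overload pair.
block = [300.0] * 500 + [450.0] * 2        # applied stress per cycle, MPa
spectrum = block * 500

a = 0.25e-3                  # initial crack size, m (0.25 mm)
a_crit = 10e-3               # illustrative critical crack size, m
a_ol, rp_ol = a, 0.0         # crack size and plastic zone at the governing overload
cycles = 0

for stress in spectrum:
    K_max = Y * stress * np.sqrt(np.pi * a)
    rp = plastic_zone(K_max)
    if a + rp < a_ol + rp_ol:
        # Current plastic zone still lies inside the overload zone: retard growth.
        phi = (rp / (a_ol + rp_ol - a)) ** p
    else:
        # Overload zone no longer governs; record the new furthest-reaching zone.
        phi = 1.0
        a_ol, rp_ol = a, rp
    a += phi * C * K_max ** m              # cycle-by-cycle damage accumulation
    cycles += 1
    if a >= a_crit:
        break

if a >= a_crit:
    print(f"critical crack size reached after {cycles} cycles")
else:
    print(f"spectrum exhausted after {cycles} cycles; crack size {a * 1e3:.2f} mm")
```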
8 The Multiscale Framework Approach
Technology is useless without a purpose. The physics-based approach discussed in this chapter describes an extraordinary capability to predict life expectancy, but… so what? Developing transformational technologies also requires creation of a vision that makes sense to potential users, and a key challenge for the acceptance of digital twins is showing how they can help solve problems facing industry.
It is important to recognize that the physics-based approach alone is generally not sufficient for predicting the life expectancy of fielded systems. Although such models can provide a great deal of insight into the underlying reasons behind a failure mode, they are built to initially reflect the "as-designed" system. Regardless of how much effort is put into accounting for even the most minute details of a system, the accuracy of physics-based models will be affected by any number of unknowable elements – undetected manufacturing defects, configuration errors, and even human error that results in a substantial change to the system. For example, there is no conceivable way that a physics-based model of a gearbox can anticipate when an assembly technician inadvertently damages a component during the build process, or when a maintainer uses the wrong lubricant during a routine service – but either would result in a significant deviation from the as-designed configuration and would impact the ability of the physics-based solution to fully anticipate the behavior of the system. So, it is not enough to model the "as-designed" system – there must also be a way to capture the actual condition of the system. According to NASA and other experts, a comprehensive digital twin approach (combining physics and data science) is very well suited to tackle this challenge: "In addition to the backbone of high-fidelity physical models of the as-built structure, the Digital Twin integrates sensor data from the vehicle's on-board integrated vehicle health management (IVHM) system, maintenance history and all available historical and fleet data obtained using data mining and text mining." [6] "By combining data from these different sources of information, the digital twin can continually forecast vehicle health status, the remaining useful vehicle life, and the probability of mission success." [7]
A multiscale framework approach supports an ecosystem capable of modeling from the atomistic to the enterprise level, using physics-based modeling and advanced data science techniques in a complementary fashion that leverages the relative strengths of both approaches to provide the most accurate picture of future health state. Figure 7 illustrates how this might be applied toward the helicopter use case. Early in the lifecycle of a new aircraft, there is very little historical data available that is suitable for the application of data analytics to predict the future health state of the assets – so the most relevant information is related to the design and manufacturing technical data. It is possible to use that data to apply physics-based modeling and develop baseline life predictions for expected usage profiles that reflect the capabilities (and limitations) of the "as designed" system. This is illustrated in Fig. 7 by the modeling activities in Level 0 (Atomistic) through Level 3 (System),
Fig. 7 Example of multiscale framework – from atomistic to enterprise
which represents the portions of the multiscale framework where material microstructure and the physics of the design are the main considerations in defining fatigue life predictions. Here, it all comes down to having the ability to simulate the application of expected loading conditions and predict the material response. Observed failures at the grain boundaries of the material microstructure will be the first indicator of likely future failures that will eventually manifest themselves as microcracks, pitting, etc. This is extremely useful as a means of tracking estimated damage accumulation in systems that are not demonstrating any detectable faults, but once sensors and data collectors begin to observe the emergence of detectable
faults, then data analytics can be applied to refine the baseline predictions from the physics-based modeling. In Fig. 7, the Level 4 (Asset) through Level 6 (Enterprise) range represents the part of the multiscale framework where usage and sensor data can be leveraged to gain a better understanding of the current health state. It is here that the vast amounts of data that flow through enterprise-level systems are analyzed to uncover trends and patterns that can assist in the identification of detectable faults. At the asset level, current health indicators can be correlated with the usage of a specific aircraft to provide early warning of known failure modes that are starting to emerge. This is consistent with the vision described by experts at the US Air Force Research Laboratory: "As information about how aircraft in the fleet are being flown and maintained is collected into the digital thread and reflected in the Digital Twin, maintenance options can be continually reviewed and revised to meet the mission requirements, availability, safety, and cost goals of the warfighter. Furthermore, rather than applying the same design modifications and repairs to every asset, the Digital Twin will enable tailoring of the maintenance package to individual assets." [8]
The fleet data from all assets of a specific aircraft type can also be combined to provide aggregated fleet health statistics and assess aircraft availability impacts. At the highest level, enterprise decision makers can compare the reliability drivers across various users and aircraft types to evaluate the effectiveness of the sustainment operations across all platforms. A multiscale framework methodology enables sustainment and maintenance managers to consider all available information about the drive systems of their aircraft – from the molecular composition of the materials that make up key components to the vast data streams that are collected and analyzed at the enterprise level. A comprehensive approach that uses all available tools, including physics-based models and machine learning algorithms, provides the best opportunity to accurately predict the future health state of those assets.
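One way to picture the hand-off between the physics-dominated levels (0-3) and the data-dominated levels (4-6) is sketched below: the physics-based baseline tracks estimated damage while nothing is detectable, and once a condition indicator emerges the estimate is re-anchored to the observed evidence. The function names, accumulation rate, and weighting are hypothetical placeholders, not part of any fielded framework.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HealthEstimate:
    damage: float          # estimated damage fraction, 0..1
    source: str            # "physics baseline" or "data-refined"

def physics_baseline(flight_hours: float, usage_severity: float) -> float:
    """As-designed damage accumulation from the physics levels (illustrative rate)."""
    return min(1.0, 1e-4 * usage_severity * flight_hours)

def refine_with_diagnostics(baseline: float, indicator: Optional[float]) -> HealthEstimate:
    """Asset/fleet levels: once a detectable fault emerges, anchor the estimate
    to the observed condition indicator rather than the pure baseline."""
    if indicator is None:                         # no detectable fault yet
        return HealthEstimate(baseline, "physics baseline")
    # Hypothetical anchoring rule: lean on the indicator, keeping the
    # physics-based estimate as a floor.
    refined = max(baseline, 0.3 * baseline + 0.7 * indicator)
    return HealthEstimate(refined, "data-refined")

# Same usage history, with and without an emerging detectable fault.
print(refine_with_diagnostics(physics_baseline(2000, 1.4), None))
print(refine_with_diagnostics(physics_baseline(2000, 1.4), 0.45))
```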
9 Building a Hybrid Framework – Physics + Data Science
The hybrid approach to the digital twin enterprise centers on using physics-based solutions to create a baseline model that reflects the design-related capabilities that are applicable to all assets in the fleet. This allows asset-specific life models to be created by updating the design-related fatigue life predictions (from physics-based modeling) with the data streams that include onboard health sensing, maintenance actions, part replacements, operational usage/loading profiles, and other key contributors to current health state.
For example, if an accelerometer shows indications of an increase in vibration for a specific component, that data creates an anchor point for the health state projections and enables updates of the models to better align with the observed actual condition of the component. In this way, the sensor data performs two functions: (1)
Fig. 8 Concept for predicting asset-specific health risk
provides confirmation of actual state that serves to update physics-based modeling projections, and (2) contributes to historical datasets that can eventually be exploited using data analytics techniques and machine learning algorithms to generate complementary life predictions.
Using the graphic below as an illustration of this concept, all assets have a common start point for health risk as the fleet enters service. Throughout its operational life, each individual asset develops its own unique usage and maintenance history, and that causes the models to diverge over time (Fig. 8). In this case, the portion of the fleet that is categorized as "heavy duty users" shows a faster increase in usage-based health risk over time than the "average users" and "light duty users", as reflected by the steeper slope of the curves for those assets. This would indicate a much more damaging loading history on the drive systems for those aircraft, which is typical of assets that are being flown more aggressively or in more demanding environmental conditions. There is every reason to assume that the heavy-duty users will be at higher risk of failure during future operations, even if there are no specific indications at the current time.
The value of asset-specific health models is both operational and logistical. The ability to assess the future health state of an individual aircraft can be an important input into the risk assessment associated with deploying that asset in an upcoming mission. Systems that have previously been subjected to more severe loading conditions will naturally have a higher likelihood of failure in future operations, and having that visibility can be crucial to decision makers as they review their options. In addition, a customized asset-specific health model can provide logisticians with critical information to enable intelligent forecasting and demand planning for long lead-time (and expensive) components. This is an extremely valuable capability that can reduce unnecessary supply chain cost drivers, such as excess inventory. There are few organizations that have sufficient financial resources to justify keeping extra helicopter transmissions as 'just-in-case' safety stock, considering they can cost up to $500,000 each.
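A toy numerical illustration of the divergence shown in Fig. 8 follows; the severity multipliers and baseline rate are invented for illustration and are not fleet data.

```python
import numpy as np

# Hypothetical usage-severity multipliers for the three fleet groupings in Fig. 8.
severity = {"light duty": 0.7, "average": 1.0, "heavy duty": 1.6}

hours = np.arange(0, 5001, 500)      # flight hours since entering service
base_rate = 2e-4                     # assumed baseline risk growth per flight hour

for group, s in severity.items():
    # All assets share a common starting risk; usage history drives divergence.
    risk = 1.0 - np.exp(-base_rate * s * hours)
    print(f"{group:>10}: usage-based health risk at 5000 h = {risk[-1]:.2f}")
```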
Fig. 9 Conceptual framework for hybrid Digital Twin
So, how exactly would the insights from both physics-based modeling solutions and advanced data analytics be combined into a useful technology solution that supports both operational and sustainment decision makers? Figure 9 depicts one possible option for creation of a unified framework to combine these two powerful techniques into what will eventually become a Hybrid Digital Twin.
1. Baseline physics-based models are created using technical design and manufacturing specifications to establish the material response of key components to various operational loading conditions. Enterprise data related to asset-specific supply chain actions, maintenance/repair/overhaul (MRO) history, and aircraft usage from HUMS (health and usage monitoring systems) are analyzed using data science techniques to assess the current health state of each asset in the fleet.
2. Physics-based modeling outputs reflecting baseline configuration are compared to asset-specific outputs from data analytics, and confidence scores are assigned to each. The physics models incorporate the design-related context to define the relationship between loading conditions and damage accumulation on key components. The data science models reflect actual measurements of current health state and provide the usage-related context associated with flight history and maintenance activity for each asset. Both physics-based models and data science models can be trained and updated using insights learned from each other, creating a learning loop for continuous refinement.
3. Insights from both physics-based and data science modeling are used as inputs to the consolidated asset-specific model, known as the Hybrid Digital Twin.
4. The Hybrid Digital Twin is created by combining the asset-specific analysis of future health state provided by both the physics-based and data science techniques. Confidence scores are updated based on an objective assessment of input
data quality and subjective assessments of model maturity for both techniques. The confidence-weighted predictions of future health state are captured for each asset and presented in the sustainment digital architecture to support operational and maintenance decision making.
5. Key outputs such as projections for remaining useful life of key components, recommendations for preventive sustainment actions, and updates to support intelligent demand planning are transferred and stored at the enterprise level. This enables the outputs of the Hybrid Digital Twin models to be visible in the other enterprise-level systems – ensuring that inventory, demand plans, maintenance schedules, and operational availability can be updated to reflect the insights provided by these advanced modeling solutions.
Although presented here as notional, all the technology to implement the Hybrid Digital Twin exists today. There are companies in multiple industries that are aggressively pursuing strategies to take advantage of the limitless potential inherent in digital engineering tools that will continue to improve the design and sustainment of critical assets. The purpose of this chapter has been to demonstrate how that might look in an industry like aerospace – but there is a wide array of other approaches that are equally exciting, and possibly even more effective.
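As a closing illustration of the confidence-weighted combination in steps 2 through 4, a minimal sketch is shown below; the prediction values, confidence scores, and names are placeholders, not outputs of any particular tool.

```python
from dataclasses import dataclass

@dataclass
class LifePrediction:
    rul_hours: float      # predicted remaining useful life, flight hours
    confidence: float     # assessed confidence score, 0..1

def hybrid_rul(physics: LifePrediction, data_science: LifePrediction) -> float:
    """Confidence-weighted fusion of the two asset-specific predictions
    (step 4 of the notional workflow)."""
    w_p, w_d = physics.confidence, data_science.confidence
    return (w_p * physics.rul_hours + w_d * data_science.rul_hours) / (w_p + w_d)

# Example: mature physics baseline, still-immature data-driven model.
physics_pred = LifePrediction(rul_hours=1800.0, confidence=0.8)
analytics_pred = LifePrediction(rul_hours=1200.0, confidence=0.4)
print(f"hybrid RUL estimate: {hybrid_rul(physics_pred, analytics_pred):.0f} h")
```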
References
1. "Report on the air accident near Turøy, Øygarden Municipality, Hordaland County, Norway 29 April 2016", page 5, Accident Investigation Board Norway, 2018.
2. White paper: "Leveraging Digital Twin technology in model-based systems engineering", page 1, Azad M. Madni, Carla C. Madni, and Scott D. Lucero, 2019.
3. "Data Mining in Systems Health Management: Detection, Diagnostics and Prognostics", page 150, Chapter 5 – Prognostic Performance Metrics, Kai Goebel, Abhinav Saxena, Sankalita Saha, Bhaskar Saha, and Jose Celaya, 2011.
4. "Digital Twin: Definition & Value: An AIAA and AIA position paper", page 7, American Institute of Aeronautics and Astronautics (AIAA) Digital Engineering Integration Committee (DEIC), 2020.
5. White paper: "Physics-based modeling strategies for diagnostic and prognostic application in aerospace systems", page 3, David B. Stringer, Pradip N. Sheth, and Paul E. Allaire, 2009.
6. White paper: "The Digital Twin paradigm for future NASA and U.S. Air Force vehicles", page 7, E. H. Glaessgen and D. S. Stargel, 2012.
7. White paper: "Leveraging Digital Twin technology in model-based systems engineering", page 2, Azad M. Madni, Carla C. Madni, and Scott D. Lucero, 2019.
8. White paper: "Digital thread and twin for systems engineering: EMD to disposal", page 11, Pamela Kobryn, Eric Tuegel, Jeffrey Zweber, and Raymond Kolonay, 2017.
Jason Rios is the Sr. Vice President, Defense for Sentient Science and leads the company's aerospace and government-funded business. Prior to joining Sentient Science, Jason led the global military Condition Based Maintenance Systems (CBMS) business at Honeywell Aerospace, developing and providing advanced health monitoring technologies to aircraft OEMs and global operators in the aerospace market. His background as a former Army aviator with combat experience flying the Apache helicopter provided him with an understanding of the needs of military operators, as well as the commitment to ensuring that aerospace operators get access to the best possible solutions. Jason holds a BS in Engineering from the United States Military Academy at West Point and a Master of Business Administration degree from Auburn University.
Dr. Nathan Bolander is the Chief Scientist at Sentient Science and is responsible for the development of the company’s proprietary DigitalClone life prediction technology. He has MS and PhD degrees in mechanical engineering from Purdue University and has specifically focused his research on the field of tribology – which is the interdisciplinary study and application of principles of friction, lubrication, and wear.
Operating Digital Twins Within an Enterprise Process
Kenneth M. Rosen and Krishna R. Pattipati
K. M. Rosen (*), General Aero-Science Consultants, LLC, Guilford, CT, USA
K. R. Pattipati, Dept. of ECE, University of Connecticut, Storrs, CT, USA
Abstract The Digital Twin concept is an all-encompassing Industrial Internet of Things (IIoT) use case. It is an artificially intelligent virtual replica of a real-life cyber-physical system (CPS) useful in all phases of a system's lifecycle. The Digital Twin is made possible by advances in physics-based modeling and simulation, machine learning (especially deep learning), virtual/augmented reality, robotics, ubiquitous connectivity, embedded smart sensors, cloud and edge computing, and the ability to crowdsource domain expertise. These technologies have the potential to make Digital Twins anticipate and respond to unforeseen situations, thereby making CPS resilient. To realize resilient CPS, engineers from multiple disciplines, organizations and geographic locations must collaboratively and cohesively work together to conceptualize, design, develop, integrate, manufacture, and operate such systems. The refrain "model once, adapt with data and domain expertise, and use it many times for many different purposes" offers an efficient and versatile approach to render organizational silos extinct. This is accomplished by providing a "single source of the truth" representation of the CPS, to collaborate virtually, assess and forecast in evolving situations, and make adaptive decisions. Organizationally, Digital Twins facilitate situational awareness and effective organizational decision-making through the acquisition, fusion, and transfer of the right models/knowledge/data from the right sources in the right context to the right stakeholder at the right time for the right purpose: that is, the design, manufacturing, optimal operation, monitoring, and proactive maintenance of the CPS. This chapter addresses the operation of Digital Twins within an enterprise process. The discussion is generally applicable, but is illustrated specifically with examples from the aerospace industry. It begins with a vision for the enterprise Digital Twin methodology to provide timely and accurate information created during the initial conceptual design, product development, and subsequent operational life cycle phases of the product/system utilizing a comprehensive networking of all
related information. All related partners share such information, thereby connecting product/system design, production and usage data with those human and non-human agents requiring this information. The chapter further reviews the enterprise-wide product lifecycle phases and describes how the use of digital twin methodology allows quasi-static model-based systems engineering (MBSE) and Enterprise Resource Planning (ERP) business models to morph into a temporal information continuum, spanning the life cycle of the product or system. Specifically, the focus is on global information flow throughout the enterprise, and suggested DT-committed organizational changes affecting the enterprise. It is followed by a discussion of MBSE-based requirements analysis and platform-based design principles in the product's conceptual design phase and related examples. This lays the groundwork for encouraging a range of conceptual design ideas, standardizing design, analytical and learning tools for superior coordination and integration of the information flow within the enterprise's Digital Twin processes. Subsequent portions of the chapter discuss the product development phase via Digital Twin models and platform-based design principles using digital 3-D CPS models, with specific examples from Sikorsky and GE as illustrations. An emphasis is placed on the improvement of computing capabilities, such as the introduction of hyper-efficient Graphics Processing Unit (GPU)-based computational capability that provided an order of magnitude improvement in design productivity. Multi-functional causal models are introduced to help uncover failure modes, their propagation paths, and consequent functional effects, and to show how such digital twin models automatically generate fault trees for risk assessment analysis and an initial Failure Modes, Effects, and Criticality Analysis (FMECA) report. The importance of domain knowledge and data-informed models is emphasized, showing how they can aid the product testing, qualification, and certification phase. A formal system of health modeling to test the severity of candidate faults and their effects to generate an updated FMECA model using the digital twin is also introduced. This enables design engineers to understand the potential faults in the system, their probabilities of occurrence, and their manifestation as functional failures (effects), monitoring mechanisms for making the effects visible, system level implications in terms of safety, customer inconvenience and service/maintenance implications, and so on. The Digital Twin methodology enables the FMECA and fault tree updates in real-time. The role of Digital Twins in the product manufacturing, quality management, and distribution phase is addressed next. It includes a description of an integrated process for additive manufacturing and an advanced 3D quality inspection process. The DT network links these highly accurate coordinate measurement processes with the complex 3D cyber-physical models intended to define the product accurately. As part of the Digital Twin, an integrated on-board and off-board system health management, coupled with virtual/augmented reality, can improve customer experience and support via real-time monitoring, incipient failure detection, root cause analysis, prognostics, predictive maintenance, and training assessment. The S-92 helicopter's data integration process serves as a constructive example of how a proactive
digital twin-aided health management system can significantly improve product resilience, safety, and customer acceptance. The remaining portions of the chapter describe how the Digital Twin infrastructure's ability to process enormous amounts of data into information and knowledge, aided by the Failure Reporting, Analysis and Corrective Action System (FRACAS) database, enables proactive product configuration management and active learning. The result is efficient product maturation and customer adaptation. The Digital Twin's ability to support the design of environmentally sustainable products, and how they can eventually be suitably disposed of, is also addressed. The successful adoption of Digital Twins has other consequences, including the need to revamp traditional organizational structures to be effective in a globalized environment. While Digital Twins offer great promise, it is also important to consider some cautionary thoughts on the need for accurate models, domain knowledge-informed machine learning, and awareness of human fallacies in implementing the Digital Twin methodology. Finally, in a business environment, commitment to the Digital Twin methodology hinges on an understanding of the value that Digital Twins provide and the steps that the enterprise must take to successfully adopt the methodology. The enterprise must accept significant structural and cultural changes to succeed with a DT methodology. The authors firmly counsel that adopting these necessary enterprise modifications will not be easy and, as such, will require top-down leadership to respond to structural and cultural changes with the necessary corporate resources (adequate and digitally literate staff, leadership, hardware-software computing and communication infrastructure, and budget).
Keywords Aerospace · Computing · Enterprise · Digital design · Digital thread · Digital Twin · Digital transformation · Digital manufacturing · Multi-domain modeling · Digital inspection · Model-based design and systems engineering · Multi-functional causal models · Prognostics/diagnostics · System health management · Product development · Product lifecycle · Simulation
1 Introduction
The positive impact of introducing the DT approach to the enterprise will become evident to the reader through a clear understanding of how the key parameters of enterprise value are affected. These include the performance, risk management, cost, quality, time-to-market, operability, customer acceptance, and flexibility of products to meet competitive challenges. The key is how the DT significantly contributes to improving value throughout its life cycle in a closed-loop setting.
1.1 The Modern Enterprise Will Greatly Benefit from the Digital Twin Methodology
The dictionary defines an "enterprise" as a business or company involved in an entrepreneurial economic activity or a project/undertaking that is typically difficult or requires effort. Clearly, this broad definition describes enterprises that range from a very small business structure to an extremely large global corporation or even a government organization; an enterprise can describe an entity engaged in a purely service industry as well as one devoted to product development and manufacturing. The authors of this chapter, to describe how the digital twin methodology can be linked to important enterprise processes and organizational functions, have chosen to focus on how digital twins (DT) can affect a large global manufacturing enterprise by demonstrating the value impact of the methodology throughout a product's life cycle. Based on our experience, we believe that the positive impact of introducing the DT approach to the enterprise will become evident to the reader through a clear understanding of how the key parameters of enterprise value are affected. These include the performance, risk management, cost, quality, time-to-market, operability, customer acceptance, and flexibility of products to meet competitive challenges. The key is how the DT significantly contributes to improving value throughout its life cycle in a closed-loop setting. Consequently, it is our hope that the reader will view the content as generally applicable and useful to most enterprises, even though the authors' backgrounds, and many vignettes and examples in this chapter, specifically refer to the aerospace industry. This is an industry where a single wrong decision can have a generational impact on the enterprise because a limited number of product lines constitute a significant share of the enterprise's revenue.
Over the professional lives of this chapter's authors, enterprise system design infrastructure tools evolved from ink on Mylar drawings, to designing products with slide rules, and then to Computer-aided Design/Computer-aided Manufacturing (CAD/CAM), first using room-sized computers and evolving to the utilization of extremely capable desktop workstations/laptops/notebooks, cloud computing, and fast processors with Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs). Emerging as of the writing of this chapter is practical quantum computing, which will significantly facilitate autonomy, intelligent systems, real-time health management, artificial intelligence, predictive analytics, and possibly approach near-true cognition. This evolutionary process is depicted in Fig. 1, and the enabler over the last 60 years is a huge increase in the available levels of computational capability, facilitating ever increasing levels of enterprise automation, collaboration, and relevant knowledge availability. Similarly, these huge advances in computational capabilities allowed enterprise process-wide methodologies to evolve from independent department silos/information waterfalls to Concurrent/Collaborative Engineering (Integrated Product Development) to Enterprise Resource Management
Fig. 1 Evolution of enterprise computational capabilities and tools/processes
and eventually to today's usage of digital twins as complete organizational, process and product knowledge repositories.
The concept of mirrored systems in the aerospace industry has been around since the 1970's, beginning with flight simulations of Apollo 13 at the Houston and Kennedy Space Centers [18, 34] and engine health management by Pratt & Whitney [40]. During the 1980's, the US Navy initiated the Integrated Diagnostics Support System (IDSS), a structured system engineering approach and the concomitant information-based architecture for maximizing the economic and functional performance of a system. It sought to integrate the individual diagnostic elements of design for testability, on-board diagnostics, automatic testing, manual troubleshooting, maintenance and repair, technical information, and adaptation/learning [23]. Concurrent and collaborative engineering (sometimes called integrated product development) followed in the 1990's, seeking to consider the issues of product functionality, manufacturing, assembly, testing, maintenance, environmental impact, disposal and recycling in the early design phases. These early systems lacked a seamless connection and real-time data exchange, which would facilitate continuous or periodic "twinning" of the digital to the physical in a closed-loop setting and serve as an enterprise process and product knowledge repository.
The digital twin concept is the all-encompassing Industrial Internet-of-Things (IIoT) use case for cyber-physical systems (CPS) that represent the convergence of computing, control, communication, and networking. It is an intelligent digital virtual replica of a real-life CPS that understands the past (as a "knowledge repository" or "stored legacy knowledge"), provides situational awareness of the present, and facilitates the anticipation (prediction) of the future; it is useful in all phases of a system's lifecycle by leveraging all the interconnected data [5].
The digital thread is a term for the lowest-level design and specification for a digital representation of a physical item. The concept of requiring a "single source of data" is the primary enabler for the enterprise's Digital Twin architecture. Indeed, the digital thread constitutes a critical element in Model-Based Systems Engineering (MBSE) and enables the traceability of the digital twin back to the requirements, parts and control systems that make up the physical asset [26]. The US Air Force Global Horizons study [20] states that the Digital Twin's cross-domain, advanced physics-based modeling and simulation tools can reduce development cycle time by 25% through an in-depth assessment of the feasibility and cost of integrating technologies into a system.
What made digital twins and digital threads possible? We believe that deep physics-based modeling and simulation, machine learning (especially deep learning), virtual/augmented reality, robotics, ubiquitous connectivity, embedded smart sensors, cloud and edge computing, and the ability to crowdsource domain expertise have made digital twins possible. These advances have the potential to make digital twins anticipate and respond to unforeseen situations, thereby making real-life CPS resilient. To realize resilient CPS, engineers from multiple disciplines, organizations and geographic locations must cohesively and collaboratively work together to conceptualize, design, develop, integrate, manufacture, and operate these systems. The refrain "model once, adapt it with data and domain expertise, and use it many times for many different purposes" offers an efficient and versatile approach to render organizational silos extinct by providing a "single source of truth" representation of the CPS to collaborate virtually, assess and forecast an evolving situation, and make robust, flexible, and adaptive decisions.
Enterprises committed to the digital twin methodology may often encompass the use of multiple digital twins representing multiple product lines and libraries of digital artifacts; as such, many of these digital twins may utilize identical digital threads or components in multiple locations. These common digital threads or components would be approved for multiple uses across all the enterprise's product lines. A simple example would be the common use of pumps or generators in various enterprise aircraft designs. Hence, it is apparent that the digital twin process itself encourages commonality, collaboration and reuse of standard tools, knowledge, qualification data and proven components where this approach is applicable. It is essential for the enterprise, its trusted supply base and even outside enterprises or consortiums, joined by common intent, to maintain a validated library of proven tools, knowledge, and components which meet the process, design and qualification standards established by the enterprise. Consequently, the digital twin methodology requires that humans and even machine nodes resident within the enterprise's multiple digital twins or related trusted collaborating entities be able to communicate and/or make timely and accurate design, manufacturing, and service decisions with great confidence. Achieving this extreme level of communication and interoperability will require standardization of software taxonomy, architecture, and data structure as well as commonality of models.
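As a toy illustration of the "single source of truth" and component-reuse point, a digital-thread record might link a reusable component to its requirements, qualification evidence, and the product lines that instantiate it. The field names and identifiers below are hypothetical, not a standard schema or any vendor's data model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DigitalThreadRecord:
    """One entry in a validated enterprise component library."""
    component_id: str
    requirements: List[str]               # traceability back to requirements
    qualification_evidence: List[str]     # approved test/qualification data
    approved_product_lines: List[str] = field(default_factory=list)

library: Dict[str, DigitalThreadRecord] = {}

def register(record: DigitalThreadRecord) -> None:
    """Single source of truth: every product line references the same record."""
    library[record.component_id] = record

def reuse(component_id: str, product_line: str) -> DigitalThreadRecord:
    """Reuse a proven component in another product without reinventing it."""
    record = library[component_id]        # no copies, no silos
    record.approved_product_lines.append(product_line)
    return record

register(DigitalThreadRecord("PUMP-104", ["REQ-HYD-012"], ["QUAL-2021-88"]))
reuse("PUMP-104", "Aircraft-A")
reuse("PUMP-104", "Aircraft-B")
print(library["PUMP-104"].approved_product_lines)
```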
Organizationally, the digital twin facilitates situational awareness and resilient organizational decision-making via the acquisition, fusion, and transfer of the right
models/knowledge/data from the right sources in the right context to the right stakeholder at the right time for the right purpose, viz., design, manufacturing, optimal operation, monitoring, and proactive maintenance of the CPS.
1.2 Digital Twin Enterprise Methodology Vision
Our vision for operating digital twins within an enterprise process has its origins in the definition of the digital twin by Boschert, Heinrichs and Rosen of Siemens AG [9]. Our enterprise digital twin methodology refers to a complete digital representation of the organization's processes and a comprehensive physical and functional description of the enterprise's legacy and developing products or systems, together with all available design and operational data. The DT methodology includes all information that could be useful in the current and subsequent lifecycle phases. One of the main benefits of the digital twin for enterprises that are concerned with high technology products at the extreme edge of performance is to provide timely and accurate information created during the initial conceptual design, product development, and subsequent operational life cycle phases of the product/system utilizing a comprehensive networking of all related information. Such information is to be shared between all relevant and trusted stakeholders and customers, thereby connecting product/system design, production and usage data with those human and non-human nodes requiring this information. Therefore, we firmly expect that an effectively utilized digital twin methodology will bridge the gap between mechatronics/physical space design/simulation/development/manufacturing/qualification and the product's subsequent operational and service experience. Inherent in the DT methodology is a vibrant feedback loop that uses currently applicable data to drive timely and effective decisions. These can range from real-time and fast-timescale control to the collection of data and information over long periods of time for improving functionality, reducing risk, and enhancing the "customer" experience. The current chapter describes the use of digital twin methodologies and networks within the enterprise; however, Chap. 21 (Building and Deploying the Digital Twin) addresses the very important and detailed processes necessary to design, manage, harden, secure, verify/validate and maintain an active DT network and its resident knowledge.
1.3 Modern Enterprise Digital Twin Methodology
Institutionalizing DTs prepares an enterprise to go through a product life cycle much more efficiently at all stages, but at the same time it requires an internal infrastructure (computing, storage, communications, coordination,
organizational processes) that supports DTs, an internal culture built around them, and the capture of critical knowledge that flows across product lines. The enterprise must accept significant structural and cultural changes to succeed with a DT methodology. The authors firmly counsel that adopting these necessary enterprise changes will not be easy and, as such, will require top-down leadership with unity of purpose and unity of command as enterprise goals.
We anticipate that a modern enterprise will utilize the digital twin methodology early in the conceptual phase, where engineers imagine a product and establish initial plans for a technical approach, business case, financial pro-forma, operations, manufacturing, supplier selection, product certification approach, safety, environmental compatibility, quality management, and marketing/sales. It is envisioned that the DT methodology, rules, and processes will be somewhat streamlined during this early phase, while maintaining appropriate levels of accuracy and currency. However, the processes used during this phase must clearly establish the proposed product's key performance parameters (KPPs), including a realistic target price. Work during the conceptual phase also answers questions on how well the product will be accepted in the marketplace, what level of development cost is required, how much schedule delay to market introduction will be tolerated, what sort of cash flows will result and, certainly, what Internal Rate of Return (IRR) can be expected from the product or system. These processes will require concurrent information from multiple sources and models, such as global economic indicators, target customer business needs, applicable market growth rates, competitive technical status, projected disruptive product/systems technology and competitive pricing.
During the product's conceptual phase, an enterprise with a functioning digital twin methodology solidifies the technical and business requirements for the product/system design by trading off a range of design ideas, establishes the fundamental physical and functional descriptions of the product, and delineates its real-time information interfaces and standardized design, analytical and machine learning tools for superior coordination. During this phase, the DT methodology will facilitate concurrent use of first-order, appropriately complex and often historically based, validated engineering models, virtual product abstractions, financial projections, and business analyses, resulting in enhanced communication between the trusted stakeholders and customers. An extremely important deliverable from the conceptual phase is the estimation of the required capital investment for laboratory and manufacturing facilities and the exposure of what additional physics-based models need to be created and validated by the enterprise.
Finally, it is still quite likely that the proposed product/system will not survive the rigorous conceptual phase and will be discarded for various business, financial, and technical reasons or because of a projected lack of customer acceptance. Even during the conceptual phase, there are systems and sub-systems, each with its own DT, that are more than likely to be "reused" in another product without having to be reinvented by another team from scratch. Additionally, products that fail to emerge from the Conceptual Phase at a given time may emerge years, and even decades, later as the enabling technology reaches the required readiness level. This is evident
in the current innovations in artificial intelligence, autonomous vehicles, robotics, cryptocurrencies, genomics, and 3D printing, which had their roots in the internet bubble of the late 1990's.
If a proposed product/system survives a rigorous conceptual phase, the next step in the life cycle is the Product Development Phase, where the development team utilizes detailed systems/design engineering and simulation processes to create the targeted product/system. Here, based on the established critical performance parameters of the product, an enterprise employing digital twin methodology will promulgate detailed model-based system requirements documents, create physical/functional interface models, produce a system architecture model, define the product's embedded information networks, establish internal and external enterprise information networks and nodes, and finalize anticipated methods of product requirements validation. Depending on the complexity of the product, suitable applications of deep physics-based analyses and simulation (up to and including real-time) solidify the design and manufacturing processes. These activities will utilize all the applicable fundamental engineering disciplines, such as systems engineering, electrical & electronics, controls, structures, dynamics, fluid mechanics, thermodynamics, mechatronics/CPS, etc., to create these models, producing a valid product abstraction and eventually a competitive product at a targeted price. These activities also finalize the power and energy requirements of the product and define the sensors, some of which will be smart and self-powered, to provide the necessary system control performance and health status, allowing appropriate levels of diagnostics and prognostics. The digital twin methodology, utilizing model-based design, will help accelerate the product development processes by completely defining a virtual equivalence model of the target product/system. Simulation of the functionality of the design at the system level under nominal and faulty scenarios will occur throughout the product development and qualification lifecycle phases. This uncovers complex interaction issues early, optimizes detection and diagnosis of critical faults, improves failure modes, effects, and criticality analysis, optimizes sensor network design, generates test cases and documentation from models, facilitates tooling/manufacturing, and enhances maintenance/training processes.
For successful implementation of digital twins in an enterprise, it is essential that organizations gain the trust of certifying agencies (e.g., FAA, EASA) and customers on the integrity of enterprise processes and, in turn, the certifying agencies and targeted customers embrace the enterprise's deep physics-based models, tools and processes in supporting eventual product certification/qualification and ultimately the application of the DT processes. Nevertheless, selective physical testing is still needed to validate the models and to qualify the use of products in uncertain environments. Truly mid-twenty-first-century (aerospace, medical equipment, manufacturing) CPS will utilize large amounts of embedded autonomy and AI; adoption of a digital twin methodology in advanced System Integration Laboratories of the enterprise for virtual product representations and real-time operational simulation using global information inputs will greatly enhance this capability. Good enterprise practices
would require a true understanding of the parameters affecting the product's quality through an understanding of the product's anticipated failure modes, sensitivity to component/part tolerances, and their manifestation in critical performance characteristics. Again, the virtual product representation and operational high-fidelity simulations inherent in a digital twin methodology will facilitate these good practices in a timely and affordable manner.
The product's qualification and certification phase begins very early in product development and incorporates processes consistent with the (model-based) requirements and various international and/or national standards. The activity during this phase is necessary to ensure the safe and effective operation of the product; for example, in the case of an aircraft, compliance with FAA and EASA certification regulations and advisory material is essential; consequently, an appropriate component/system design, development, test and evaluation (DDT&E) program will be required for the aircraft systems' software and hardware. Timely feedback of information from the test program will enable the design teams to incorporate the needed modifications in the product. During this time, it is appropriate that all the product development teams take full advantage of available information from related fielded legacy products so that they have a full understanding of existing dominant failure modes; this salient information will also impact the target product's planned approach to health and usage monitoring. This facilitates incipient failure detection, early diagnosis of anticipated issues, prognostics, and proactive system health management. During this phase, relationships and/or partnerships with the trusted global supply base are established; these relationships are essential and often historically based. Currently, it is usual to establish a plan for information flow between the supplier and the enterprise nodes.
Finally, the Product Operations, Manufacturing, Quality Management and Distribution Phase creates detailed plans for delivering a conforming, high-quality, and effective product to the customer in a timely, effective, and affordable manner. All these phases will utilize processes highly dependent on the product's digital description models, including 3D CAD/CAE/CAM models, detailed virtual representations, work instructions, assembly, and quality inspection criteria. As such, timely information flow is necessary between the internal enterprise processes, supplier nodes and customer distribution nodes. Real-time feedback during the Customer effectiveness and support phase will occur so that the design team can begin to understand the product's actual operational experience and observed failure modes from timely (possibly real-time) field data. The data spans automatic embedded diagnosis results from smart (self-diagnosing) systems, remote degradation detection, diagnosis, and prognosis via remote instantiation of multiple embedded sensors (some of which will be smart and self-powered) to glean the fielded product's health status information, and digital twin (model)-guided troubleshooting outcomes. This approach establishes a timely database for sophisticated big data analysis that will greatly assist in a continuous product improvement phase. This use of Big Data anticipates the curation and management of very large amounts of statistical data from fielded product nodes to establish the root causes of product/systems failures or degradations.
This level of
continuous information acuity will also be required to accurately and autonomously project the need for replacement spare parts and to feed the product's condition information back into the supply and manufacturing processes. This facilitates the proactive manufacturing of replacement products and/or parts without requiring excessive asset warehousing.
Finally, all products/systems must eventually enter the Product's End-of-Life and Environmentally Suitable Termination Phase as the product completes its useful life. Establishing explicit plans for obsolescence, eventual product destruction, and/or reuse to satisfy appropriate environmental and legal requirements is a key enterprise activity. To accomplish this effectively will require a large amount of information from global information sources.
It quickly becomes evident that effectively managing the enormity of information throughout an enterprise requires the adoption of the digital twin methodology. Such an information network could eventually involve thousands, if not millions, of field nodes affecting hundreds, if not thousands, of internal and external enterprise processes. Cohesive and collaborative coordination of enterprise processes will require congruent mapping of information flow and models onto the nodes of the information network. The need for robust internal and external enterprise information networks becomes obvious. This vision projects a high bandwidth temporal process, where information updates occur continuously as the product evolves in the field and as product revisions ensue. The real-time flow of information is likely to affect the product's real internal rate of return and cash flows by proactively managing failures and improving component reliability, thereby minimizing warranty claims. We also anticipate that our approach to continuous digital information flow will result in timely product improvement, thereby delaying obsolescence or preventing market disruption by emerging competitors and increasing the global market share. A continuous feed of customer satisfaction information, involving the real-time status of fielded products and the future needs of an expanding customer base, will result from the adoption of a robust digital twin methodology by the enterprise. Timely and dynamic flow of information between the various network nodes of the enterprise should provide significant competitive advantages to participating organizations and related players. Additionally, it is highly likely that digital twin-committed enterprises will have to revamp their traditional organizational structures. Specifically, the organization will have to develop an appropriate level of digital dexterity allowing growth; hence, a DT-committed workforce must be highly motivated and highly trained.
In summary, the entire enterprise will greatly benefit from the adoption of a digital twin methodology, which transcends the product's mechatronics/physical space by linking with the cyber world of embedded software and ubiquitous networking in a revolutionary manner. In the remaining sections of this chapter, the authors' intent is to highlight the following topics. The first eight are dedicated to life cycle management phases; the content describes how the digital twin is used within each phase. While the technology that accompanies the DT is essential, it requires a rethinking of how organizations are structured and coordinated so that they can take full advantage of the value that DTs bring. The use of DTs can be conducted at
different levels of capability and execution, and it is important to understand the traps and difficulties as an organization goes up the learning curve and internalizes the commitment to practice at the highest level possible. We also address the fundamental reasons for adopting DTs: the value they bring to an enterprise in terms of interoperability across the ecosystem and competitiveness, and the existential issues they address in terms of survivability as a business. We conclude with a peek at the exciting prospects that DTs can bring to an enterprise in the future.
2 Enterprise Processes and Phase Reviews in the Age of Digital Twins

Digital Twin (DT) methodology morphs collaboration, coordination, and information flow processes into a knowledge-rich, anywhere-and-anytime continuum spanning the life cycle of the product. We expect that the DT methodology will facilitate a continuous peer review process in which subject matter experts who are not dedicated to the team can serve as reviewers of product development issues in real time, thereby facilitating timely product development modifications and avoiding last-minute issues at formal reviews. Standardization of internal process tools/systems and decision-making processes helps in better coordination and transparent communication of incremental changes across the enterprise. Arguably, design engineers spend as much as 50% of their time seeking information; we expect the knowledge-rich DT network to vastly improve the information acquisition process, thereby greatly enhancing their productivity.
For the last three decades, various enterprise processes have utilized ever more sophisticated physics-based analytical design support and simulation models, as well as somewhat comprehensive product management and Enterprise Resource Planning (ERP) business models. The Digital Twin methodology allows these quasi-static information models to morph into a temporal information continuum, accessible over a very high bandwidth network and spanning the life cycle of the product or system. To conceptualize an effective use of the digital twin methodology throughout an enterprise, it is useful to begin by reviewing the enterprise processes in a product's life cycle and the global information sources affecting the enterprise. Tables 1 and 2 below provide a listing of representative processes and information sources or nodes.
Table 1 Enterprise processes in a product's life cycle

Lifetime enterprise processes:
Concept/feasibility design process
Sales/marketing, projecting product's customer acceptance process
Business case and financial pro-forma process
Detailed engineering design and simulation
Product qualification/certification process
Operations processes (manufacturing, supplier/purchasing, quality management)
Global sales and market penetration process
Financial, business services and human resources processes
Customer service processes and feedback to the technical and operations processes
Product safety and environmental compliance processes

Table 2 Global information sources affecting the enterprise

Global information sources affecting the enterprise:
Global economic indicators, GDP, interest rates, currency status, energy prices, etc.
Applicable market growth rates; impediments, accelerators
Disruptive product/systems technology and pricing
Target customer business needs and financial status
Legacy product/systems operational history, issues, and physical health information
Product/system material cost
Target supplier product costs, technology, quality history, availability, financial viability, and schedule dependability
Target information ownership, patent status, licensing, etc.
Current and emerging certification requirements, e.g., FAA, EASA, ISO 9001
Existing and pending global government regulations
Existing and pending safety and environmental regulations
Customer product/systems operational and usage experience
Product/system health status, i.e., sensor feedback, big data inputs
Traditionally, enterprises conduct formal reviews at the conclusion of each life cycle phase to determine whether the product is suitable to proceed to its next phase. Enterprises conduct a Concept Design Review (CODR) at the conclusion of the product's Conceptual Phase, and a Preliminary Design Review (PDR), followed by a Critical Design Review (CDR), at the conclusion of detailed design activity within the Product Development Phase. Senior-level enterprise technical, business, marketing, and financial experts and, where applicable, participants from customers and certifying agencies conduct these reviews. The instantiation of this vast quantity of knowledge on a trusted DT enterprise network will allow formal reviews to be conducted virtually over an extended period, greatly increasing the reviewers' ability to have a positive impact on the emerging product, while significantly reducing travel costs. Furthermore, we expect that the DT methodology will facilitate a peer review process in which peers not dedicated to the team can serve as reviewers of product development issues in real time, thereby facilitating timely product development modifications and avoiding last-minute issues at the formal reviews. This
encourages peers with alternative proposals to exchange ideas, forcing design teams to articulate their assumptions and justify their choices. Standardization of internal process tools/systems and decision-making processes helps in better coordination and transparent communication of incremental changes across the enterprise.
3 Product Conceptual Phase

During the conceptual phase, the DT methodology will use MBSE-based requirements analysis and platform-based design principles built on relatively straightforward models and abstractions, coupled with financial projections and business analyses. This will accelerate enterprise conceptual design decisions and enhance communication among the stakeholders. MBSE models, data, and knowledge bases become the “hub” for the digital thread from which the DT is derived. This hub must encompass deep physics-based modeling (from enterprise and third-party sources) and substantive supplier content. Integrating an MBSE hub within an enterprise's knowledge base is essential for the robustness of the DT methodology. Use of System Health Management (SHM) twins commencing in the Conceptual Phase is extremely valuable because the controllability of life cycle costs is greatest in the early conceptual phase, when uncertainty in cost estimation is highest and creative advanced design of fault management is a key to cost control in product sustainment. The implementation of a digital twin-based approach to platform-based design, commencing in the conceptual design phase, involves repeated and iterative application of requirements, architecture, model-based design and analysis, and verification and validation; this process continues to be refined at each phase of the design process. This rigorous approach, facilitated by the substantive DT knowledge base and communication with the trusted stakeholders (including the intended customer), will avoid costly requirements creep and late design changes and accelerate the time to market of a quality product.
3.1 Model-Based Design and Digital Twins in the Conceptual Phase

The product's conceptual phase solidifies the technical and business requirements for the product/system design, establishes the fundamental physical and functional descriptions of the product, and sets its real-time information interfaces. The model-based design process, a key component of the DT methodology and described in Table 3 as a “single, quantitative, virtual environment” for the design of next-generation systems, begins in the Product Conceptual Phase and continues extensively throughout the Product Development Phase; it maintains a common and timely knowledge base within the enterprise. During the conceptual phase, the DT methodology will use MBSE-based requirements analysis and platform-based design principles based on relatively simple models and abstractions, coupled with financial projections and business analyses, to make conceptual design decisions and to enhance communication among the stakeholders. Indeed, MBSE becomes the “hub” for the digital thread from which the DT is derived. This hub must eventually encompass deep physics-based modeling (from enterprise and third-party sources) and substantive supplier content. Integrating an MBSE hub is essential for the robustness of the enterprise DT processes. This phase concludes with a fully functional, although not yet completely populated, cyber-physical product model firmly linked to a target set of salient system-level requirements; the subsequent Product Development Phase establishes a completely populated and complex cyber-physical model. Utilization of system requirements and MBSE modeling tools (e.g., DOORS, CAMEO), actively linked to the DT knowledge base throughout a product's life cycle, creates a dynamic and accurately populated summary of all the detailed system- and component-level functional relationships and requirements, together with a requirements validation plan.
Table 3 What is model-based design?

A single, quantitative, virtual environment (“one-stop knowledge source” for a system)
Requirements ⇒ executable specifications in the form of models
Design teams use models to clarify requirements and specs
Elaborate models to develop a detailed design
Simulate the design at the system level, uncovering complex interaction issues early
Automatically generate production code, test cases, and documentation from models
Concurrent engineering to speed up development
Rapidly create, evaluate, select, and implement design concepts
Software-in-the-loop, processor-in-the-loop, and hardware-in-the-loop
Testing can begin at the requirements phase ⇒ defects are caught and removed earlier, lowering cost of development
Enhanced communication and coordination among stakeholders
Reduced development risks (cost and schedule) through improved productivity, quality, and traceability
Models' reusability across product life cycles reduces operational costs
Facilitates contract-based designs
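To make the "requirements ⇒ executable specifications" idea concrete, the minimal sketch below expresses a single system-level requirement as an executable check run against a simple simulation model. All names (Requirement, first_order_response, SYS-REQ-042) and the 2-second settling requirement are illustrative placeholders, not drawn from the chapter or from any particular MBSE tool.

```python
"""Minimal sketch: a requirement expressed as an executable specification.
All names and numbers are illustrative assumptions, not from the chapter."""
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    text: str
    check: callable  # returns True if the simulated behavior satisfies the requirement

def first_order_response(gain, tau, t_end, dt=0.01):
    """Simulate a unit-step response of a first-order plant dy/dt = (gain*u - y)/tau."""
    y, t, trace = 0.0, 0.0, []
    while t <= t_end:
        y += dt * (gain * 1.0 - y) / tau
        trace.append((t, y))
        t += dt
    return trace

# Requirement: output reaches 95% of the commanded value within 2 seconds.
REQ_SETTLE = Requirement(
    req_id="SYS-REQ-042",
    text="System output shall reach 95% of the commanded value within 2 s.",
    check=lambda trace: any(t <= 2.0 and y >= 0.95 for t, y in trace),
)

if __name__ == "__main__":
    trace = first_order_response(gain=1.0, tau=0.5, t_end=5.0)
    status = "PASS" if REQ_SETTLE.check(trace) else "FAIL"
    print(f"{REQ_SETTLE.req_id}: {status}  ({REQ_SETTLE.text})")
```

In a DT-linked toolchain, such checks would be generated from the requirements database rather than hand-written, and re-run automatically as the cyber-physical model matures.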
One approach to dealing with the size, velocity, veracity, and heterogeneity of big data in complex industrial systems is to create human-understandable representations (models, abstractions) of the phenomena that generate these data. Models (physics-based, data-driven, and domain knowledge-based) are thus becoming system clones, virtual testbeds, data-generating functions, and feasible means of ensuring compliance. A common theme across the automotive, aviation, chemical, and smart manufacturing industries is the use of AI and digital twins that continuously adapt to the dynamic, uncertain, and “big” data generated by CPS. In this vein, several design twins can aid in the conceptual design process.

Platform-Based Design Twins Platforms are “time-proven [means] in solving problems similar” to the known problem [12]. A platform-based design process is a modular, integration-oriented design approach that allows enterprises to focus on the differentiating features of their products by reusing verified and validated components of their own or from established vendors, thereby substantially reducing development time, shortening verification time, lowering development costs, and shortening the time-to-market. When coupled with Model-Based Systems Engineering (MBSE) processes and simulation tools, platform-based design twins, embodying functional, structural, and behavioral descriptions, enable enterprises to test complex interactions early, analyze the networked hardware and software performance of CPS, and develop test procedures to verify the design.

Modeling Languages and Tools for Functional and Behavioral Description In contrast to document-based engineering of systems, the general-purpose Systems Modeling Language (SysML) can aid in the reuse of knowledge in the form of design patterns, the formal representation of system requirements in the form of verifiable models, and the functional, behavioral, and structural description of the system [19]. Seamlessly integrated CAD/CAE simulation models based on tools such as Modelica, MATLAB/Simulink, and Amesim can evaluate design variants under different scenarios [27], verify whether the requirements are satisfied, and determine critical functions and interfaces early in the design process. Indeed, such digital twin models reduce ambiguity by formally representing system requirements, functions, structures, and behaviors. Of course, this activity, commencing with simplified abstractions in the conceptual phase, will continue with much more detailed abstractions in the product development phase.

System Health Management (SHM) Twins in the Conceptual Phase Debate exists as to whether fault management should commence in the concept definition phase; the authors believe that this effort should begin, at least at the functional level, in the conceptual phase. The motivation for this stems from the design folklore that 70% of life-cycle costs are committed by the time the conceptual design phase ends, while only 5% of the total program life cycle cost has been expended. This is because the controllability of life cycle costs is greatest in the early conceptual phase, when uncertainty in cost estimation is largest and creative advanced design is a key for
Fig. 2 MBSE and Platform-based Design Approach enables enterprises to test complex interactions early, analyze networked hardware and software performance of the product, and develop test procedures to verify the design (Multiple Interlaced ‘System Vs’ are executed in a DevOps Modeling Paradigm)
cost control [2]. The effort on SHM should expand greatly in the product development phase, as evidenced by human fallacies and design errors [11, 43]. The digital twin-based approach to platform-based design involves repeated and iterative application of requirements, architecture, model-based design and analysis, and verification and validation at each phase of the design process (see the inverted triangles in Fig. 2). The traditional system-level verification and validation (V&V) is performed far earlier in the development life cycle via a DevOps model that involves interlaced ‘System Vs’ at each step in the development process [4]. Under this model, design, development, manufacturing, operations, security, quality management, and sustainment teams are no longer “siloed”; they collaborate as a team across the entire life-cycle processes of a product. This has the added benefit of developing a range of skills among the product's stakeholders that is not limited to a single function within an enterprise. Because the architecture is understood and represented by complex virtual modeling/integration of the sub-systems, it is possible to perform the V&V evaluation early and often, thereby uncovering issues far earlier in the development process. The evaluation process to validate requirements needs detailed multi-domain analyses and increasingly complex testing of hardware and software. Testing will evolve from purely software-in-the-loop to processor-in-the-loop and eventually hardware-in-the-loop. This process will initiate as early as the Product Conceptual Phase and continue throughout the Product Development and the Product Testing, Qualification and Certification Phases. Such an integrated DT-enabled approach to product development and deployment shortens the time from concept to deployment, making speed, agility, reliability, and improved collaboration enterprise attributes.
3.2 Digital Twin Examples in the Conceptual Phase

Examples of the 3D modeling appropriate for the DT methodology in a product's Conceptual Phase are the long-range, high-speed Vertical Takeoff and Landing (VTOL) commercial passenger aircraft and VTOL UAV proposed by Morgan Aircraft and created by a digitally linked, virtually integrated product team; see Fig. 3. Figure 4 shows some of the results of the CFD analyses used to define the UAV, while Table 4 presents the performance results and top-level design criteria for the aircraft resulting from the product conceptual design study. Unfortunately, as of this writing, these products have been unable to gain adequate business support and
Fig. 3 VTOL commercial passenger & UAV conceptual design studies. (Courtesy: Morgan Aircraft)
Fig. 4 Conceptual CFD studies for Morgan aircraft’s VTOL and UAV concepts. (Courtesy: Morgan Aircraft)
Table 4 UAV design criteria

Design gross weight: 7963 lbs.
Useful load: 2683 lbs.
Mission fuel load: 1750 lbs.
Mission payload: 933 lbs.
Range: 600 nm w/30 min on station
300 kt cruise at 35,000 ft.
Vmax: 300 knots
Production A/C designed around ITEP engine
Demo A/C to use GE CT7-8A6 engine
HOGE: SL 90F
HOGE margin: 10%
Rate of climb: will fall out of HOGE margin
Flight controls: Redundant fly-by-wire system
VTOL capability for ship-board and close quarter launch and recovery
Courtesy: Morgan Aircraft
have failed to advance to the next lifecycle phase. The examples illustrate the relatively detailed and complex technical modeling and analysis required to define a product adequately in the enterprise's conceptual lifecycle phase.
3.3 Outcomes of the Conceptual Phase

At the conclusion of the conceptual phase, it is appropriate to have executed the concept's design/definition to a level sufficient to assure appropriate system-level performance and customer acceptance. The technical effort should have completed the initial physical and functional representation, with adequate domain analyses, to assure the product's efficacy and operational suitability. Expected outcomes are a complete understanding of the product's design-level specification, functional description, and requirements, along with detailed plans for how to validate these requirements. Accordingly, the product must pass a formal, rigorous conceptual design and business case review (CODR) before it can advance to the product development phase.
4 Product Development Phase

Digital twin models, platform-based design principles, and an instantly accessible and applicable knowledge/information database will greatly enhance Integrated Product Development in a modern enterprise by introducing an intelligent, currently representative, complex, virtual replica of a cyber-physical system (CPS) in all phases of a system's lifecycle. This exact 3D modeling, coupled with the related system's functional requirements,
specifications, MBSE tools and complex, deep physics-based domain analyses will accelerate and reduce the cost of product development; the approach should facilitate the early discovery of issues/failures, thereby, significantly reducing expensive redesign. Extensive use of SHM digital twins in the product development phase enables the Project Team and operational planners to anticipate the expected health and failure modes of the system as well as to identify incipient technical, business, and financial “risks” to expected field operations. Additionally, the DT SHM process should readily project the efficacy of various mitigation strategies in reducing the risks to the operational and business goals (effects). This DT capability enables the product development team, business leadership and potential customers to view the anticipated risk/cost/schedule profile for various mitigation strategies (e.g., changing system configuration modes) in a much-accelerated and cost-effective manner.
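As a purely illustrative sketch of the risk/cost trade just described, the snippet below ranks hypothetical mitigation strategies by their expected cost: probability of failure times consequence, plus the cost of the mitigation itself. The strategy names and numbers are invented for illustration; a real SHM twin would derive them from its models and fielded data rather than from a hand-written table.

```python
"""Illustrative sketch (hypothetical numbers): ranking mitigation strategies by
expected cost, the kind of risk/cost trade an SHM digital twin could automate."""

strategies = {
    # name: (probability of mission-affecting failure, consequence cost $, mitigation cost $)
    "no action":              (0.08, 500_000,      0),
    "add redundant sensor":   (0.03, 500_000, 40_000),
    "derate operating point": (0.02, 500_000, 90_000),  # includes lost-performance cost
}

def expected_cost(p_fail, consequence, mitigation):
    # Expected loss from failure plus the up-front cost of the mitigation
    return p_fail * consequence + mitigation

ranked = sorted(strategies.items(), key=lambda kv: expected_cost(*kv[1]))
for name, params in ranked:
    print(f"{name:>22}: expected cost ${expected_cost(*params):>9,.0f}")
```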
From the 1990s through today, enterprises have introduced elements of digital product creation using digital tools. Representative examples include Product Data Management software, Enterprise Resource Planning (ERP) business models, CAD/CAM/CAE tools (e.g., Unigraphics, CATIA), numerically controlled cutting machines, and stereo-lithography processes (early 3D printing using non-structural isotropic materials cured in layers by exposure to ultraviolet light). See Fig. 5 for an example of a digital mockup of the Comanche RAH-66 helicopter. Digital twin models and platform-based design principles will greatly enhance Integrated Product Development in a modern enterprise by introducing an intelligent, currently representative, complex, virtual replica of a cyber-physical system (CPS) in all phases of a system's lifecycle. This exact 3D modeling, coupled with the related system's functional requirements, specifications, MBSE tools, and complex, physics-based domain analyses, will accelerate and reduce the cost of product development; the approach should facilitate the early discovery of issues/failures, thereby significantly reducing expensive redesign. We expand on some of these positive impacts on the Product Development Phase in this section.
4.1 Multi-domain Modeling and Analysis

Design teams, especially in the aerospace industries, need large amounts of knowledge and information to complete the detailed design in an integrated product development phase. This information is often proprietary to the enterprise and/or suppliers, and strict policy rules, instantiated on the secure digital twin network, control access to it. Examples include the enterprise's historical and technical knowledge and experience on a legacy product, statistical material characterization, unique
Fig. 5 Comanche RAH66 electronic digital mockup of the forward fuselage. Digital twins with much richer knowledge content greatly enhance the integrated product development in a modern enterprise. (Courtesy: Sikorsky Archives)
manufacturing/tooling processes, statistical material allowable strength details (i.e., allowable graphite strain levels), and material electromagnetic properties. Additionally, cyber protection of proprietary design/analysis methodologies and optimization techniques that are resident and detailed in the DT network is salient. This includes data/information/knowledge provided by trusted suppliers, often in response to the design team's request for information and product specifications. Some examples of multi-domain modeling and analysis methods used by detailed design teams include unique aerodynamic/performance predictive methodologies, structural/fatigue (low and high cycle) analysis methods, dynamic/vibratory response and mode prediction analyses, thermal management methods, unique control schemes, wiring design, component/system weight management approaches, and software/hardware verification/validation methodologies. It is necessary that the detailed design teams have available the loads needed to perform relevant structural calculations; these loads are often in historical databases based on previous legacy data, where detailed component or complete system testing has been accomplished in a real mission environment. Again, all this technical data/information/knowledge is proprietary to the enterprise, and the “need to know” access controls of the DT methodology must be in place for its protection. Where historical loads are not available or applicable, subject matter experts create this information using rigorous multi-domain model-based analyses linked by the DT network to an updated 3D cyber-physical model. Supplier-provided physical/functional information is often directly instantiated in the updated 3D cyber-physical model; hence, its accuracy is essential. In the case of a fixed- or rotary-wing aircraft, an extremely detailed, preferably elastically representative, aeromechanics
digital model predicts the aircraft's loads throughout the planned mission's operational flight spectrum. By effectively “flying” the product's digital representation throughout the mission envelope using digital simulation, the design team establishes the predicted loads. Such digital twin simulations exercise pitch, roll, and yaw characteristics, take-offs and landings, and the extreme maneuvers necessary to assure adequate flight operation and the aircraft's structural efficacy. This digital twin-based simulation approach also results in the ability to predict the aircraft's anticipated handling qualities and flight characteristics for an evolved set of control laws. The DT methodology and network facilitate the flow of flight control information directly between the aircraft system's design team and the Flight Control Computer (FCC) supplier. Automatic generation of FCC software code through “pictures to code” (models to code) techniques [39] and the exchange of data among trusted partners speed up the V&V process. The functional and behavioral models in SysML are updated during this phase to include structural aspects of the product.

An excellent example of how an enterprise can use deep physics modeling to optimize design and maximize component life has been developed by Sentient Science. This team developed a Digital Twin (Clone) Multiscale Framework, illustrated in Fig. 6. This approach is knowledge-rich and takes information from the atomic level, through the microstructure, to the individual component, subsystem, and eventually the entire asset; furthermore, the framework goes on to develop fleet-wide knowledge and eventually information that will assist the complete enterprise. Instantiated in the framework are multiple digital twins for material simulation, engineering, additive manufacturing, and operations/maintenance. An example of digital twin engineering modeling is a virtual gearbox model, illustrated in Fig. 7, which applies loads, identifies critical gears and bearings, and obtains ISO-standard lifing estimates. Additionally, it conducts comprehensive dynamic analyses of critical gears and bearings and identifies potential structural hotspots. The model contains a large material characterization database for critical gears/bearings and generates virtual stochastic microstructure and surface traction models for input into a component lifing simulation. The result yields stress/strain levels resolved to the microstructure level, establishes when damage will occur, and predicts a statistical distribution of failure times dependent on cyclic load application [37].
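To give a flavor of the kind of lifing calculation such a framework automates, the sketch below combines a Basquin-form S-N curve with Miner's cumulative damage rule to estimate missions to failure under a cyclic load spectrum. The material constants and load spectrum are illustrative placeholders, not Sentient Science data, and real lifing models resolve damage at the microstructure level rather than with a single closed-form curve.

```python
"""Minimal lifing sketch: Basquin S-N curve plus Miner's rule (illustrative values only)."""

# Basquin relation: N(S) = (S / A) ** (1 / b), with fatigue exponent b < 0.
A, b = 1200.0, -0.10   # placeholder: stress (MPa) at one cycle to failure, fatigue exponent

def cycles_to_failure(stress_mpa):
    """Cycles to failure at a constant stress amplitude."""
    return (stress_mpa / A) ** (1.0 / b)

# Notional mission load spectrum: (stress amplitude in MPa, cycles per mission)
spectrum = [(300.0, 2_000), (400.0, 500), (500.0, 50)]

# Miner's rule: total damage = sum of (applied cycles / cycles to failure) per load level
damage_per_mission = sum(n / cycles_to_failure(s) for s, n in spectrum)
print(f"Damage per mission: {damage_per_mission:.4f}")
print(f"Predicted missions to failure: {1.0 / damage_per_mission:,.0f}")
```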
4.2 GPUs for Multi-domain Analysis

An essential modeling tool in the development of aerospace products is hyper-efficient Computational Fluid Dynamics (CFD) simulation. An example of this modeling tool, built on the accurate digital twin 3D physical model, is the development of modern turbomachinery using Graphics Processing Unit (GPU) accelerators. In traditional aerospace product development, the use of traditional CFD tools is expensive and may require days or even weeks to run due to their massive
Fig. 6 Digital twin multi-scale framework. (Courtesy: Sentient Science, 2021)
Fig. 7 Digital twin modeling of a gearbox. (Courtesy: Sentient Science, 2021)
appetite for processing power; consequently, the use of CFD simulations is limited to investigating extremely complex issues, because design teams on a tight development schedule cannot debug a model that takes weeks to simulate. To enable increasing levels of high-fidelity CFD simulation earlier in the design cycle, an order-of-magnitude computational speedup is required; additionally, the approach must fit within the computational budget with no loss in numerical accuracy. Traditional CPU-based infrastructure has blocked progress, as it must remain general-purpose and handle legacy applications. Just as in deep learning, the
Fig. 8 GPU CORE Advantages over a CPU CORE for CFD Simulations. Massive Parallelism offered by GPU accelerators will Revolutionize Design Simulations. (Courtesy of Aerodynamic Solutions [1])
Fig. 9 GPU-based CFD case studies. GPU-based design simulations greatly enhance the understanding of the product’s behavior early in the design cycle (Courtesy of Aerodynamic Solutions [1])
massive parallelism offered by GPU accelerators will completely revolutionize this situation. Figure 8 clearly describes these GPU advantages. Adoption of GPUs will speed up CFD computations by one to two orders of magnitude and lower the computational cost. Indeed, they will “maintain the integrity of the flow solver so that existing validations are not lost; get the highest parallel efficiency by migrating as much computation to the GPU and enable High Performance Computing (HPC) scalability by implementing highly optimized parallel communication” [1]. Some case studies illustrating the computational efficiency and cost advantages of using GPU accelerators are shown in Figs. 9, 10 and 11. We firmly expect that this accelerated computational approach will be applicable to multiple domains involving complex computational analysis, such as aerothermodynamics, non-linear dynamic Nastran (structural analysis), electromagnetics, complex heat transfer/thermal studies, and hybrid physics-based and data-driven learning. The result will be the continuous modeling of these applicable domain studies as the cyber-physical model matures throughout the product development phase.
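The toy sketch below illustrates the offload idea: the same array-based stencil kernel runs unchanged on the CPU with NumPy or on a GPU with CuPy, which mirrors the NumPy API for the calls used here. It assumes the optional cupy package is installed, and it is nothing like a production CFD solver; it is only meant to show how little the numerical code changes when the computation is migrated to the GPU.

```python
"""Toy sketch of GPU offload for a stencil kernel (not a real CFD solver).
Assumes the optional `cupy` package is available; it mirrors the NumPy API
for the calls used here, so the same kernel runs on CPU or GPU."""
import numpy as np
try:
    import cupy as cp      # GPU path; falls back to CPU-only if unavailable
except ImportError:
    cp = None

def jacobi(xp, n=512, iters=200):
    """Jacobi relaxation of a 2-D Laplace problem using array module `xp`."""
    u = xp.zeros((n, n), dtype=xp.float64)
    u[0, :] = 1.0                      # fixed boundary condition on one edge
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
    return u

u_cpu = jacobi(np)                      # CPU reference
if cp is not None:
    u_gpu = jacobi(cp)                  # identical code path on the GPU
    print("max |CPU - GPU| =", float(np.max(np.abs(u_cpu - cp.asnumpy(u_gpu)))))
```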
Fig. 10 Computational and cost advantages of compressor stage analysis using GPUs vs CPUs. GPUs enhance productivity and understanding at reduced cost. (Courtesy of Aerodynamic Solutions [1])
Fig. 11 Computational and cost advantages of casing treatment analysis using GPUs vs CPUs. (Courtesy of Aerodynamic Solutions [1])
4.3 System Health Management (SHM) Twins in the Product Development Phase

What Is SHM and Modeling Methods SHM is an engineering activity focused on safety analysis, risk assessment and mitigation during system design, and the use of the resulting digital twin models for detection, isolation, diagnosis, and response/recovery/resilience during system operations [33]. SHM is particularly difficult to design, implement, and verify because it must respond to hundreds or thousands of different failure scenarios; these cut across the various subsystems and disciplines, and span hardware, software, and operational mechanisms. The organizations
needed to manage and implement these processes are equally diverse, requiring integration from cross-program Integrated Product Development Teams (IPDT) composed of design and development disciplines such as systems engineering, software engineering, safety assurance, operations, maintenance, and so on. Given the requirements and the structural, functional, and behavioral product descriptions in SysML, automated generation of multi-attribute graphical digital twins from SysML, for designing products for quality, safety, reliability, serviceability, and life cycle cost minimization, is essential. SHM enhancements to the model generated from SysML include the addition of potential failure modes and their manifestations. The resulting model includes components, hierarchy, interconnections, functional attributes, sensors, effects, system configurations, cyber-physical representations, and all the relevant data to conduct serviceability, safety, and reliability analysis. Representative attributed graphical model-based digital twins include dependency graphs [38], multi-functional models [15, 16], Petri nets [6], and Bayesian networks [14, 30]. A virtue of graphical models is their ability to integrate knowledge from varied modeling approaches, operational data, and domain experts through deterministic or probabilistic cause-effect relationships. Inference on graphical models can isolate root causes by propagating the effects of failures across subsystems and reasoning on the probabilistic outcomes of tests. Graphical models are easy to understand and relate to, provide visual feedback of fault propagation paths across system boundaries, combine FMECA, fault tree, and serviceability analysis, and can scale to large systems with as many as 30,000 failure sources [15]. More significantly, these models are refined and reused in later phases to quantify the time/cost to diagnose faults onboard, by automated test systems, and by manual troubleshooting, thereby minimizing the gap between design and operations. It is important to evolve all these representations from a single model, as they relate to the failure-space behavior of the product. These graphical model abstractions are greatly refined during the Product Testing, Qualification and Certification Phase.

The benefits of such an integrated approach to system health management are the following. Fault tree analysis from a single source of truth minimizes the probability of undesirable effects by identifying single points of failure and designing in fault-avoidance/fault-tolerance/reconfiguration options to mitigate such effects through highly reliable components, physical/analytical/information/logical redundancy, and fault-tolerant control. Fault propagation across subsystem and domain boundaries identifies cross-subsystem failure effects previously unknown or unimagined by the design team. Analysis at the design stage identifies gaps and blind spots, such as undetectable or hidden/latent faults. Real-time reasoning identifies root cause(s), instead of a “Christmas tree” effect or message storm, and facilitates system health management for dynamic fault-tolerant systems.

Causal Models As part of the Development Phase, it is appropriate, and even necessary, to perform complex Failure Modes and Effects Analysis (FMEA). To accomplish this in the DT-impacted enterprise, multi-functional causal modeling is a useful approach. Causal reasoning is essential for machines to communicate with us in our own language and explain the causes of things [41].
Causal models provide a
natural way to combine fault-test dependencies from multiple DT-based sources (physics-based models, fault simulations, operational data, and qualitative/subjective observations) via the diagnostic dictionary (error correcting code (ECC) matrix) to conduct serviceability, safety, and reliability analysis and to perform onboard diagnosis and guided troubleshooting. A complex system representation in these models involves its components, hierarchy, interconnections, functional attributes, sensors, effects, system configurations, cyber-physical representations, and all the relevant multi-modal data (see Fig. 12; also see Figs. 13, 14, 15 and 18). Analysis on this graphical model propagates the effects of faults across subsystems, the affected system functionality, and fault propagation delays. This helps in systems engineering, FMECA, fault tree analysis, sensor network optimization, serviceability analysis, and fault management and recovery. The model forms the basis for evaluating the lifecycle cost impact of adding or removing diagnostics and for designing diagnostic and prognostic architectures ranging from remote data acquisition to tele-diagnosis to onboard diagnosis to onboard prognosis. Graphical model inference and sequential testing algorithms [36] use this causal model for embedded diagnosis, tele-diagnosis, and guided troubleshooting. NASA has used this DT-facilitated approach to uncover issues for space launch systems. Eventually, after fielding the product, the enterprise can compare these model-based predictions with an actual field-usage Failure Reporting, Analysis, and Corrective Action System (FRACAS) database, also linked on the digital twin network. FRACAS captures the failures reported in the field and conducts extensive root cause analysis for continuous and verifiable improvements of the product [3, 25]. The result of this digitally linked adaptive knowledge process is greatly enhanced product robustness. Multi-functional causal models can answer two broad classes of queries: (1) Forward causal questions, or the estimation of “effects of causes.” For example,
Fig. 12 Multi-functional causal models enable hierarchical system representation and analysis. (Courtesy, Qualtech Systems, Inc., 2020)
Fig. 13 Diagnostic & prognostic process. (Model – Sense – Detect – Infer – Predict – Adapt)
Fig. 14 Integrating multi-source models for detailed SHM analysis and design. (Courtesy, Qualtech Systems, Inc., 2020)
the forward propagation algorithm computes how a physical or functional fault in a component or subsystem propagates to the observables (e.g., error codes, symptoms, and alarms from an anomaly detector). (2) Reverse causal inference, or the search for
Fig. 15 Intelligent systems sense, assess, anticipate, and respond to unforeseen situations, making the SHM process intelligent and proactive
“causes of effects,” involves the inference (identification, diagnosis, root cause analysis) of the most likely evolution of the system states, given the (possibly uncertain) outcomes of a sequence of tests in various regimes of system operation. Indeed, multi-functional causal models can answer queries on the ladder of causation: observational (seeing), interventional (doing), and counterfactual questions (retrospective thinking, “what if”, imagining), whereas data-driven techniques, in isolation, just fit functions to the data (the seeing rung of the ladder of causation) [31].

Intelligent SHM Process via Causal Models An intelligent SHM process based on causal models contains six major steps: model, sense, develop and update test procedures, infer, adaptive learning, and predict (see Fig. 13). Execution of steps 2 and 3 requires iteration.

Step 1: Model In this step, one develops multi-functional causal models to understand the fault-to-error characteristics of system components.

Step 2: Sense Typically, system control and performance dictate sensor suite design. A systematic and quantitative evaluation of these sensors reveals whether the existing sensors are adequate for diagnosis/prognosis. If not, one considers adding sensors and/or analytical redundancy without affecting system control, performance, weight, volume, and other system metrics. Diagnostic analysis by multi-functional causal modeling tools can compare and evaluate alternative sensor placement schemes.

Step 3: Develop and Update Test Procedures This step develops smart test procedures to detect failures, or onsets thereof, that minimize Type I and Type II errors (false alarms and missed detections). The anomaly detection procedures should have the capability to uncover trends and degradations and to assess the severity of a failure for early
warning. This step typically uses signal threshold tests or advanced signal processing, statistical hypothesis testing, and machine learning techniques for anomaly detection. The enhanced fault-to-error-to-symptom model enables one to compute percent fault detection and isolation measures, identify redundant tests and ambiguity groups, and generate an updated Failure Modes, Effects and Criticality Analysis (FMECA) report and the diagnostic tree for troubleshooting. It also exports the D-matrix, the test code, and structural information to onboard, real-time diagnosis and guided troubleshooting.

Step 4: Adaptive Learning If the observed fault signature does not correspond to faults modeled during design, active learning techniques identify new cause-effect relationships and update the multi-functional causal model. This aids the FRACAS process.

Step 5: Inference Evaluation of system health involves an integrated onboard and off-board reasoning system capable of fusing results from multiple sensors/reasoning systems and operator observations. This reasoning engine and the test procedures must be compact enough for embedding in the control units and/or a diagnostic maintenance computer. A seamless transfer of the onboard diagnostic data to a remote diagnostic server can guide maintenance personnel in root cause analysis (by driving intelligent and adaptive interactive electronic technical manuals), diagnostic/maintenance data management, logging, and trending.

Step 6: Predict (Prognostics) Lifing algorithms, which interface with onboard usage monitoring systems and parts management databases, predict the remaining useful life of system components. Integration of supply-chain management systems and logistics databases with the diagnostic server facilitates enterprise-wide proactive system health and asset management.

Role of the SHM Digital Twin in Design and Operations Figure 14 shows how a multi-functional digital twin can serve as a central enterprise platform, furnishing key analytic capabilities that will enable an enterprise to reduce the costs of conducting SHM activities, increase the autonomy of CPS, and conform to time and cost schedules. Once the SHM digital twin translates and integrates multi-domain models and data from various sources, such as SysML, CAD/CAE/CAM, FMECA, spreadsheets, etc., into a multi-functional causal model, it can serve as an SHM support tool in a coherent, consistent, and standardized manner. The SHM digital twin can serve as a visualization tool providing functional, structural, requirements, and behavioral views of the system, aid in identifying design mechanisms to mitigate the effects of failures, and visualize the response/recovery mechanisms. It provides useful feedback to the requirements community early in the SHM design process to detect failures and mitigate their effects through appropriate abatements and management of faults. The SHM digital twin supports fault management analysis via fault tree analysis (FTA), Failure Modes, Effects and Criticality Analysis (FMECA), Reliability, Maintainability & Testability (RM&T) analysis, onboard diagnostics,
guided troubleshooting, and prognostics. Use of the SHM digital twin spans the conceptual, preliminary, and detailed design, integration, design V&V, manufacturing, and operations stages of a system's life cycle. It enables operational planners to view the current health and time-to-failure of the system in real time, as well as incipient “risks” to current operations. In addition, it projects the likelihood that various mitigation actions will reduce the risks to the operational goals (effects). This capability enables system operators to view the risk profile for various mitigation strategies (e.g., changing system configuration modes) and to choose the one that best mitigates the risk. The SHM digital twin automates test suite generation to cover critical system failures and computes SHM metrics, such as fault detection/isolation, false positive (FP) and false negative (FN) rates, and failure response effectiveness, under various failure scenarios. Finally, the SHM digital twin enables virtual qualification of the product by verifying whether the product has met or exceeded its required fault detection, isolation, response/recovery, reliability, and availability specifications.
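A toy illustration of the diagnostic dictionary (D-matrix) idea underlying such causal reasoning is sketched below: rows are failure modes, columns are tests, and a 1 means the test detects the fault. The matrix, fault names, and test names are invented for illustration and are not drawn from Qualtech Systems or the authors' models; real D-matrices are far larger and are generated from the multi-functional causal model rather than written by hand.

```python
"""Toy sketch of fault isolation with a diagnostic dictionary (D-matrix).
All fault/test names and entries are illustrative placeholders."""

faults = ["pump_wear", "valve_stuck", "sensor_bias", "controller_sw"]
tests  = ["t_pressure", "t_flow", "t_bias_check", "t_bit"]

D = [  # D[i][j] = 1 if fault i causes test j to fail
    [1, 1, 0, 0],   # pump_wear
    [0, 1, 0, 0],   # valve_stuck
    [0, 0, 1, 0],   # sensor_bias
    [0, 0, 0, 1],   # controller_sw
]

def consistent_faults(observed):
    """Return single faults whose signature matches the observed pass/fail vector."""
    return [f for f, row in zip(faults, D) if row == observed]

# Design-time coverage checks: is every fault detectable, and is every signature unique?
detectable = all(any(row) for row in D)
isolable   = len({tuple(row) for row in D}) == len(D)
print(f"Fault detection coverage complete: {detectable}")
print(f"All single faults uniquely isolable: {isolable}")

# Example observation: pressure test passed, flow test failed, others passed.
print("Candidates:", consistent_faults([0, 1, 0, 0]))   # -> ['valve_stuck']
```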
4.4 Digital Twins, SIL and Human-in-the-Loop Cockpit Simulation

An important part of the product development process, linked by the DT methodology, is the use of System Integration Laboratories (SILs) and fixed/moving-base simulators. The System Integration Laboratory is an important product development tool for validating the efficacy of the complete product software and hardware. Simulation in the SIL usually begins with a non-real-time engineering desktop approach, representative of the product's architecture and control laws, and evolves into a fully integrated hardware-in-the-loop integration, where many test cases and failure scenarios are evaluated for compliance with the product's requirements. In the case of a fixed- or rotary-wing aircraft, it is usual to emulate the airframe, control surfaces, rotor/wing, and engine with a fully effective nonlinear aeromechanics and aerothermodynamics digital model resident in the SIL processors; these complex models are updated in near real-time through the updated domain knowledge resident in the DT information network. As the numerical representation of a complex product is largely nonlinear, stability evaluation is performed in the time domain rather than the frequency domain, initially in non-real time and eventually in near real-time. In its final manifestation, the SIL, where possible, will instantiate most of the target product's electrical components, hydraulic systems, Flight Control Computers (FCC), Full Authority Digital Engine Controls (FADEC), control position sensors, and feedback systems. In cases when some of these systems are not available, complex emulations are used. Consequently, successful fielding of a complex product and its resulting efficacy depend on the adequate representation of the product's architecture in the SIL and the execution of thousands of operational test cases, including the evaluation of embedded component- and system-level failures. It is evident that the SIL is an essential tool for the verification of software
requirements and validation of the integrated hardware and software in the design/development process “V” diagram presented in Fig. 2 of Sect. 3. Aircraft simulation eventually involves using a fully populated or emulated cockpit instrument display, physical controls, system advisory status display, and an outside terrain/obstacle display representation directly linked, in real time, to a digital representation of the aircraft performance and functionality through the SIL. The digital twin methodology allows near real-time updates of the current knowledge resident within the aircraft's numerical cyber-physical representation based on the human-in-the-loop cockpit simulation in the SIL. Complex representative simulators are either “fixed base”, as described above, or “moving base”, which adds physical motion to the cockpit. The latter simulates flight conditions, as well as an emulation of the physical vibration and acoustics experienced in a real aircraft cockpit. Here the human operator can experience true emulated flight over a given mission and can evaluate their response to embedded or unanticipated system/component failures. In summary, the incorporation of the DT methodology by the enterprise is expected to greatly improve the effectiveness of the System Integration Laboratory and the “human-in-the-loop” simulator as useful tools for modern product development.
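The sketch below gives a minimal flavor of a software-in-the-loop test harness: a closed-loop test case drives a simulated plant standing in for the aircraft model in the SIL, and the same test case could later drive a class wrapping real hardware I/O for hardware-in-the-loop testing. The plant, controller, and pass criterion are deliberately trivial and purely illustrative.

```python
"""Minimal sketch of a software-in-the-loop harness: the same test case drives
either a simulated plant or (later) a hardware interface exposing step()/read().
All class and signal names are illustrative."""

class SimulatedPlant:
    """Integrator plant standing in for the aircraft pitch-rate dynamics in the SIL."""
    def __init__(self, dt=0.02):
        self.rate, self.dt = 0.0, dt
    def step(self, command):
        # Command acts as an angular acceleration applied over one time step
        self.rate += self.dt * command
    def read(self):
        return self.rate

def proportional_controller(target, measured, gain=2.0):
    return gain * (target - measured)

def run_test_case(plant, target=1.0, duration_s=3.0, dt=0.02):
    """Closed-loop run; returns True if the response settles near the target."""
    t = 0.0
    while t < duration_s:
        cmd = proportional_controller(target, plant.read())
        plant.step(cmd)
        t += dt
    return abs(plant.read() - target) < 0.1

if __name__ == "__main__":
    # Hardware-in-the-loop would swap SimulatedPlant for a class wrapping real FCC I/O.
    print("Test case PASS:", run_test_case(SimulatedPlant()))
```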
4.5 Digital Twins in Product Development at GE

One company that seems to be focusing on the advantages of the Digital Twin methodology is GE, which provides a helpful overview of its Digital Twin analytic approach [35]: “At its core, the Digital Twin consists of sophisticated models, or a system of models based on deep domain knowledge of specific industrial assets. The Digital Twin is informed by a massive amount of design, manufacturing, inspection, repair, online sensor, and operational data. It employs a collection of high-fidelity computational physics-based models and advanced analytics to forecast the health and performance of operating assets over their lifetime”. Tables 5 and 6 provide some examples of GE's Digital Twin modeling focus [35].
Table 5 Digital twin modeling focus at GE

Lifing: Capital equipment predictive reliability models for personalized intervals, dispatch tradeoffs, and long-term outage planning
Anomaly: Physics- and data-driven models for prognostics, early fault detection, and asset-specific failure mode management to reduce unplanned downtime
Thermal: Plant thermal cycle models to make informed operational tradeoffs, manage degradation, and improve efficiency over the load profile
Transient: Physics and predictive models for achieving the best plant operational flexibility while managing equipment and site constraints

Courtesy: GE Digital and Forbes Technology Council
Table 6 GE digital twin technologies: physics-based, AI and sensors
Physics-based models: Deep physics models for flow, thermal, combustion, and mechanical aspects of the power equipment to provide unprecedented insights into equipment operation. Some examples include:
Performance models
Anomaly detection models and techniques
Lifing models
Microstructure models
Dynamic estimation and model tuning
Configuration management

Artificial intelligence: AI technologies that leverage data from equipment to generate insights and a deeper understanding of the operating environments. They include:
Pattern recognition
Unstructured data analytics
Multi-modal data analytics
Knowledge networks

Enabling sensing technology: Innovations in data sensors, designed to work under harsh and difficult environments, provide the information to drive the analytic models. Examples include:
Printed sensors
Inspection technologies
Atmospheric/weather data
Plant component analytics

Courtesy: GE Digital and Forbes Technology Council
5 Product Testing, Qualification and Certification Phase

The advent of the DT methodology, with near-exact, current cyber-physical and temporally representative system models, allows substantive ground test facilities to be linked directly to a System Integration Laboratory (SIL), either physically or even virtually in real time, thereby greatly facilitating the operation of the entire integrated hardware/software system in a simulated flight environment. This approach can result in significant cost, schedule, and risk reduction during the flight test program, with an attendant increase in safety. The DT methodology facilitates near real-time updates of the current knowledge and deep physics-based models within an aircraft's numerical cyber-physical representation incorporated in the machine (AI) or human-in-the-loop simulation resident in the SIL. Accordingly, the incorporation of the DT methodology by the enterprise is expected to greatly improve the configuration and flight performance accuracy of the SIL and the “human-in-the-loop” simulator as useful tools for modern product development, thereby greatly enhancing product performance and safety, while substantially reducing cost and schedule delays during the subsequent flight test program.
It is extremely important to emphasize the essential role of the appropriate certifying agency (e.g., FAA, EASA) and targeted customers in embracing the enterprise’s deep physics-based models, tools, and processes in supporting eventual product certification/qualification and ultimately the application of the DT process. It is equally vital for the enterprise DT processes to be transparent to the certifying agencies and customers to gain their trust.
5.1 Digital Twins in Product Testing

Complex products usually require extensive qualification test programs to verify their function, performance, and reliability under several operational scenarios and to satisfy highly detailed and rigorous certification requirements. Physics-based, data-driven, and knowledge-informed models, instantiated in the DT methodology, should greatly aid the product testing, qualification, and certification phase. In Sect. 4, we outlined how SHM, SIL, and related simulation processes provide failure scenarios (ranked by criticality and frequency of occurrence) and virtual qualification, and how they enable an effective software verification and system-level validation (V&V) program linked through the DT methodology. Additionally, it is usually necessary to carry out complex component- and system-level tests; this is especially true in the case of an aircraft or helicopter. Using the latter as an example, it is usual to include extensive individual component testing, appropriately designed to satisfy the N + 1 rule, which requires that the tested component be attached to representative adjacent aircraft parts to assure that appropriate boundary conditions are satisfied. Examples of some of these components include main and tail rotor blades and attachments, rotor hubs, shafts, mechanical control linkages, transmissions, landing gears, dampers, and control servos. Where appropriate, multiple components are evaluated in a combined test facility. Where fatigue loading is prevalent, it is necessary to test samples of components to assure statistical accuracy of the results and to accelerate the loads necessary to accrue cumulative damage at a reduced number of cycles; this facilitates the development of a statistically sound fatigue S-N (stress-cycles) curve in a realistic time schedule. Once again, controlled access to and protection of the enterprise's proprietary knowledge of the testing process, the creation of testing facilities, and the test results on the DT network is salient. The component weaknesses and failures that surface will be instantiated in the DT database for later comparison with the fielded product's FRACAS data. Where appropriate, addressing component issues expected to surface under a real-world operating spectrum may result in a model update and redesign to improve the component's life or robustness. More complex mechanical testing often includes the development of an airframe static and/or fatigue test article and a Propulsion System Test Bed (PSTB). The PSTB facilitates testing all the rotorcraft's engines, transmissions, shafts, controls, and dynamic systems, together with an operative rotor system under power. The
intent of this complex integrated hardware program is to uncover design flaws and integration failures before the final aircraft is flight-tested. The advent of the DT methodology allows the linking of the PSTB directly to the SIL, either physically or even virtually in real time, allowing the operation of the entire integrated hardware/software system in a simulated flight. Aircraft flight testing represents the final step in the testing, qualification, and certification phase; here a rigorous flight test program assures the aircraft's airworthiness, handling qualities, structural efficacy, and operational suitability. All these test and qualification programs generate a huge amount of data, which requires management and mining to glean domain knowledge using the DT methodology.
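As a small illustration of what such fatigue test data feed, the sketch below fits a Basquin-form S-N curve, S = A·N^b, to a handful of notional component test points by linear regression in log-log space. The data points are invented for illustration; a real program would fit scatter from many samples and apply statistical reduction factors before publishing allowables.

```python
"""Minimal sketch of fitting a Basquin S-N curve, S = A * N**b, to fatigue test
results by linear regression in log-log space. Test points are illustrative."""
import math

# (cycles to failure, stress amplitude in MPa) from notional component tests
tests = [(1e4, 620), (5e4, 540), (2e5, 470), (1e6, 400), (5e6, 350)]

logN = [math.log10(n) for n, _ in tests]
logS = [math.log10(s) for _, s in tests]
n = len(tests)
mean_x, mean_y = sum(logN) / n, sum(logS) / n

# Least-squares slope (fatigue exponent b) and intercept (coefficient A)
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(logN, logS)) / \
    sum((x - mean_x) ** 2 for x in logN)
A = 10 ** (mean_y - b * mean_x)

print(f"Fitted Basquin curve: S = {A:.0f} * N^{b:.3f}")
print(f"Allowable stress at 10^7 cycles: {A * (1e7) ** b:.0f} MPa")
```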
5.2 Update of SHM Digital Twins in Product Testing

Product testing provides a host of useful SHM metrics, such as detection, isolation, and response/recovery delays, false alarms, missed detections, misdiagnoses, fault ambiguity groups, fault propagation paths and their delays, troubleshooting times, and so on, that are stored in a DT network database. This data helps the SHM design team update the causal models. The updates may include model changes on the cause and/or effect side. For example, this may involve changes to the failure mode specification of components (e.g., failure rate, characterization in terms of functional effects, criticality level), fault propagation delays, and level of repair. On the effect side, this may involve adapting test thresholds, test type (onboard versus off-board, automatic versus manual), test delays, and test reliability. These updates may involve new fault-test relationships not anticipated in the product development phase. In rare cases, this may involve redesigning the SHM system architecture (e.g., remote data acquisition versus tele-diagnosis versus onboard diagnosis versus onboard prognosis). In this vein, the modeling artifacts and active learning tools resident in the enterprise DT network, and the data generated during the earlier phases of the design cycle, will aid the SHM team in updating the SHM system and making it intelligent and proactive (see Fig. 15).
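One concrete, if simplified, example of such an update is re-setting a detection threshold from healthy-condition data gathered in testing so that the expected false-alarm rate matches a target. The data, target rate, and feature below are placeholders; a production update would also re-estimate the test's entry in the causal model (reliability, delay) rather than just its threshold.

```python
"""Illustrative sketch of one SHM model update: re-setting a detection threshold
from healthy-condition data to hit a target false-alarm rate. Values are notional."""
import random

random.seed(1)
target_false_alarm = 0.01   # allow ~1% of healthy samples above the threshold

# Healthy-condition vibration feature collected during product testing (notional)
healthy = sorted(random.gauss(1.0, 0.15) for _ in range(5000))

# Threshold at the (1 - target) empirical quantile of the healthy distribution
idx = int((1.0 - target_false_alarm) * (len(healthy) - 1))
threshold = healthy[idx]
print(f"Updated detection threshold: {threshold:.3f}")

# The causal model's test entry (reliability, delay, etc.) would then be updated
# with the new threshold and its measured false-alarm rate.
exceed = sum(v > threshold for v in healthy) / len(healthy)
print(f"Empirical false-alarm rate on healthy data: {exceed:.3%}")
```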
6 Product Operations, Manufacturing, Quality Management and Distribution Phase

Digital twins' detailed, current, and near-exact physical and functional models will greatly enhance the operations, manufacturing, quality management, and distribution life cycle phase.
Within the DT architecture, the digital thread of products provides a single source of truth, creating consistency, collaboration, and alignment across all enterprise functions. Hence, all manufacturing and operations processes will utilize the same unified set of information. The concurrent, digitally accurate information will allow direct access to the product's physical and functional information by the trusted supplier and manufacturing community, enhance product quality and consistency, minimize tooling cost and schedule, enable timely prognosis and calibration of fabricating machine health, and support direct, highly automated inspection of the product's critical characteristics. Individual parts' actual material and physical characteristics, along with any history of material review board disposition of minor discrepancies, will be instantiated in the DT knowledge database, allowing trusted stakeholders access throughout the component's operational life cycle. This approach should greatly minimize manufacturing time/cost, operational risk, and warranty cost exposure, while maximizing product consistency, quality, and safety.
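The automated inspection summarized above ultimately reduces to statistical process control of measured dimensions against the 3D cyber-physical model, as elaborated later in this section. The sketch below shows the simplest version of that idea: 3-sigma control limits on a machined dimension, with an out-of-control measurement flagging the machine for calibration. The nominal value and measurements are notional.

```python
"""Minimal sketch of statistical process control for automated inspection data:
3-sigma control limits on a machined dimension. All measurements are notional."""
import statistics

nominal = 25.000   # mm, dimension taken from the 3D cyber-physical model
measurements = [25.003, 24.998, 25.001, 25.005, 24.997, 25.002, 25.000, 25.004]

mean = statistics.fmean(measurements)
sigma = statistics.stdev(measurements)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma
print(f"Nominal {nominal:.3f} mm; mean {mean:.4f} mm, UCL {ucl:.4f} mm, LCL {lcl:.4f} mm")

# A new coordinate-measurement-machine reading is checked against the limits
new_part = 25.011
flag = "in control" if lcl <= new_part <= ucl else "out of control - schedule machine calibration"
print(f"New measurement {new_part:.3f} mm: {flag}")
```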
Digital Twins' detailed physical and functional models will greatly enhance the operations, manufacturing, quality management and distribution life cycle phase. Lack of seamless collaboration across design and manufacturing can result in expensive errors and quality issues, especially when the product design requires unique machining operations. However, the concurrent digitally accurate information with Digital Twins will allow direct access to product information by the manufacturing and supplier communities, greatly facilitating tooling and direct inspection of the product's critical characteristics. For example, the Sikorsky S-92 helicopter, developed using extremely accurate 3D modeling, surprisingly needed little assembly tooling because of part accuracy and consistency. Use of the Digital Twin will take this process to the next level of accuracy and consistency. Computer Aided Manufacturing (CAM) will directly utilize the DT's physical 3D modeling to provide direct and extremely accurate information to cutting and grinding machines, as well as guiding the direct layup and accurate placement of composite materials in a DT-facilitated mold tool. Although use of CAM is currently widespread even without the enterprise implementation of the DT processes, its use often requires human transposition of data, thereby exposing the process to potential errors. Within the DT architecture, the digital thread of products provides a single source of truth, creating consistency, collaboration, and alignment across all enterprise functions. The ability to use the consistent information from the right source for the right purpose will prove very valuable, especially when different machines are required to fabricate a single component. Inspection measurements
will be greatly automated using coordinate measurement machines linked to a unified set of information, thereby tracking cutting machine variations for periodic calibration and maintenance. The use of DT methodology greatly enhances the potential for additive manufacturing of parts. "3D printing processes – additive manufacturing – are becoming increasingly a part of the industrial production chain. Medical technology, aerospace, and automotive industries are leading the innovation and implementation of additive manufacturing." An example of a modern holistic integrated process for additive manufacturing, which will link well with a DT-committed enterprise, is ZEISS's holistic manufacturing process in Fig. 16. "The greatest challenge lies here in the verification of the 3D-printed parts' absolute reliability" (ZEISS [45]). Figure 17 illustrates the 3D quality inspection capability offered by ZEISS. Linking the results of these highly accurate coordinate measurement processes through the DT network with the complex 3D cyber-physical models will accurately define the product. Applying the SHM approaches inherent in the DT methodology to fabricating machines will facilitate machine health diagnosis and prognosis, thereby reducing capital investment, material waste and production delays. The resulting product should easily maintain measurement consistency within upper and lower control limit tolerances. Additionally, the DT network database will store the product's manufacturing measurements, as well as the related fabricating machines' health status information, with a complete part, component, and system history. The application of digital twin methodology to the Parts Management Process will be game changing. Specifically, facilitated direct procurement from a trusted supplier will be possible with that organization directly linked into the Enterprise DT network, thereby allowing the supplier team to access the complex detailed 3D physical models, part requirements, relevant specifications, functional information, scheduling needs and delivery locations necessary to fabricate and deliver supplier parts. Furthermore, customer service personnel, consistent with enterprise
Fig. 16 Integrated process for additive manufacturing spans material characterization to process data analytics and links well with a DT-committed enterprise. (Courtesy: ZEISS [45])
Fig. 17 ZEISS 3D quality inspection process draws upon DT-enterprise knowledge. (Courtesy: ZEISS [45])
authorization policy, will directly access the supplier procurement process, thereby greatly facilitating the ordering of new spares. Component life today is usually defined by a pre-set understanding of the component's "expected" mission and operational usage; therefore, for life-limited fatigue-loaded components subjected to extreme usage conditions, part replacement times or "lives" are often established and fixed in the parts management process or, where permitted, component "times between overhaul" are promulgated. In a DT-committed enterprise, we expect that individual part life will be electronically/digitally calculated and tracked in real time using the enterprise's validated, statistically based methodology for cumulative damage determination. The result will be a significant increase in actual part life and utilization and, consequently, a significant reduction in direct operating cost. The DT network stores and tracks this individual part history along with the parent system's mission/operational usage information. For high-value components, or where necessary to establish parent system usage patterns, the instantiation of appropriate sensors will be necessary. These small and localized sensors will most likely be self-powered and smart enough to filter out those periods within a mission where a part does not experience damage. Additionally, should a part use up its predicted life, that is, where a fatigue-loaded part reaches its crack initiation time, it may be possible to directly, and possibly in real time, calculate the remaining useful life before an actual part failure is reached using the DT-managed fracture mechanics model and the enterprise's proven, statistically based fracture mechanics methodology.
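The chapter does not prescribe the enterprise's specific damage and fracture mechanics models; as a purely illustrative stand-in, the sketch below uses the textbook Palmgren-Miner cumulative damage rule and a numerically integrated Paris crack-growth law, with hypothetical loads and material constants, to show how consumed life and remaining useful life might be computed from usage data tracked by the DT.

import math

def miner_damage(cycle_counts, cycles_to_failure):
    """Palmgren-Miner cumulative damage: sum of n_i / N_i over load levels.
    A part's usable life is nominally exhausted when the sum reaches 1."""
    return sum(n / N for n, N in zip(cycle_counts, cycles_to_failure))

def paris_law_cycles_to_critical(a0_m, ac_m, C, m, delta_sigma_mpa, Y=1.0):
    """Numerically integrate Paris' law da/dN = C * (dK)^m, with
    dK = Y * delta_sigma * sqrt(pi * a), from initial crack a0 to critical ac.
    C and m are Paris-law constants (units assumed consistent with metres
    and MPa*sqrt(m)); all values here are illustrative only."""
    steps = 10_000
    da = (ac_m - a0_m) / steps
    cycles = 0.0
    a = a0_m
    for _ in range(steps):
        dK = Y * delta_sigma_mpa * math.sqrt(math.pi * a)
        cycles += da / (C * dK ** m)
        a += da
    return cycles

# Hypothetical usage tracked in real time by the DT
damage = miner_damage(cycle_counts=[2.0e5, 5.0e4], cycles_to_failure=[1.0e6, 2.0e5])
print(f"consumed life fraction: {damage:.2f}")   # 0.45 of the allowable damage sum
print(f"remaining cycles after crack initiation: "
      f"{paris_law_cycles_to_critical(1e-3, 10e-3, 1e-11, 3.0, 120):.0f}")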
This DT-facilitated approach to parts life prediction and management allows the user to operate the system efficiently and to use the constituent components to their full capacity, thereby minimizing the need for spares and increasing component time between overhauls. The DT methodology will also assist in the management of component performance degradation, which can likewise be predicted and measured in real time, again allowing the customer service and operations teams to manage individual parts and systems based on the actual mission usage spectrum.
7 Product Customer Effectiveness and Support Phase
The digital twin has the potential to make every service technician an expert by using the SHM Digital twin as a Decision Support for Training Assessment (DDSTA) approach. Information flow in such a decision support system will utilize the digital twin knowledge base (unique to the enterprise) resident in the multi-functional model. All trusted stakeholders will have direct access to this information. We believe that this approach will result in rapid generation of accurate field failure scenarios, thereby enabling enhanced maintenance training and support and, if appropriate, timely corrective action. Field data collection processes, linked through the DT information network, will allow subject matter experts to perform sophisticated predictive data analytics and conduct extensive root cause analysis. This approach will result in continuous improvement of the product with a substantial positive effect on product quality, field operations, customer satisfaction and safety.
7.1 Digital Twin-Aided Customer Support
Figure 18 illustrates the range of enterprise processes performed by a (physics-based, data-driven, knowledge-informed) SHM digital twin using multi-functional causal models. Enterprise-scale server components of the SHM digital twin enable automatic machine-to-machine diagnosis, where thousands of connected products can periodically update data for automatic assessment and display of their health status, time-to-alarm (TTA) and time-to-maintenance (TTM) on a Fleet Health Management Dashboard by the SHM digital twin. The TTA and TTM measures enable proactive condition-based health management of products by linking with the autonomic logistics and customer relationship management (CRM) software. In the event of unscheduled maintenance, the digital twin (as a server, portable maintenance aid or mobility component), by seamlessly integrating the onboard symptoms and inference results, can guide a multi-lingual global workforce of
Fig. 18 Multi-functional models in SHM facilitate traceability throughout a product's life cycle. (Courtesy: Qualtech Systems, 2020)
service technicians through the process of troubleshooting and restoring the system health, automatically logging the troubleshooting session's interactions and outcomes. The mobility component should be able to work even with no access to the enterprise network from a customer's facility, for example, in hospitals and secure workplaces. Based on the initial problem and onboard data, the SHM digital twin generates a diagnostic decision tree to guide the technician step-by-step through the troubleshooting process, logging the results of every step and synchronizing with the SHM server to upload the log files when the enterprise network is accessible. The logged data (sensor data, onboard diagnostics, and troubleshooting data) in the DT network aid in updating SHM model parameters, such as component reliabilities, test setup and execution times, test accuracies, and fault-test relationships, using machine learning algorithms, with appropriate regression testing to assure the accuracy of deployed models. Adaptive onboard inference, proactive health management and troubleshooting procedures improve service productivity, product availability and customer satisfaction. Section 8 discusses in more detail how the digital twin infrastructure's ability to process enormous amounts of data into information and knowledge can aid the product improvement phase. An integrated onboard and off-board system approach to health management, coupled with autonomy, advanced analytics, and virtual/augmented reality, can make products intelligent and improve customer experience and support via real-time monitoring, incipient failure detection, root cause analysis, prognostics, predictive maintenance, and training assessment. When these are fully exploited and linked into the DT network, we expect increased customer satisfaction and enterprise profitability. Sikorsky Aircraft's vision for DT-enabled intelligent aircraft health management is shown in Fig. 19.
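A minimal sketch of the kind of step-by-step guidance described above is given below; it uses a greedy, most-even-split test selection over a small hypothetical fault-test dependency table as a crude surrogate for the information-theoretic criteria behind optimal diagnostic decision trees, and all component, test and probability values are invented for illustration.

def next_best_test(suspects, fault_probs, test_signatures, available_tests):
    """Greedy one-step test selection: pick the test whose expected split of the
    current suspect-fault set is most balanced (a crude surrogate for the
    information-gain criteria used in optimal diagnostic decision trees)."""
    best_test, best_score = None, float("inf")
    total = sum(fault_probs[f] for f in suspects) or 1.0
    for t in available_tests:
        p_fail = sum(fault_probs[f] for f in suspects if t in test_signatures[f]) / total
        score = abs(p_fail - 0.5)        # closest to an even split is most informative
        if score < best_score:
            best_test, best_score = t, score
    return best_test

def apply_outcome(suspects, test, outcome_failed, test_signatures):
    """Prune the suspect set based on the observed test outcome."""
    if outcome_failed:
        return {f for f in suspects if test in test_signatures[f]}
    return {f for f in suspects if test not in test_signatures[f]}

# Hypothetical fault-test dependency data for a small subsystem
test_signatures = {"pump": {"T1", "T3"}, "valve": {"T2", "T3"}, "sensor": {"T1"}}
fault_probs = {"pump": 0.02, "valve": 0.01, "sensor": 0.05}
suspects = set(test_signatures)
t = next_best_test(suspects, fault_probs, test_signatures, {"T1", "T2", "T3"})
suspects = apply_outcome(suspects, t, outcome_failed=True, test_signatures=test_signatures)
print(t, suspects)    # e.g., T3 {'pump', 'valve'}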
Fig. 19 Vision for a DT-committed enterprise: PHM comprises intelligent products with built-in health management, autonomy and predictive analytics. (Courtesy: Sikorsky Aircraft, a Lockheed-Martin Company)
Fig. 20 Service digital twin provides decision support for training assessment. (Courtesy: Qualtech Systems, Inc., 2020)
7.2 Digital Twin-Aided Training and Assessment
All enterprise processes, including design, operations, manufacturing, maintenance, purchasing, business functions and finance, use digital twin-aided training and assessment. In all cases, subject matter experts skilled in the enterprise functionality assess and validate the skill levels of trainees. As an example of the process, the digital twin has the potential to make every service technician an expert by using the SHM Digital twin as a Decision Support for Training Assessment (DDSTA). Figure 20 above shows our conceptual view of the information flow in such a decision support system, which proceeds as follows. The
digital twin knowledge base, resident in the multi-functional model and unique to the enterprise, is accessible by all the elements of the information flow in Fig. 20. An instructor logs into the DDSTA tool and begins by answering a series of questions that allow the DDSTA to assess the expertise level of the technician and to recommend instructional method/content delivery methods. A mapping of the instructional methods to the failure scenarios by the digital twin generates problem assignments in the form of AR/VR-enabled guided troubleshooting tasks for the trainees. Using learning sciences techniques, the digital twin scores the trainees performing their assignments as novice, intermediate or expert, based on troubleshooting accuracy (how close are the trainees to the optimal guided troubleshooting steps?), timeliness of the steps (did the trainee pause uncharacteristically long between steps?), and the number of repetitions and missed steps. The SHM digital twin records all guided troubleshooting activities in a "task report". Mining of these "task reports" provides insights into trainee performance, such as the time it took to perform a certain test setup, test, etc. Aggregation of these learner records and scores measures the overall effectiveness, across multiple trainee skills, of each type of instructional method.
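A toy version of such scoring, with invented thresholds and step names, might look like the following; a real DDSTA implementation would calibrate its bands from accumulated learner records.

def score_trainee(steps_taken, optimal_steps, step_times_s, pause_threshold_s=60, repeats=0):
    """Toy scoring of a guided-troubleshooting session along the three axes in the
    text: accuracy versus the optimal step sequence, timeliness, and repetition."""
    extra_steps = max(0, len(steps_taken) - len(optimal_steps))
    matched = sum(1 for a, b in zip(steps_taken, optimal_steps) if a == b)
    accuracy = matched / len(optimal_steps)
    long_pauses = sum(1 for t in step_times_s if t > pause_threshold_s)
    # Hypothetical banding; a real DT would calibrate these from learner records
    if accuracy > 0.9 and long_pauses == 0 and repeats == 0 and extra_steps == 0:
        return "expert"
    if accuracy > 0.6 and long_pauses <= 2:
        return "intermediate"
    return "novice"

print(score_trainee(["isolate", "swap LRU", "retest"],
                    ["isolate", "swap LRU", "retest"],
                    step_times_s=[30, 45, 20]))        # expert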
7.3 Health Management of Sikorsky S-92
Since being introduced in 2004, the Sikorsky S-92 helicopter, with more than 1.6 million fleet flight hours and nearly 95% availability, arguably has become the industry's standard for safety and reliability. Each S-92 contains a state-of-the-art Health and Usage Monitoring System (HUMS) in the baseline configuration. "As part of the sales agreement, customers agree to provide HUMS data on a regular, usually daily, basis to Sikorsky Aircraft for analysis. To help the operator achieve the transmission of HUMS data on a regular basis, Sikorsky has provided automated methods for data transfer. For example, the S-92 ground stations automatically send a copy of the data to Sikorsky, allowing immediate retrieval from the ground station. Sikorsky also works with the operator to collect and analyze maintenance records. The maintenance records for each operator include maintenance history, aircraft status, and spare parts ordered. A third database Sikorsky maintains is a Failure Reporting, Analysis, and Corrective Action System (FRACAS) database. "The intent of FRACAS is to capture the failures reported in the field and conduct extensive root cause analysis for continuous improvements of the product." [25]. "The data integration process, illustrated in Fig. 21, has led to new techniques for monitoring the degradation of mechanical components and improving the performance of advanced mechanical diagnostic algorithms" [25]. Data collection processes like these, linked through the DT information network, allow subject matter experts to perform sophisticated predictive data analytics with a substantial positive effect on field operations and safety.
[Figure 21 content: HEALTH (parametric data, vibration/temperature data, mechanical diagnostics), LCA&T (maintenance history, aircraft status, spare parts) and R&M (FRACAS) data streams. Results of collaboration: understanding maintenance-induced trend changes in health data, fleet analysis to identify anomalies and failure mode signatures, and identification of atypical trend shifts in data.]
Fig. 21 S-92 Data integration process combines health, maintenance and FRACAS data for SHM. (Courtesy: Vertical Flight Society)
Fig. 22 S-92 condition-based maintenance example: fleet data facilitated the detection and isolation of the degraded component prior to a chip event. (Courtesy: Vertical Flight Society)
Figure 22 “shows a condition-based maintenance example from the S-92 fleet. Sikorsky was able to detect and isolate the degraded component prior to a chip event using traditional HUMS techniques. This allowed the operator time to procure the spare component and perform a scheduled maintenance activity to remove and replace the degraded component.” “Statistical analysis of the data can generate thresholds that will identify anomalies, but Sikorsky has found one of the best ways to view the data across the fleet is
Fig. 23 Vibration signature characterization informed by maintenance history and FRACAS data to improve detection accuracy. (Courtesy: Vertical Flight Society)
using a statistical boxplot chart. The statistical boxplot graphically displays the fleet for a given condition indicator with each bar representing a unique aircraft. Fig. 22 illustrates an example from the S-92 fleet, where one of the features is clearly an outlier and indicative of a fault" [25]. "Fig. 23 shows a typical vibration signature for a condition indicator. It has several trends up and down, which are difficult to interpret without additional information; as illustrated in Fig. 23, the jumps are explainable by maintenance and FRACAS events. With this added layer of understanding, updated algorithms monitor expected levels of change post maintenance. The largest benefit of characterizing maintenance and defect events accrues when a defect signature falls within the variance of the fleet distribution. In the past, this scenario would have led to a missed detection. However, with a clear understanding of the vibration signature, new algorithms can be created to provide detection without sacrificing the false alarm rate" [25]. An example of an S-92 fleet issue which benefitted from this approach is the tail rotor pivot bearing (TRPB) disbond event (see Fig. 24) detected by the HUMS measurements of the tail rotor vibration signature. Although the measured vibration signature level fell within the allowable fleet variance, subsequent detailed analyses identified a trend shift that correlated well with maintenance events of the tail rotor. Using this knowledge, an algorithm compared trend changes and rate of change in signature value with the last known maintenance events to address this problem. Using this approach, an automated software tool identifies TRPB disbonds with minimal false alarms.
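The two screening ideas in this passage, fleet-wide boxplot outlier detection and maintenance-correlated trend-shift checks, can be sketched as follows; the whisker rule, jump threshold and condition-indicator values are hypothetical stand-ins for the statistically derived limits an operator such as Sikorsky would actually use.

import statistics

def fleet_outliers(ci_by_aircraft, k=1.5):
    """Flag aircraft whose condition-indicator value sits above the Tukey
    boxplot whisker (Q3 + k*IQR) computed across the fleet."""
    values = sorted(ci_by_aircraft.values())
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    upper = q3 + k * (q3 - q1)
    return {tail: v for tail, v in ci_by_aircraft.items() if v > upper}

def trend_shift_after_maintenance(ci_series, maintenance_index, min_jump):
    """Compare the mean condition-indicator level before and after a logged
    maintenance event; a persistent jump larger than min_jump is flagged."""
    before = statistics.mean(ci_series[:maintenance_index])
    after = statistics.mean(ci_series[maintenance_index:])
    return (after - before) > min_jump

# Hypothetical fleet snapshot of one vibration condition indicator
fleet = {"N101": 0.21, "N102": 0.19, "N103": 0.23, "N104": 0.20, "N105": 0.58}
print(fleet_outliers(fleet))                                                  # {'N105': 0.58}
print(trend_shift_after_maintenance([0.20, 0.21, 0.19, 0.35, 0.36, 0.37], 3, 0.1))  # True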
Fig. 24 Data integration and analysis indicated a trend change suggesting an impending Tail Rotor Pivot Bearing Disbond. (Courtesy: Vertical Flight Society)
8 Product Improvement Phase
Using the DT-facilitated external and internal networks, feedback of FRACAS data to the Virtual Integrated Product Team's business, technical, operations and quality Subject Matter Experts provides a realistic enterprise-wide picture of the customers' operational and usage experience of the products. Following the subsequent introduction of selective product improvements, the enterprise should experience reduced warranty costs, extended product life and increased customer satisfaction. Accordingly, we believe that the DT enterprise will benefit immensely from a responsive product improvement phase, thereby widening the barrier that disruptive competitive products must cross.
The digital twin enterprise infrastructure's ability to curate data, to process enormous amounts of real-time data into information and knowledge, and to support adaptive SHM and performance dashboards forms the basis for the product improvement phase. Throughout this chapter, we emphasized the concept of re-use of a continuous flow of information and knowledge in a DT-impacted enterprise. As previously described, an actual field usage Failure Reporting, Analysis, and Corrective Action System (FRACAS) database, instantiated on the Digital Twin network, is an excellent example of this information flow. Use of the FRACAS database, integrated with the DT methodology and processes, yields a timely understanding of the root causes of customer issues, product quality concerns and operational failures through sensor-based System Health Management information, operational regime recognition, corrective actions, and predictive data analytics applied to big data collected as appropriate. The DT methodology will allow the tracing of issues to individual components with a traceable understanding of the part's operational usage, fielded environment and location, and manufacturing and quality history. Using the DT-facilitated external and internal networks, feedback of FRACAS data to the Virtual Integrated Product Team's business, technical, operations and quality Subject Matter Experts provides a realistic enterprise-wide picture of the customers' operational and usage experience of the products. Where appropriate, the introduction of product improvements to address the fielded issues enhances system performance and increases customer utilization of the product and its safety. The DT methodology will again aid the enterprise greatly through its timely, knowledge-rich product development, operations, qualification, and delivery processes. In this way, as the enterprise's fielded products improve, warranty costs are reduced, product life is extended and customer satisfaction increases. Accordingly, we believe that the DT enterprise will benefit immensely from a responsive product improvement phase, thereby widening the barrier that disruptive competitive products must cross.
9 Product End of Life and Environmentally Suitable Termination Phase
Use of the DT global information network facilitates the incorporation of applicable international, national, state, and local governments' environmental, hazardous waste, safety, certification and disposal regulations into the product's design requirements and specification. Consequently, using this up-front knowledge, enterprises can address product disposal issues early in the product's life cycle.
The digital twin methodology supports the design and suitable disposal of environmentally sustainable products. Use of the DT global information network facilitates the incorporation of applicable government (e.g., EPA) environmental, hazardous waste, safety, certification and disposal regulations into the product's design requirements and specification. Consequently, using this up-front knowledge, enterprises can address product disposal issues early in the product's life cycle. For example, the EPA describes the heavy metals chromium and cadmium as among the hazardous air pollutants historically used in aerospace manufacturing and lists them as part of its National Emission Standards for Hazardous Air Pollutants (NESHAP) rules. We expect that the enterprise environmental and safety experts, again using the DT knowledge dissemination network, will review these issues, using virtual peer reviews, with the Integrated Product Team early in the design process.
10 Organizational Structures in the Era of Digital Twin Methodology
Digital twin-committed enterprise structures are expected to be congruent with their processes and to employ robust, flexible, and agile process management strategies to achieve resilience in uncertain business environments. As modern enterprises demand enhanced confidentiality and government restrictions strictly limit the distribution of sensitive information, centrally controlling all aspects of DT integrated information and knowledge using a dynamically managed accessibility policy is essential. It is important for a DT-committed enterprise to invest in talented and committed individuals dedicated to continuous improvement, while recognizing the need for security and protection of the enterprise's confidential information.
Superior business performance in a globalized twenty-first-century environment requires congruence of the activities of digital twin methodology and the enterprise processes. To understand the use of digital twin methodology in such congruent organizations [32], it is useful to model the various processes used throughout a product’s life cycle as a task network, including the information flows between human and non-human source nodes (internet of things). Effectively, a digital twin approach will provide a single, quantitative, virtual environment for the enterprise and for the systems/products spawned by the enterprise. Consequently, it is highly likely that digital twin-committed enterprises must significantly revamp their
traditional functional and divisional coordination structures and information flows to be effective [21, 29, 32]. Enterprise organizations in the era of digital twin methodology must be far more efficient and competitive; hence, such organizations will require significant change from traditional functional and divisional structures. Starting in the 1990s, many enterprises realized that conventional departmental structures, such as engineering, manufacturing, purchasing, finance, etc., were too cumbersome to be flexible in the timely and affordable development of disruptive products. At that time, some innovative enterprises developed integrated product development teams, which drew contributors from the conventional functional organizations into a new co-located team dedicated completely to the development of the target product (matrix organizations or product-based organizations). While these teams were successful in an era that saw emerging 3D digital product representations and organizational informational tools, such as product data management (PDM) tools, the application of digital twin methodology to a virtually integrated, fully networked enterprise will facilitate an opportunity for far more revolutionary advances in organizational structure, such as networked heterarchies [28]. Almost all enterprises will go through a transition period, which involves the commitment to evolve the use of the DT methodology in legacy products, usually defined by processes with some digital content, and emerging products which are fully committed to the digital twin processes from their inception. This reality requires a significant effort on the part of the enterprise as the entire organization morphs into a fully committed digital twin entity. A similar, sometimes painful, transition period was experienced by the aerospace industry three decades ago as it transitioned from pen-and-ink orthographic projection design, manufacturing and quality inspection processes to a fully digital product representation using the first and second generations of CAD/CAM. An example of an enterprise which underwent this transition was Sikorsky Aircraft, now a Lockheed Martin Company, which established a complete digital database for two emerging helicopters in the 1990s, while the legacy Blackhawk helicopter, which had been designed in the 1970s, was largely a non-digitally represented product. Over a relatively short period of time, all the Blackhawk data was eventually digitized. Our experience suggests that the same sort of process transition will occur as organizations become truly committed to the digital twin methodology. In the era of digital twins, which will permit product teams and enterprises to globally exchange and update information concurrently and in real time, we anticipate that fully integrated virtual product development teams (IVPDT) will emerge as the dominant performing structure. IVPDTs will incorporate subject matter experts, as well as dedicated manufacturing and purchasing (supplier resource) experts and a small number of program-centric business/financial professionals; all team members, including trusted suppliers and customers, will share a common database of accurate and standardized concurrent engineering models and system engineering tools. The structures of these globally distributed product teams will evolve dynamically as the product moves to subsequent life cycle phases.
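For illustration only, the fragment below encodes a tiny example of the task-network view introduced earlier in this section, with enterprise processes and non-human data sources as nodes and information flows as directed edges; the node names and links are invented, and a real enterprise model would be far larger and attributed with owners, timing and access policy.

# Minimal sketch of a life cycle task network: processes and IoT sources as
# nodes, information flows as directed edges (all names are illustrative).
task_network = {
    "requirements":         ["conceptual_design"],
    "conceptual_design":    ["detailed_design"],
    "detailed_design":      ["manufacturing", "shm_model_update"],
    "manufacturing":        ["flight_test", "quality_inspection"],
    "hums_sensors":         ["shm_model_update", "fleet_dashboard"],   # non-human source node
    "field_fracas_reports": ["shm_model_update", "product_improvement"],
    "shm_model_update":     ["product_improvement"],
    "product_improvement":  ["detailed_design"],                       # closes the loop
}

def downstream(node, graph, seen=None):
    """All processes that (transitively) consume information from `node`."""
    seen = seen or set()
    for nxt in graph.get(node, []):
        if nxt not in seen:
            seen.add(nxt)
            downstream(nxt, graph, seen)
    return seen

print(sorted(downstream("hums_sensors", task_network)))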
Support for the virtual product development teams will come from the enterprise's core business resources, such as human relations and enterprise finance, as well as contract and legal experts; we expect that these very senior and experienced resources will provide the virtual teams with the historical, technical, and business signature inherent in the enterprise. Consequently, core, still virtual, organizations of senior technical/manufacturing fellows will serve to carry out the independent review processes necessary to assure product efficacy, adherence to certification requirements, and the achievement of business goals. These core organizations will also benefit from globally networked interconnects and the common/concurrent physical modeling and functional information database available with a true Digital Twin methodology. We expect that this information will be especially necessary to satisfy localized requirements and environmental regulations. A centralized integrated research organization will still be appropriate after adoption of the digital twin methodology to assure the enterprise's dominance in emerging technologies with low technical readiness levels; the core research team, again virtually connected via the enterprise global DT information network, will utilize selected subject matter experts drawing upon resources from the world's best laboratories and universities. Clearly, among the issues that a DT-connected global virtual team will face are appropriate cyber security and the management of intellectual property. While these areas could present obstacles to the virtual product teams, we do not envision the impediments to be of such a magnitude as to delay the effective introduction of a virtual, DT-connected, product-centric enterprise. Business and legal managers must learn to work together with their global counterparts and to utilize virtual information available from local experts. Where a company's confidentiality requirements or government restrictions limit the distribution of information, centrally controlling all aspects of DT integrated information using a dynamically managed accessibility policy process and firewalling selected individuals from various DT network nodes constitute appropriate mitigation steps. We envision enterprise networks, linked through a global information distribution cloud, with internal networks established as required to protect extremely technical and business-sensitive information. We envision a service and customer support organization, fully linked to the virtual product teams, focused on the needs of the target customer and fully responsive to localized customs and requirements. The task of such an organization is to understand the health and operational suitability of the fielded product and to facilitate real-time data transmission through the digital network to the product team. Having access to the relevant technical and system engineering models will enable these field resources to support the customer rapidly, directly impact spares availability, and manage product concerns. Modern enterprises operate in complex, dynamic and uncertain business environments. We expect organizations employing digital twin methodology to embrace robust, flexible, and agile process management strategies by incorporating the what, who, why, where, when and how aspects of uncertainty management. Robust
strategies seek to manage uncertainty, represented by scenarios, by minimizing variability in the expected risk or minimizing the maximum risk. Organizations employing robust decision-making strategies systematically evaluate multiple courses of action from which the product team selects one. Flexible strategies adapt to an uncertain context by enumerating or brainstorming potential event sequences a priori, conducting what-if analyses and pre-planning response policies. For example, flexible strategies adapt to changed scenarios by recognizing critical events that signal a scenario change. Agile decision-making methods adapt to an uncertain context by learning (on-line) an updated model of the decision environment and/or hedging against uncertainty by trading off exploration versus exploitation. These methods adapt to unexpected scenarios by learning the new scenario they are operating in. Typical methods in the context of planning include moving horizon planning, open-loop optimal feedback, and other approximate dynamic programming (including various forms of deep reinforcement learning) techniques [8]. We believe that resilience in organizational processes requires the integration of flexibility, robustness, and agility by exploiting opportunities. As modern enterprises demand enhanced confidentiality and government restrictions strictly limit the distribution of sensitive information, centrally controlling all aspects of DT integrated information using a dynamically managed accessibility policy process is essential. Deriving real business gains from digital twins requires enterprise-wide process changes. Implementation of digital twins requires people, time, and equipment. It is important for a DT-committed enterprise to invest in talent development and drive people change management to improve productivity, resilience, security, and collaboration, and to minimize cost. How does one evaluate individual contributors, assigned as members of the IVPDT, in this new digital twin-impacted organization? We envision that each contributor will experience evaluation from many sources, all virtually linked by the DT methodology and networks. Specifically, such evaluation stakeholders could include inputs from the virtual product team itself, target customers, trusted suppliers, senior colleagues, and qualified senior individuals from the enterprise core. A balanced performance scorecard will emerge, resulting from an evaluation based on multiple assessments of the individual from disparate sources, unlike the current practice in today's functional organizations. Evidently, individuals dedicated to a virtual product team may find themselves in a continuous career flux, as these virtual teams form and disband. Consequently, the DT-impacted enterprise, operating in an increasingly disruptive environment, must aid its human resources in developing the appropriate skills to be effective throughout their career life cycles. The DT methodology facilitates this goal by providing timely and digitally delivered training and education through the DT information grid and through the adoption of temporary educational sabbaticals for all contributors.
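As a worked miniature of the robust strategies described above, the sketch below compares two selection rules over a hypothetical risk matrix: minimizing expected risk under a scenario distribution versus minimizing the worst-case (minimax) risk; all actions, scenarios and numbers are illustrative.

# Hypothetical risk matrix: risk[action][scenario] (e.g., relative cost impact)
risk = {
    "ramp_up_supplier_A": {"demand_high": 1.0, "demand_low": 5.0, "supply_shock": 9.0},
    "dual_source":        {"demand_high": 4.0, "demand_low": 4.0, "supply_shock": 5.0},
    "delay_decision":     {"demand_high": 7.0, "demand_low": 3.0, "supply_shock": 6.0},
}
scenario_probs = {"demand_high": 0.5, "demand_low": 0.3, "supply_shock": 0.2}

def min_expected_risk(risk, probs):
    """Robust choice #1: minimize expected risk over the scenario distribution."""
    return min(risk, key=lambda a: sum(risk[a][s] * p for s, p in probs.items()))

def minimax_risk(risk):
    """Robust choice #2: minimize the worst-case (maximum) risk over scenarios."""
    return min(risk, key=lambda a: max(risk[a].values()))

print(min_expected_risk(risk, scenario_probs))   # ramp_up_supplier_A (expected risk 3.8)
print(minimax_risk(risk))                        # dual_source (worst-case risk 5.0)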
11 What to Watch Out for in Implementing the DT Methodology?
Introduction and operation of digital twins in an enterprise is a significant economic undertaking and takes time and people to implement. We recommend seamless interoperability across digital twins and stakeholders. We advocate the use of rich domain knowledge within machine learning for explainable predictive modeling in digital twins. An active DT network may very well require a continuous DevOps commitment to assure the product's validity and safe operation. If linked digital twins are validated prior to their use in decision-making and continuously maintained, design and operational errors will be minimized.
Introduction and operation of digital twins in an enterprise is a significant economic undertaking and takes time to implement. We believe that the following cautionary thoughts represent time-honored engineering insights and business practices that are valid in a DT-committed enterprise as well. They fall roughly under three categories: be cautious of models, beware of machine learning without domain knowledge, and be cognizant of human fallacies and design errors.
• The aphorism "All models are wrong, but some are useful" [10] is worth remembering in a DT-committed enterprise. This is especially relevant for complex CPS that operate under dynamic and uncertain conditions, such as aerospace, manufacturing, automotive, chemical, energy, and other industrial systems. Managing the health of complex, dynamic and uncertain systems is particularly difficult because developing physics-based models for faulty conditions is hard; the underlying numerical algorithms used by the model may not even converge. This is the reason for advocating the use of causal models that integrate data, knowledge, and physics for SHM.
• While utilizing the digital twin process, and especially in the system health management methodology, one must continuously question the value of the data that are measured, collected, analyzed, and acted upon. The value of the sensors that have been instantiated in the product must be determined. Indeed, the enterprise must establish evaluation processes to measure whether elements of the DT methodology have really worked and are yielding value, or whether data are simply being gathered and not analyzed or, if analyzed, not acted upon.
• Digital twins necessitate handling big data, either in tabulated form or streamed and collected from sensors in real time. Automating data collection, secure storage, processing, and management is an enterprise-wide issue. At the enterprise level, one must also be cognizant of the need for considerable effort in data cleaning and pre-processing due to missing data, data errors, data currency, and data integrity, and of the effect these have on the quality of data-informed models and the digital twin-based decisions resulting from them.
These categories lead to a number of issues in implementing the DT methodology:
• Currently, there are no standards, definitions, or common language for digital twins. This limits their interoperability across the product's life cycle and across vendors. There is also a skills gap in enterprises because digital twins require expertise in predictive analytics, the industrial internet of things, and cloud platforms.
• Since the DT is an artificially intelligent virtual replica of a real-life CPS, we believe that the DT should be compatible with humans, with the following six attributes:
1. Symbiotic cooperation of data, models, and knowledge to achieve superior model performance and to ensure safe operation of the CPS. Indeed, machine learning without domain knowledge can perform poorly. Domain knowledge can provide features gleaned from physics and subject matter experts to make hybrid model-data-knowledge-based models accurate representations of a product's performance under nominal and faulty conditions.
2. Values of AI aligned with humans, through intelligent agents that observe human actions and learn the human intent.
3. Different time scales for humans and AI, so that humans can be used for reasoning, deriving meaning from a situation and decision making, while AI-enabled IIoT and DT are used for sensing and perception, local interpretation, communication, and decision support.
4. Structured, abstract, and hierarchical view of systems by humans, which fits in nicely with the (hierarchical) multi-functional causal graph representations of CPS.
5. Building curiosity in machines, as suggested in the human-in-the-loop active learning advocated in the product testing and improvement phases to address unanticipated failures.
6. Explanatory capability: the DT should provide explanations behind the recommended decisions, thereby leading to trustworthy (semi-)autonomous operation. The explainable AI methods in the DT should present the causes of faults to make CPS operations safer.
• Since business, financial and engineering decisions in an enterprise involve humans, one must be cognizant of human cognitive limitations in design decisions, such as bounded rationality, limited look-ahead, anchoring, recency, elimination by aspects, and so on [22].
• Enterprise managers and users of a DT methodology and network must ensure that their networks are properly architected, verified, validated and stress-tested to assure network reliability, robustness, security, and efficacy. The DT network management process must assure proper protection of information/knowledge content with an elaborate policy schema, and that the content is current, accurate and complete. The network must control, recognize and facilitate appropriate levels of cyber security and selective accessibility (need to know) within an overarching policy criterion.
• Although the DT methodology facilitates the availability and multi-use of information/knowledge, the enterprise must understand the sensitivity of incorrect
Fig. 25 Sample DT maturity model. (Courtesy: Adam Drobot)
domain knowledge and the multiplying impact of any information error throughout the enterprise.
• Humans will clearly interface with the network; therefore, processes need to be in place to track human and non-human impact on the network. This will require a significant and continuous configuration management effort to avoid model corruption, with complete traceability of events.
• An active DT network may very well require a commitment to a DevOps model of continuous integration and regression testing of the Digital Twin to assure the product's validity and safe operation. This, in turn, may require significant changes in the enterprise's cultural and engineering practices [4].
• Validation of the enterprise digital twin methodology must involve a mechanism for continuously verifying and validating the efficacy of each digital twin line as these are instantiated in the organization's digital twin architecture. In fact, communications continuity/accuracy between the various digital twin lines and trusted outside entities, as well as the validity/applicability of the instantiated knowledge, tools, and standard processes, must be continuously addressed. A formal approach to accomplishing this requirement is suggested in Chap. 20 (Security, Privacy, and Assurance for the Digital Twin).
• We also firmly expect that the enterprise's commitment to DT methodology will eventually be measured by an approach like the CMMI certification methodology currently used to evaluate the effectiveness of an organization's ability to produce quality software [13]. There currently exist several proposed DT maturity models [24, 44]; one approach to an enterprise DT maturity model is shown in Fig. 25. This model has five levels of digital twin maturity, beginning with virtual DTs with standalone digital representations at Level 1; aggregated DTs, instrumentation and analytical models at Level 2; progressing to orchestrated DTs with predictive behaviors, data fusion and rudimentary controls at Level 3; followed by an enterprise-wide use and reuse DT capability with networked sensors and data transmission to facilitate prognostics and diagnostics as a basis for adaptive controls at Level 4; and culminating in a high level of automation and autonomy, where the DT is an essential part of an enterprise ecosystem, at Level 5.
This last level features models with high fidelity and granularity, as well as an enterprise-wide infrastructure to support the DT methodology, thereby demonstrating a high level of business commitment. We expect that industry-wide DT consortia will eventually agree on a standardized DT maturity model. Efforts in this direction are underway [42]. Such a model should be adopted by any enterprise seeking to truly bring the DT methodology into its organizational structure and culture. With such a model, the leadership should be able to measure the progress towards a DT-enabled enterprise.
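A compact way an enterprise might encode and track this five-level scale for self-assessment is sketched below; the level names are paraphrased from the sample model in Fig. 25 and the helper function is hypothetical.

from enum import IntEnum

class DTMaturity(IntEnum):
    """Five maturity levels paraphrased from the sample model in Fig. 25."""
    STANDALONE_VIRTUAL = 1      # standalone digital representations
    AGGREGATED = 2              # instrumentation and analytical models
    ORCHESTRATED = 3            # predictive behaviors, data fusion, rudimentary controls
    ENTERPRISE_REUSE = 4        # networked sensors, prognostics/diagnostics, adaptive controls
    AUTONOMOUS_ECOSYSTEM = 5    # high automation; DT essential to the enterprise ecosystem

def gap_to_target(current: DTMaturity, target: DTMaturity = DTMaturity.AUTONOMOUS_ECOSYSTEM) -> int:
    """Levels remaining between the assessed state and the enterprise goal."""
    return max(0, target - current)

print(gap_to_target(DTMaturity.ORCHESTRATED))   # 2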
12 Enterprise Value Proposition
The DT-committed enterprise greatly enhances performance, risk management, cost, quality, time to market, operability, customer acceptance and flexibility of its products in a closed-loop setting.
The US Air Force Global Horizons study [20] states that the Digital Twin's cross-domain, advanced physics-based modeling and simulation tools can reduce development cycle time by 25% through in-depth assessment of the feasibility and cost of integrating technologies into a system. McKinsey [17] estimates that "linking the physical and digital worlds could generate up to $11.1 trillion a year in (global enterprise) economic value by 2025. The impact of digital twins on the aerospace and defense industry will be immense by reducing maintenance costs, improving performance, increasing operational capacity, and optimizing weapons design". A GE study [35] provides an interesting example of the results of using the Digital Twin methodology in power generation. The study found improvements in startup operational flexibility, where startup times are reduced by 50%; improvements in plant economic dispatch and operations schedule, resulting in the delivery of up to an additional $5M of MWh; and improvements in system availability and reliability, with reduced unplanned outages saving up to $150MM/year. Other examples given by GE [35] find that "Network Digital Twins can produce cost reductions of up to 30%, planning time reductions of up to 20%, and reductions in new build and internal process costs by up to 7%. Utilities can also achieve field inspection and back office productivity by as much as 8% as well as improved network asset analysis and data accuracy." Indeed, the DT technologies of advanced analytics, automation, the Industrial Internet of Things (IIoT), Industry 4.0, AI/ML and cloud computing have the potential to improve the productivity of integrated product development teams, increase operational efficiencies and improve the customer experience [7].
We firmly expect that adoption of the DT methodology will greatly enhance the expected Internal Rate of Return (IRR) for any new product introduction by reducing development time, reducing non-recurring and recurring cost, and reducing the time to reach positive cash flow. Furthermore, through a rigorous continuous improvement program based on accurate product health and operational status, the DT methodology will reduce warranty costs, improve safety, and greatly extend product life.
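To make the IRR argument concrete, the sketch below computes IRR by bisection for two hypothetical seven-year cash-flow profiles, a baseline program and a DT-enabled program that spends less up front and reaches positive cash flow earlier; all cash-flow figures are invented for illustration and carry no units beyond a consistent currency.

def npv(rate, cash_flows):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.9, hi=2.0, tol=1e-6):
    """Internal rate of return via bisection on the NPV sign change."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Hypothetical programs: the DT case spends less up front and reaches
# positive cash flow a year earlier; all figures are illustrative only.
baseline = [-120, -60, -20, 40, 80, 100, 100]
with_dt  = [-100, -30,  20, 60, 90, 100, 100]
print(f"baseline IRR: {irr(baseline):.1%}   with DT: {irr(with_dt):.1%}")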
13 Concluding Thoughts
We recommend that DT-committed enterprises continue to commit to and gear up for DTs by adopting and adhering to the principles of Platform-based Design, MBSE and Intelligent Knowledge Bases, Intelligent SHM, DT Interoperability, Agile and Secure Processes, and Continuous Integration and Regression Testing along the DevOps model, and by gaining and maintaining trust among stakeholders, customers and certifying agencies through transparent DT processes.
The digital twin methodology will continue to evolve over time. Specifically, we expect that emerging technologies, such as AI, autonomy, adaptive and smart manufacturing, and even quantum computing, may play a role in this evolution. We envision a time when many standard component designs and manufacturing instructions/processes will be completely automated based upon proven deep knowledge kernels instantiated in the trusted database. Once again, this approach will require a substantive amount of closed-loop verification and validation of a continuous nature. The operative concept in this process is the use of closed-loop feedback to continuously validate how well these autonomy-based knowledge kernels and standardization rules operate. Consequently, we envision that the committed V&V methodology for digital twins will itself evolve, requiring a set of autonomously established continuous stress tests. The enterprise must accept significant structural and cultural changes to succeed with a DT methodology. The authors firmly counsel that adopting these necessary enterprise modifications will not be easy and, as such, will require top-down leadership to respond to structural and cultural changes with the necessary corporate resources (adequate and digitally literate staff, leadership, hardware-software computing and communication infrastructure, and budget). Thoughtful and deliberate implementation of DTs has the potential to deliver "faster, cheaper and better" products by facilitating situational awareness and
effective organizational decision-making by having the right models/knowledge/data from the right information sources in the right context to the right stakeholder at the right time for the right purpose, viz., design, manufacturing, optimal operation, monitoring, and proactive maintenance of the products. We recommend that DT-committed enterprises continue to commit to and gear up for DTs by adopting the following general principles.
• Standardized DT Computing/Ecosystem: The enterprise and related stakeholders must commit to a standardized DT software taxonomy, architecture, and data structure, complemented by an adequately, and continuously, trained digitally literate cadre. This commitment will involve substantive capital and support investment.
• Platform-based Design: Platform-based design, facilitated by the substantive DT knowledge base and communication with the trusted stakeholders (including the intended customer), is a rigorous approach that will avoid costly requirements creep and late design changes and accelerate the time to market of a quality product or service.
• MBSE and Intelligent Knowledge/Information Databases: With the ability to represent complex virtual replicas of products in all of their life cycle phases, coupled with their functional requirements and specifications, MBSE tools and deep physics-based domain analyses will accelerate and reduce the cost of product development by discovering design issues/failures early. During product development and testing, the DT methodology greatly improves the configuration and performance accuracy of the product by near real-time updates of the current knowledge and deep physics-based models, thereby greatly enhancing product performance and safety while substantially reducing cost and schedule delays during the subsequent test programs. The knowledge bases enable the enterprise to address the product's manufacturing, operational and disposal issues early in the product's life cycle.
• System Health Management: Use of SHM Twins commencing in the Conceptual Phase is extremely valuable because the controllability of life cycle costs is greatest in the early conceptual phase, when uncertainty in cost estimation is the highest and creative advanced design of fault management is a key for cost control in product sustainment. Extensive use of SHM digital twins in the product development phase enables the Project Team and operational planners to identify and mitigate technical, business, and financial "risks" in deployment in a much-accelerated and cost-effective manner. The SHM Twin has the potential to make every service technician an expert through guided troubleshooting with rich AR/VR interfaces and by rapidly generating accurate field failure scenarios for training and support. FRACAS data, linked through the DT information network, will result in a continuous improvement of the product with a substantial positive effect on field operations, customer satisfaction and safety.
• DT Interoperability: Introduction and operation of digital twins in an enterprise is a significant economic undertaking, results in a different way of thinking, and takes time, people, and capital to implement. We recommend that seamless
interoperability be stressed across all digital twins, domains, and stakeholders. Achieving this extreme, but very necessary, level of communication and interoperability will require standardization of software taxonomy, architecture, and data structures as well as commonality of models. This commitment can only be achieved by the senior leadership proactively supporting the decisions of the dedicated team chartered to implement the DT process throughout the enterprise.
• Agile, Resilient and Secure Processes: Digital twin-committed enterprise structures are expected to be congruent with the DT activities. They employ robust, flexible, and agile process management strategies to achieve resilience in uncertain business environments. As modern enterprises demand enhanced confidentiality and government restrictions strictly limit the distribution of sensitive information, centrally controlling all aspects of DT integrated information using a dynamically managed accessibility policy process is essential. It is important for a DT-committed enterprise to invest in talented and committed individuals dedicated to continuous improvement, while recognizing the need for security and protecting the enterprise's confidential information.
• Continuous DT Validation: An active DT network may very well require a continuous DevOps commitment to assure the product's validity and safe operation. If linked digital twins are validated prior to their use in decision-making and are continuously maintained, design and operational errors will be minimized.
• Stakeholder Trust: For successful implementation of digital twins in an enterprise, it is essential that organizations gain the trust of certifying agencies (e.g., FAA, EASA) and customers in the integrity of enterprise processes and that, in turn, the certifying agencies and targeted customers embrace the enterprise's deep physics-based models, tools and processes in supporting eventual product certification/qualification and ultimately the application of the DT processes.
References
1. Aerodynamic Solutions. (2020). Hyper efficient turbomachinery CFD analysis using GPU accelerators. RTRC Full Engine Symposium, August 12, 2020. 2. AGARD. (1980). Design to cost and life cycle cost. AGARD-CP-289. 3. AMSAA. (2011). Design for reliability handbook. TR-2011-24, 2011. Available at: https://www.dac.ccdc.army.mil/Documents/CRG/Design%20for%20Reliability%20Handbook%20(TR-2011-24).pdf. Accessed 7 Feb 2021. 4. AWS. (2020). Introduction to DevOps on AWS, 2020. https://d1.awsstatic.com/whitepapers/AWS_DevOps.pdf. Accessed 31 Mar 2021. 5. Barricelli, B. R., Casiraghi, E., & Fogli, D. (2019). A survey on digital twin: Definitions, characteristics, applications, and design implications. IEEE Access, 7, 167653–167671. 6. Basile, F., Chiacchio, P., & De Tommasi, G. (2009). An efficient approach for online diagnosis of discrete event systems. IEEE Transactions on Automatic Control, 54(4), 748–759. 7. Behrendt, A., de Boer, E., Kasah, T., Koerber, B., Mohr, N., & Richter, G. (2021). Leveraging Industrial IoT and advanced technologies for digital transformation: How to align business, organization, and technology to capture value at scale. McKinsey & Company. 8. Bertsekas, D. P. (2019). Reinforcement learning and optimal control. Athena Scientific.
9. Boschert, S., Heinrich, C., & Rosen, R. (2018). Next generation digital twin. Proceedings of TMCE, Las Palmas de Gran Canaria, Spain. Available at: Edited by: Horvath I., Suarez Rivero JP, and Hernandez Castellano PM, 2018. 10. Box, G. E. P. (1976). Science and statistics. Journal of the American Statistical Association, 71(356), 791–799. https://doi.org/10.1080/01621459.1976.10480949 11. CPSC. (2021). Recall list. Available at: https://www.cpsc.gov/Recalls. Accessed: 6 Feb 2021. 12. Cloutier, R. J., & Verma, D. (2007). Applying the concept of patterns to systems architecture. Systems Engineering, 10(2), 138–154. 13. CMMI. (2006). Standard CMMI Appraisal Method for Process Improvement (SCAMPI) A, version 1.2: Method definition document. https://resources.sei.cmu.edu/library/asset-view. cfm?assetID=7771. Accessed 11 Apr 2021. 14. Darwiche, A. (2009, April). Modeling and reasoning with Bayesian networks. Cambridge Univ. Press. 15. Deb, S., Pattipati, K. R., Raghavan, V., Shakeri, M., & Shrestha, R. (1995). Multi-signal flow graphs: A novel approach for system testability analysis and fault diagnosis. IEEE Aerospace and Electronics Magazine, 5, 14–25. 16. Deb, S., Pattipati, K. R., & Shrestha, R. (1997, September). QSI’s integrated diagnostics toolset. 1997 IEEE AUTOTESTCON. 17. Digital Twin Consortium. (2020). https://www.digitaltwinconsortium.org/industries/ aerospace-and-defense.htm. Accessed 1 Apr 2021. 18. Flight Control Division. (1970, April). Mission operations report. NASA Manned Spacecraft Center. 19. Friedenthal, S., Moore, A., & Steiner, R. (2014). A practical guide to SysML: The systems modeling language. The MK/OMG Press. 20. Global Horizons. (2013). United States air force global science and technology vision. AF/ST TR 13-01. 21. Heller, T. (2000, August). If only we had known sooner: Developing knowledge of organizational changes earlier in the product development process. IEEE Transactions Engineering Management, 47, 335–344. 22. Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics. American Economic Review, 93(5). 23. Keiner, W. (1990). A navy approach to integrated diagnostics. In IEEE Autotestcon 90 conference record (p. 443-450). 24. Kim, Y-W. (2020). Digital twin maturity model. WEB 3D 2020 The 25th International ACM Conference on 3D Web Technology, November 9–13, 2020, Virtual Conference, Seoul, Korea Industrial Use Cases Workshop on Digital Twin Visualization. 25. Kingsley M., (2009, May 27–29) Advanced mechanical diagnostics and lessons learned from S-92® aircraft. American Helicopter Society 65th Annual Forum, Grapevine, Texas. 26. Korbryn, P. (2016, Spring). Aircraft digital thread: An emerging framework for lifecycle management. AFRL Presentation. 27. Kruse, B., & Shea, K. (2016). Design library solution patterns in SysML for concept design and simulation. Procedia CIRP, 50, 695–700. 28. Levchuk, G.M., Yu, F., Levchuk, Y.N. and Pattipati, K.R. (2003, June) Design of organizations: From hierarchies to heterarchies. 8th International Command and Control Research and Technology Symposium, TRACK 3: MODELING & SIMULATION. Washington, DC. 29. Levchuk, G. M., Meirina, C., Pattipati, K. R., & Kleinman, D. L. (2004). Normative design of project-based organizations: Part III – Modeling congruent, robust and adaptive organizations. IEEE Transactions on Systems, Man, and Cybernetics: Part A: Systems and Humans, 34(3), 337–350. 30. Luo, J., Tu, H., Pattipati, K. R., Qiao, L., & Chigusa, S. (2006, August). 
Graphical models for diagnostic knowledge representation and inference. IEEE Instrumentation and Measurement Magazine, 9(4), 45–52.
Dr. Kenneth M. Rosen has over sixty years of experience in the Aerospace, Propulsion, Turbomachinery, manufacturing and systems engineering communities, much of which has been at the leadership level. Dr. Rosen is the founder and President of General Aero-Science Consultants, LLC. (GASC), organized in 2000; and a Principal Partner of Aero-Science Technology Associates, LLC (ASTA), formed in 2002. Both GASC and ASTA are engineering and business development consulting firms. He is currently an active consultant concentrating on innovative VTOL aircraft, aircraft gas turbines and electric propulsion. From 2000 to 2002, Dr. Rosen served as Corporate President of Concepts-NREC. Prior to this, he spent over 38 years with the United Technologies Corporation. He quickly moved to Sikorsky Aircraft, where he held many major engineering and management positions, including VP of Research & Engineering and Advanced Programs & Processes, directing such advanced projects as the UH-60 Black Hawk, RAH-66 Comanche, S-92 (2003 Collier trophy winner), Cypher (UAV), S-76 and X-Wing helicopters. During this period, he was a member of the Sikorsky Executive Board and managed all of Sikorsky’s research, systems
engineering, product development, design, production engineering, ground/flight test and avionics/systems integration efforts. Dr. Rosen holds five US patents and has written numerous papers in the fields of helicopter design, tilt rotor optimization, product development, propulsion, aerothermodynamics, icing, and systems engineering. He holds MS and PhD degrees in mechanical engineering from Rensselaer Polytechnic Institute and is a graduate of the Advanced Management Program at the Harvard University Business School. Dr. Rosen is an elected member of the National Academy of Engineering (NAE), the Connecticut Academy of Science and Engineering, and holds Fellow rank in the following societies: the ASME, the Royal Aeronautical Society (RAeS), the Society of Automotive Engineers (SAE), the AIAA and the Vertical Flight Society (AHS). He is also a recipient of the NASA Civilian Public Service Medal, the Dr. Alexander Klemin Award for lifetime achievement from the AHS, and Vice President Al Gore’s “Hammer” award from the DOD for innovative cost management. In 2007, the AHS selected him to deliver the Dr. Alexander A. Nikolsky Honorary Lectureship and NASA cited him for his work in Heavy Lift Helicopters as part of the NASA Group Achievement Award. He has served as Chairman of the Daniel Guggenheim Medal Board of Award, Chairman of the Board of the Rotorcraft Industry Technology Association, Chairman of the UTC Engineering Coordination Steering Committee, Vice Chairman of the Software Productivity Consortium, and Chairman of the AIA Rotorcraft Advisory Group. Additionally, he has been a long-term member of NASA’s Aeronautics and Space Transportation Technology Advisory Committee, the SAE Aerospace Council, and the NRC Assessment Panel on Air and Ground Vehicle Technology for the Army Research Laboratory. Krishna R. Pattipati is currently the Distinguished Professor Emeritus and the Collins Aerospace Chair Professor of Systems Engineering in the Department of Electrical and Computer Engineering at UConn. His research interests are in the application of systems theory, combinatorial optimization and inference techniques to agile planning, anomaly detection, and diagnostics and prognostics. Common characteristics among these applications are uncertainty, complexity, and computational intractability. He has published over 500 scholarly journal and conference papers in these areas. He is a Cofounder and Chairman of the Board of Qualtech Systems, Inc., a Rocky Hill, CT firm specializing in advanced integrated diagnostics software tools (TEAMS-Designer diagnostics and prognostics, TEAMS-RT®, TEAMS-RDS®, TEAMATE®, PackNGo®), and serves on the board of Aptima, Inc. located in Woburn, MA. Dr. Pattipati received the Centennial Key to the Future award from the IEEE Systems, Man, and Cybernetics (SMC) Society in 1984. He has served as the Editor-in-Chief of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS – PART B from 1998 to 2001. He was corecipient of the Andrew P. Sage Award for the Best SMC
Transactions Paper for 1999 and the Barry Carlton Award for the Best IEEE Aerospace and Electronic Systems (AES) Transactions Paper for 2000. He has won nine Best Paper Awards at several testing and command and control conferences. He also received the 2002 and 2008 NASA Space Act Awards for “A Comprehensive Toolset for Model-based Health Monitoring and Diagnosis,” and “Real-time Update of Fault-Test Dependencies of Dynamic Systems: A Comprehensive Toolset for Model-Based Health Monitoring and Diagnostics”, the 2005 School of Engineering Outstanding Teaching Award, and the 2003 AAUP Research Excellence Award at UCONN. He is an elected Fellow of IEEE for his contributions to discrete-optimization algorithms for large-scale systems and team decision-making and of the Connecticut Academy of Science and Engineering.
The Digital Twin for Operations, Maintenance, Repair and Overhaul Pascal Lünnemann, Carina Fresemann, and Friederike Richter
Abstract Looking at digital twins in terms of their information sets (master and shadow models), a significant part of the shadow models is created in the context of product life. Digital twins must be designed accordingly, focusing on their dedicated added value or business model. This concerns not only the information and data models, but also the communication technologies, processing routes and interaction mechanisms used. With appropriately designed digital twins, product life becomes a source of knowledge for optimizing or tracking product systems. MRO processes play a special role in this. Here, the digital twin becomes a monitoring system, information source, process manager or information sink through suitable functions and thus a potential knowledge repository. Keywords Maintenance, Repair & Overhaul (MRO) · Product · Life cycle
With the completion of production, the product system enters the use phase and thus one of the main phases of the product life cycle. According to a study from 2020 [1], the use phase is also one of the most important life cycle phases for generating added value through digital twins. It is also the most relevant phase for generating digital shadows, adapting digital masters and generating benefit from the digital twin [1]. These processes and the central functions supported by digital twins in the context of product use are discussed in more detail below.
P. Lünnemann (*) Fraunhofer Institute for Production Systems and Design Technology, Berlin, Germany e-mail: [email protected] C. Fresemann TU Berlin Chair of Industrial Information Technology, Berlin, Germany F. Richter SIEMENS Large Drive Applications, Berlin, Germany © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_23
Two potential use cases are promising candidates for added value: on the one hand, the comparison of actual and expected values during product use, enabling e.g. predictive maintenance [2–4]; on the other hand, the feedback to design in order to improve future product generations. Both use cases require extended data management. The focus here is on the intended function of the twin system as a request-creating instance. The aim is generally not to collect as much data as possible, but to collect specific data sets from the product life in a dedicated manner and, if necessary, to process them close to where they are created. The twin system itself can be used as the basis for the provision of value-added services. A special feature of the utilization phase are the Maintenance, Repair and Overhaul (MRO) issues. As a popular example, “predictive maintenance” is often the focus of consideration [5–8]. The continuous connection to the product system and the possibility of comparing the actual operating behavior with the expected behavior make it possible to reconstruct the expectations, which are otherwise made under best assumptions, on a product-specific basis. In this way, plant availability can be increased [5], maintenance and material costs can be significantly reduced [8], and precise, instantiated cost and quality control over time can be enabled.
1 Considered Use Case In the following, the challenges and solution principles of digital twins in the utilization phase and in MRO are explained on the basis of electric drives. Asynchronous and synchronous motors up to 100 MW in the high-voltage and high-power spectrum are considered. These motors have a wide range of applications, from mill drives to high-speed compressors to ring motors, with customers from various industries such as oil and gas, energy, chemical, marine or metal and mining. For high-performance motors there exist specific market and customer requirements. The motors are designed specifically for the individual customer. Therefore, adapted development and supply chain management methods are necessary. The overall process is specialized in make-to-order production (Fig. 1). Today, extensive condition monitoring systems are used to protect these engines. The scope of motor monitoring and the selection of sensors depend on technical boundary conditions such as speed, type of bearing, load profile or operating environment. The sensor data are available to both the plant operator and the motor supplier, either on-site at the customer or online, to draw conclusions about the condition. In order to achieve the longest possible lifetime, stepped maintenance intervals are recommended for each motor type in addition to the defined monitoring. After a defined number of operating hours, a mechanical motor specialist will inspect the motor from outside and inside for, e.g., dirt, debris and corrosion. These inspections include a visual check of the winding overhang and a check for potential oil leakages. A preventive inspection after 6 years of operation, carried out by at least a mechanical motor specialist and an electrical motor specialist, is highly recommended. Depending
Fig. 1 High-voltage high-performance motor [9]
on the motor type, this service level takes 3–5 working days, and disassembly of, e.g., the main power supply and the water and oil supply piping is required. In addition to the visual inspections of the first level, the bearings, the complete stator and rotor, and the complete winding overhang are now examined. On the electrical side, the service expert checks the electrical connections and sensors, checks the polarization index of the stator winding and measures the DC resistance. The next preventive maintenance step is even more detailed and requires the separation of the rotor from the stator. Therefore, the outage is 10–12 working days and is recommended after approx. 12 years. More experts are required, such as a cooler specialist or a measuring engineer. Additionally, the service experts inspect the hydrostatics and the heat exchanger, replace parts recommended from earlier inspections, and measure tan delta and partial discharge. To extend the life of the electric drive, an overhaul is recommended by the manufacturer after about 20 years. For this purpose, the motor must be taken to a certified workshop and completely disassembled and inspected by various experts. Usually, all components are overhauled and recommended spare parts are replaced. This process takes up to 25 working days, depending on the motor type. All these inspections and maintenance activities are to be carried out by qualified specialists and include mechanical and electrical inspections. Therefore, engineering expertise (mechanical and electrical) is necessary. By using digital twins, it will be possible in the future to carry out continuous preventive maintenance on electric drives. Maintenance intervals no longer have to be determined statistically and on the basis of empirical values. Real-time analyses and automated mitigation recommendations will give rise to new, data-driven business models that create added value for the customer, such as replacing only components
that run out of life. In addition, development can improve its models based on field data and thus improve product quality through feedback on design concepts. In the following, we will discuss how digital twins will change the use of machines as well as their maintenance, repair and overhaul.
2 The Role of Digital Twins in the Use of Product Systems The application of digital twins in the use phase is obviously determined by the tasks of the digital twin according to the formulated business model [10]. The use of the digital twin as a function carrier, as a monitoring system or as a source of knowledge for the use of the product system can be distinguished here. As varied as the far-reaching concept of digital twins is, so are the implementations in operation; accordingly, only a categorization of the circumstances to be considered can be made in the following, based on [11] (Table 1). A first distinction arises in the question of which data sources are used within the framework of the digital twin. The use phase of the product system is often the essential phase for data acquisition for digital twins [1]. Prior to this, depending on the concept and business model pursued, data is fed to the digital twin in the context of production and assembly. In the context of product deployment, two data sources can be distinguished for the twin: data from people and data from machines. These two categories differ fundamentally in their content and form due to the degree of standardization. While machines, assuming they function properly, always deliver data in the same and predictable form and quality, human data sources vary because perception shapes what is reported. At the same time, the processing of human-generated data is more difficult, for example with regard to standardized interfaces and forms in the interaction, or the condensation of the data into information. Another distinction for field data results from the need for real-time capability, which is theoretically applicable to both forms of data collection (human and machine), but in practice is only applied to machines. In this context, the functions of the digital twin concept determine the frequency of data exchange to be implemented. This becomes clear if we want to provide a function that performs an action within a certain time, for example the digital twin sending a warning within 5 min, on the basis of the data from operation. In the case of an electric drive, for example, these can be protection signals due to undesired vibrations on the rotor, bearing or foundation. Subtracting the necessary transmission and computation time results in the minimum requirement for the real-time capability of the twin system. We distinguish a hard and a soft form of real-time capability. In a hard real-time requirement, we expect data from the system at a fixed time; in a soft real-time requirement, we expect the data within a fixed interval.
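To make the real-time consideration above concrete, the following minimal sketch (Python, purely illustrative; only the 5-minute warning window is taken from the text, all other figures and names are assumptions) subtracts assumed transmission and computation times from the required reaction window to obtain the latency budget left to the twin system, and encodes the hard versus soft deadline distinction.

from dataclasses import dataclass

@dataclass
class RealTimeRequirement:
    # Reaction window the twin must meet, e.g. "send a warning within 5 min".
    reaction_window_s: float   # total time allowed from event to action
    hard: bool                 # True: fixed deadline; False: deadline plus tolerated interval

def twin_latency_budget(req: RealTimeRequirement, transmission_s: float, computation_s: float) -> float:
    # Time remaining for the twin system once transport and processing are subtracted.
    return req.reaction_window_s - transmission_s - computation_s

def deadline_met(req: RealTimeRequirement, observed_latency_s: float, tolerance_s: float = 0.0) -> bool:
    # Hard requirement: data/action by the fixed time. Soft requirement: within the fixed interval.
    limit = req.reaction_window_s if req.hard else req.reaction_window_s + tolerance_s
    return observed_latency_s <= limit

# Illustrative values (assumed): 5 min warning window, 20 s transmission, 40 s computation.
warning = RealTimeRequirement(reaction_window_s=300.0, hard=False)
print(twin_latency_budget(warning, transmission_s=20.0, computation_s=40.0))  # 240.0 seconds remain
print(deadline_met(warning, observed_latency_s=310.0, tolerance_s=30.0))      # True for the soft form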
Table 1 Design in 8D-Model

Integration depth: What is represented in the twin?
  Feedback to design: For feedback to design, the focus is primarily on the machine system and its components. For the interpretation of measured values, however, it is necessary to take the environment into account.
  Predictive maintenance and MRO: For predictive maintenance, the focus is primarily on the machine system and its components. For adaptive MRO, surrounding infrastructures such as production and measurement systems are of interest in addition to the product system.

Connectivity: How are the real and digital worlds connected?
  Feedback to design: A monodirectional data connection is sufficient for feedback to design.
  Predictive maintenance and MRO: For predictive maintenance, a monodirectional data connection is sufficient. For adaptive MRO, multidirectional communication must be provided.

Update frequency: How often do the twin and represented system communicate?
  Feedback to design: Irregular data provision is sufficient for feedback to design.
  Predictive maintenance and MRO: For predictive maintenance, irregular data provision is sufficient. For adaptive MRO, real-time data provision is necessary.

Intelligence: How comprehensive is the “intelligence” of the digital twin?
  Feedback to design: Automated functions are mainly used for feedback to design.
  Predictive maintenance and MRO: For predictive maintenance and adaptive MRO, automated functions are sufficient. AI approaches can provide useful support.

Simulation capabilities: What simulations are included in the twin?
  Feedback to design: Simulation is not absolutely necessary for feedback to design.
  Predictive maintenance and MRO: Adaptive simulations can be very helpful for predictive maintenance and adaptive MRO.

Digital model richness: Which models are included in the twin?
  Feedback to design: The digital models depict geometries, kinematics and behavior in various physical dimensions according to the feedback sought.
  Predictive maintenance and MRO: For predictive maintenance, behavioral data is usually relevant. Adaptive MRO usually also requires geometric and kinematic models.

Human interaction: Which human interactions are included?
  Feedback to design: Human interaction is mostly represented by information diagrams.
  Predictive maintenance and MRO: The interaction in predictive maintenance and adaptive MRO can usually be adequately solved by visualisation in diagrams.

Product life cycle: Which phases of the life cycle are included?
  Feedback to design: The focus of consideration is on use and development or production and development.
  Predictive maintenance and MRO: Predictive maintenance and adaptive MRO focus on the use phase.
A similar consideration arises for the necessary intermediate storage and compression of the collected data. This requirement conflicts with real-time capability, especially in mobile systems without reliable network connections. Both the real-time capability and the intermediate storage must take the connectivity of the product system into account or formulate requirements for it. Connectivity sometimes also has a limiting effect on the ability to send data or commands to the digital twin. This must be taken into account during the design phase as well. Another category to be considered is the integration capability of the digital twin. In the application of digital twins, there may be a requirement to establish connections to other product systems or to the digital twins of these systems. These, like the twin itself, can change in their functions and interfaces, which could limit the overall
functionality. Accordingly, it is necessary to monitor these interactions as well as the twin itself and, if necessary, to provide for maintenance – a circumstance that will be examined in more detail below. Another point to consider in the use phase of the products is the question of which stakeholders are to be addressed by the twin. It is important to understand the use cases of the stakeholders and the needs formulated there. Scenarios include the further use of the information provided in IT systems, for example the integration of status data of individual subsystems into a dashboard of an overall system, the presentation of information, and the short-term or long-term archiving of information for later use. The data processing in the digital twin begins, looking at the digital shadow, in the sensor. Here, circumstances of the physical world are converted into signals by suitable technologies. Depending on the type of sensor, these data are passed on by means of digital or analogue signals to a computing unit which translates the signal from the sensor into a datum. Before or in this first processing instance, depending on the concept, approaches such as sensor fusion (following a data fusion or feature fusion approach) are also used. Here, close to the physics, the highest sampling rate or data transfer rate is found in most concepts. In some concepts, signal processing is done on dedicated programmable logic controllers (PLCs) before the data is passed to an edge system. Other concepts connect the sensors directly to the edge. Further processing of the signals on the edge is done using the defined algorithms or applications until the data is temporarily stored and converted for onward transmission. This data processing within the framework of edge systems can generally be configured and adapted to the circumstances of the respective product instance. For example, products that are less frequently in contact with cloud systems may have different data retention and aggregation tactics than those that are in reliable exchange. Depending on the intended use, it can make sense to provide so-called fog computing between the edge and the cloud, which relates several edges to each other. In this way, control loops can be set up close to the relevant physics, which can react much faster and more reliably to the system state than is possible with cloud systems. Depending on the available bandwidth, the reliability of the connection, the data to be transmitted and the purpose of use, the communication between edge or fog and the cloud will be continuous or in packages (Fig. 2). Digital twins can also be a critical component for product function in the use phase [12]. Smart product concepts, for example, deliberately address the adaptation of the environment and the integration of external services to provide functions. Here, the concept of digital twins fits seamlessly into the definition of smart products and can form a backbone of them [13]. In these cases, the digital twin itself can become a crucial component of the business model, whereby – it should be emphasized again – digital twins must never be designed as an end in themselves, but always for a specific task [11, 14].
Fig. 2 Data processing from the sensor to the cloud
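The staged processing just described, from sensor signal through edge-side aggregation to continuous or packaged transmission toward fog or cloud (Fig. 2), might be sketched roughly as follows. This is a simplified illustration only; the buffer size, the mean/min/max aggregation and the reliable-connection flag are assumptions rather than features of any specific product.

from statistics import mean
from typing import Callable, Dict, List

class EdgeNode:
    # Minimal edge-side buffer: forward single samples when the connection is reliable,
    # otherwise condense buffered samples into a package before transmission.
    def __init__(self, send_upstream: Callable[[Dict], None], package_size: int = 60):
        self.send_upstream = send_upstream   # stand-in for the fog/cloud endpoint of the shadow
        self.package_size = package_size
        self.buffer: List[float] = []

    def on_sample(self, value: float, reliable_connection: bool) -> None:
        self.buffer.append(value)
        if reliable_connection:
            # continuous transmission: pass each datum on as it arrives
            self.send_upstream({"type": "sample", "value": value})
            self.buffer.clear()
        elif len(self.buffer) >= self.package_size:
            # packaged transmission: aggregate before sending to save bandwidth
            self.send_upstream({"type": "aggregate", "mean": mean(self.buffer),
                                "min": min(self.buffer), "max": max(self.buffer),
                                "n": len(self.buffer)})
            self.buffer.clear()

# Illustrative use with print() standing in for the cloud interface.
edge = EdgeNode(send_upstream=print, package_size=5)
for v in [0.9, 1.1, 1.0, 1.2, 0.95, 1.05]:
    edge.on_sample(v, reliable_connection=False)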
An example of the adaptivity of product systems through digital twins is the adaptation of the system behavior to the actual operating modes and the corresponding optimization of the system control. The basis for this is, on the one hand, an understanding of the actual use of the product based on operating data and, on the other hand, the mapping of the system behavior in the digital master. Due to the cost-optimized development of product systems, these generally have considerable margins and reserves with regard to robustness against different operating variants, even in the case of customized developments. The design of a product system as a series product, for example, is conceived in such a way that the vast majority of potential customers do not have to expect any damage. Customized products are also often based on components and controls that are as standardized as possible in order to increase efficiency in development and production and to reduce costs. These modules often have residual capacities compared to the actual application. By comparing these residual capacities in series and individual products with the operating modes in use, the efficiency of the product system can be adapted to the customer’s needs: on the one hand, the operation can be optimized, and on the other hand, by mapping the actual customer use, the future product generation can be conceptualized more adequately and the use of resources can thus be reduced. Another way to react to this information availability is to offer performance or efficiency improvements to the operators. Here, the actual operating behavior can provide information on the extent to which residual capacities are available in the system, whether these can be used in operation, and what influence operation at higher power levels would have on the service life of the product system.
3 Learning from the Field As an essential phase for the generation of data for the digital shadow, the use phase of the product life cycle is the crucial source for feedback-to-design concepts [15]. Boundary conditions have to be considered both in the generation and collection of the shadow data and in the creation of the digital master. The starting point for the development of functions for feeding information back into the design is either the data potential of previous collections or the information needs of the engineers [15, 16]. Depending on the concept, the design starts either from the digital master or from the digital shadow. If a concept is developed from the data potential, the digital shadow must first be designed in such a way that information potentials can be formed. The available data from sensors,
Fig. 3 Feedback systematics with digital twins
IT systems and interfaces to people must be processed in this way. In the processing, the above-mentioned limitations must be taken into account and the entire data flow from generation to the data sink at the addressees, including the limitations of the communication technologies used, must be considered. It is also crucial to be able to interpret the data from the digital shadow. This is the role of the digital master, which represents the reference point of the digital shadow. The product system instance-specific design provides the necessary categorization of collected data for reliable aggregation into information. It becomes clear that for the interpretation of the data it is necessary to keep the digital master up to date with the product system in order to avoid a falsified interpretation. If the starting point for feedback-to-design concepts is formed from the information needs of the developers, then modelling for these questions must begin (Fig. 3). The starting point for this is an analysis of the critical components to be considered, for example by means of a Root Cause Failure Analysis (RCFA) [17]. It must be determined which data of the real system can be collected via which measuring devices and in which quality, and how these data relate to the product system. For a high-voltage motor as part of a critical asset such as an oil production facility, for example, this results in extensive monitoring of the rotor dynamics, the electrical system, the cooling, the stator winding heads and other components. The Define Measure Analyze Improve Control (DMAIC) cycle provides a guideline as an established process for determining this [18]. In this way, the template for the digital shadow is created from the data to be collected and the digital master from the relation to possible damage to the product system. The design of master and shadow also results in further requirements for the system, for example sensors to be used, storage, computing and communication systems for the instantiation of the digital shadow, its calculation and administration. In their entirety, digital masters and shadows of numerous product systems thus create a source of knowledge for feedback-to-design concept. In the case of electrical engines, it will be possible to understand damage mechanisms in more detail, to optimize the design of the machine and to adapt new product generations more specifically to the actual operating conditions. The result is a contribution to the demand-oriented use of resources.
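As an illustration of how such a shadow template might be written down, the sketch below pairs critical components (as identified, for example, by an RCFA) with the measured variable, the measuring device and the required data quality. All entries and field names are invented examples; which quantities are actually monitored depends on the specific motor and business model.

from dataclasses import dataclass
from typing import List

@dataclass
class ShadowSignal:
    # One entry of the digital-shadow template: what is measured, how, and in which quality.
    component: str          # critical component identified in the RCFA
    quantity: str           # physical quantity to be observed
    sensor: str             # measuring device providing the data
    sample_rate_hz: float   # required acquisition rate
    accuracy: str           # required data quality

# Invented example entries for a high-voltage motor (for illustration only).
shadow_template: List[ShadowSignal] = [
    ShadowSignal("rotor/bearing", "shaft vibration", "accelerometer", 1000.0, "+/- 0.1 mm/s"),
    ShadowSignal("stator winding", "winding temperature", "PT100", 1.0, "+/- 1 K"),
    ShadowSignal("cooling circuit", "coolant flow", "flow meter", 1.0, "+/- 2 %"),
]

# The digital master would relate each signal to possible damage mechanisms
# (e.g. shaft vibration to bearing wear) so that the shadow data can be interpreted.
for s in shadow_template:
    print(f"{s.component}: {s.quantity} via {s.sensor} at {s.sample_rate_hz} Hz ({s.accuracy})")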
4 Maintenance with Digital Twins The basis for prediction over time is built by the collection of sensor data and the integration of these sensor data into corresponding ageing prediction models [4]. In order to permanently compare the design, the forecast and the actual product condition, the engineering target values and models need to be kept up to date within the scope of maintenance and repair. Digital twins in the maintenance of electrical drives significantly increase their reliability [19]. The reason for this is the ability to continuously provide data on the operating status and – this distinguishes the digital twin from pure condition monitoring – to relate these data to the individual system. In addition to its ability to support maintenance, however, the digital twin itself also requires maintenance in order to be able to carry out its operation reliably and safely. Various approaches can be taken in condition monitoring using the digital twin. These are illustrated in Fig. 4. Deviation detection (left and middle picture) and comparison with simulations (right picture) are examined in more detail below. In the case of deviation detection, shadow data consisting of sensor, system or interface data, but also their processing into key indicators, are considered over time. For each of these values, an acceptable tolerance range is defined from which the considered value may not deviate (Fig. 4, direct monitoring of the sensor values). The tolerance ranges can depend on the current operating stage and on product-specific masters (Fig. 4, configured monitoring of the sensor values). For the electrical drive, for example, certain vibrations of the engine are regular; if the tolerance measure is exceeded, the shaft bearings are most likely beginning to be, or already are, damaged. The dynamics of the change can also be considered within the scope of this monitoring. If the value moves outside the tolerance range, a trigger signal is provided on one of the processing platforms used (edge, fog, cloud). This can restrict the permissible operating modes for the self-protection of the machine, but also inform the responsible system supervisors about the deviation. An example of such an exceeded limit in electrical drives is the value range of the winding temperature, monitored by appropriate sensors. If a certain temperature is exceeded, a warning will pop up; if a second limit is exceeded, the drive will shut down.
Fig. 4 Approaches to deviation detection
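Direct, threshold-based deviation detection of this kind can be reduced to a few lines. The sketch below mirrors the stepped warning and shut-down logic described above for the winding temperature; the numeric limits and the callbacks are illustrative assumptions only, not values from any data sheet.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ToleranceBand:
    # Acceptable range for one monitored key indicator; in practice the limits may
    # depend on the operating stage and on the product-specific digital master.
    warn_limit: float
    shutdown_limit: float

def check_winding_temperature(value_c: float, band: ToleranceBand,
                              warn: Callable[[str], None],
                              shutdown: Callable[[str], None]) -> Optional[str]:
    # Direct monitoring of a sensor value against two stepped limits.
    if value_c >= band.shutdown_limit:
        shutdown(f"winding temperature {value_c} C above {band.shutdown_limit} C: shutting down the drive")
        return "shutdown"
    if value_c >= band.warn_limit:
        warn(f"winding temperature {value_c} C above {band.warn_limit} C: warning issued")
        return "warning"
    return None  # value within the tolerance range

# Illustrative limits (assumed).
band = ToleranceBand(warn_limit=120.0, shutdown_limit=140.0)
check_winding_temperature(125.0, band, warn=print, shutdown=print)   # triggers the warning
check_winding_temperature(145.0, band, warn=print, shutdown=print)   # triggers the shut-down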
A more complex approach is the comparison with simulations (Fig. 4, simulation-based monitoring of sensor values). The aim here is to map the operating behavior of the system under consideration in a simulation in such a way that a basis for comparison with the actual operating behavior is given. Especially when the key indicators are conditioned by numerous parameters, the simulation approach can be a good choice compared to deviation detection. Depending on the purpose, the simulation models are adapted to the individual product instance. If, for example, the rotor vibration condition of an engine is to be monitored, various physical measured variables must be monitored by corresponding sensors (e.g. rotor shaft vibration and bearing temperature). The aim is to generate added value for the customer through improved machine transparency on the one hand, and to derive a recommendation for action in time to prevent unplanned downtimes or inefficient operation on the other hand. The comparison of the measured variables must be carried out with the corresponding combined simulations for rotor dynamics, considering the individual electrical design and repair state. The simulation models must satisfy the response-speed requirements of the monitoring control loop. Because of the calculation time, this is a particular challenge for highly dynamic characteristic values; surrogate models might bring improvement. For both approaches, the monitoring of sensor, system and interface data can also be used to trim the expected normal behavior of the system. Depending on the operational behavior of the system, the individually achieved quality of the data generators and the interaction with peripheral systems, the behavior predicted in the context of product qualification can deviate from the behavior in the field. In the context of high-performance motors, environmental parameters such as air temperature, foundation stiffness, inertial behavior of the driven system and vibrations of surrounding equipment determine the normal behavior of the drive. This actual normal behavior found in the field should be considered when defining tolerance ranges and also as a verification approach for the simulations. It is crucial that the calibration of the system is also documented as such in order to be able to feed this knowledge back into development. For electric drives, as with most physical systems, it must be considered that the characteristic values are subject to a run-in phase of the system. This is caused by settling connections and friction points. Accordingly, the respective characteristic values should be checked at regular intervals for the validity of the defined tolerance ranges. The effect of the running-in of the systems is also the advantage of a dynamic consideration of key figures: here, the wear of the system is less strongly represented than a sudden change. In general, however, the system developers are in a very good position to describe what behavior the system would show before and in the event of damage, so that the definition of the key figures to be considered can usually be made by them. In dialogue with experts for sensor systems, and considering the financial framework conditions, monitoring systems for product systems can thus be built using digital twins. For the topic of rotor vibration condition, for example, remote support by factory experts for data evaluation, diagnosis and rebalancing is conceivable.
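Simulation-based monitoring can in essence be treated as a residual check: the measured value is compared with the value that a (possibly surrogate) simulation model predicts for the same operating point. The following sketch assumes a callable prediction model and an illustrative relative tolerance; it is not tied to any particular rotor-dynamics code, and the surrogate relation shown is made up.

from typing import Callable, Dict

def residual_check(measured: float, operating_point: Dict[str, float],
                   predict: Callable[[Dict[str, float]], float],
                   rel_tolerance: float = 0.15) -> bool:
    # True if the measurement deviates from the simulated expectation by more than the tolerance.
    expected = predict(operating_point)   # e.g. a fast surrogate of the rotor-dynamics simulation
    residual = abs(measured - expected) / max(abs(expected), 1e-9)
    return residual > rel_tolerance

# Made-up surrogate: expected shaft vibration as a simple function of speed and load.
surrogate = lambda op: 0.002 * op["speed_rpm"] + 0.5 * op["load_fraction"]

op_point = {"speed_rpm": 1500.0, "load_fraction": 0.8}
print(residual_check(3.9, op_point, surrogate))  # False: close to the expected 3.4
print(residual_check(5.5, op_point, surrogate))  # True: deviation flagged for diagnosis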
However, not only the product system, but also the digital twin itself must be considered in the context of maintenance. As a software system, the digital twin
must be kept up-to-date in its components. Experience shows that, on the one hand, new functionalities are established in the digital twin over the life of the product and, on the other hand, new findings on the security of the IT components of the system have to be considered.
5 Repair with Digital Twins In the context of repairs, the digital twin offers the possibility of carrying out repair measures in a more targeted and efficient manner. Here, too, the basis is a better understanding of the system’s behavior in the field. While in classical approaches the repair had to start with an on-site inspection, with digital twins the necessary repair processes and spare parts can be procured in advance. In addition, by predicting the further system behavior, it is possible to determine more precisely when a repair will be necessary, or what the future consequences of a repair not being carried out would be. Thus, system operators and manufacturers can make mature, data-based decisions, for example about shutting down the system in order to enable the repair measures. Taking the example of the electric drive, the condition of the installed bearings is continuously monitored. In this case, the knowledge about the condition assessment based on the behavior-describing sensor data lies with the bearing manufacturer. The corresponding models are used by the manufacturer within the framework of the digital twin to provide the operator with information on the system status. The bearings are monitored on an ongoing basis and a replacement of the bearings is arranged with the operator at a suitable time, sufficiently in advance. But the digital twin can also provide support for the repair processes themselves. The information about the installed subsystems and components, as well as their condition, makes it possible to design adaptive repair processes that enable the installers to implement the measures as efficiently as possible. Disassembly and assembly processes can be provided, and interaction systems can be used to record damage patterns. The replacement of bearings on the electric drive is supported by an interaction system. Information on abnormalities and damage is collected in a dialogue system. Disassembly and assembly are supported by a process system that guides the assembler in their activities. For this guidance, the respective digital twins of the engine and the twins of the bearings (old and new) are used. The information collected is added to the digital twins of the motor and the bearing. It is particularly necessary to update the digital master of the digital twin according to the repair carried out. This applies, on the one hand, to the update of the installed components if an exchange has taken place and, on the other hand, to the behavioral description of any adapted components or attributes; see the right-hand side of Fig. 5 below. Depending on the system under consideration, detailed descriptions of the changes should be made down to the material level in order to be able to update any simulation models. If geometries have been changed, it may make sense to use technologies to digitise the new geometry. In addition to these updates,
Fig. 5 Digital Twin evolution over time
the normal behavior of the machine must also be re-determined, as described above. Neglecting this required activity leads to misinterpretation of the affected sensor values or to large corrective action expenses (Fig. 5).
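A minimal sketch of this update step is given below: the replaced component is recorded in the digital master and the normal-behavior baseline is re-derived from shadow data recorded after the repair, so that later deviation detection does not work against an outdated reference. The field names, the simple mean-and-spread baseline and all values are assumptions for illustration.

from statistics import mean, pstdev
from typing import Dict, List

def apply_repair(master: Dict, old_part_id: str, new_part_id: str,
                 post_repair_samples: List[float]) -> Dict:
    # 1. Record the component exchange so that bill of material and simulation models stay consistent.
    master["components"] = [new_part_id if c == old_part_id else c for c in master["components"]]
    master.setdefault("repair_log", []).append({"removed": old_part_id, "installed": new_part_id})
    # 2. Re-determine the normal behavior from post-repair shadow data; because of run-in
    #    effects this should be repeated at intervals, as noted in the text above.
    mu, sigma = mean(post_repair_samples), pstdev(post_repair_samples)
    master["vibration_baseline"] = {"mean": mu, "tolerance": (mu - 3 * sigma, mu + 3 * sigma)}
    return master

# Illustrative master and post-repair vibration readings (all values invented).
motor_master = {"components": ["bearing_DE_v1", "bearing_NDE_v1", "stator_v2"]}
apply_repair(motor_master, "bearing_DE_v1", "bearing_DE_v2",
             post_repair_samples=[1.02, 0.98, 1.05, 1.00, 0.97])
print(motor_master["vibration_baseline"])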
6 Overhaul with Digital Twins In the context of overhauling systems, the same framework conditions initially apply as in repair (Monitor, Execute, Update). In the course of the overhaul, however, more comprehensive changes are generally made to the electrical drive system. Consequently, it can be assumed that the changes to the digital master will be significantly more comprehensive than in the case of repair. If necessary, new development data, or even new data generators, are introduced into the digital twin. Here, the interfaces of the system must be considered: the new models and components must fit into the existing structures. Last but not least, this motivates a conception of the Digital Twin in a form that can map the dynamics of the changing system. Given the large simultaneous changes to the physical good and its twin, knowledge of the system’s inner structure and dependencies is of great interest. Semantic descriptions, as captured in ontologies, track these dependencies and support the identification of matching components or data. For digital twins, even an automated update may be considered. When considering the processes of overhauling systems, the use of digital twins can be even more pronounced than in repair. Adaptive production processes are conceivable here, which only carry out the necessary measures depending on the individual damage pattern and thus promote the most comprehensive possible continued use of the system’s components. The starting point here is also the product state represented by the digital shadow. This is used to first determine that an overhaul of the system is necessary. Furthermore, these data can be used to estimate costs, effort and personnel requirements for the upcoming overhaul. To further detail the necessary measures, the image of the system that can be captured in the field must usually be refined. This can be done, for example,
using geometry-describing technologies (scan, CT, etc.) or behavior-describing technologies (measuring and testing systems). These technologies generally cannot sensibly be integrated into the product systems themselves, either for structural reasons or due to cost. Nevertheless, these non-product-integrated technologies also create a digital shadow of the product system. On the basis of the digital shadow created, it is possible to specify the necessary measures for the overhaul. On the one hand, the corresponding processes can be tailored to the components; on the other hand, machine processes can be adapted to the actual state of the components, although at a significantly higher cost. This can be seen, for example, in adaptive manufacturing processes that consider the actual geometry of the components. In the same way, the approach can also be applied to abrasive processes. The replacement of individual components in a system, instead of the replacement of an entire system, can also be carried out using suitable analyses and appropriate measurement technology. When the system is rebuilt and reassembled, production and assembly data are recorded as Digital Shadows and added to the product twin as a new Digital Master. In this way, the Digital Twin of the product can be adapted to the new properties, functions and behavior.
7 Conclusion For maintenance, repair and overhaul, the advantage of using a digital twin is the individualized solution. In the case of electric drives, it is possible to extend maintenance cycles, to replace only components that are actually defective, and ultimately to increase the availability of experts for service activities. In the example of electric drives, companies benefit from improved knowledge of the current condition as well as from more reliable use of the electric drive itself. The customer benefits from a lever to reduce maintenance and repair costs. However, the creation and maintenance (e.g. continuous safety updates or product improvements) of a digital twin involves considerable effort. Therefore, each dimension (such as update frequency or computational and simulation effort) of the Twin-To-Be solution must be weighed comprehensively against the actual user needs on the one hand and the installation and maintenance effort on the other.
References
1. Riedelsheimer, T., Lünnemann, P., Wehking, S., et al. (2020). Digital Twin Readiness Assessment: Eine Studie zum Digitalen Zwilling in der fertigenden Industrie. Fraunhofer Verlag.
2. Liu, Z., Meyendorf, N., & Mrad, N. (2018). The role of data fusion in predictive maintenance using digital twin. AIP Conference Proceedings, 1949, 020023. https://doi.org/10.1063/1.5031520
3. Qi, Q., & Tao, F. (2018). Digital twin and big data towards smart manufacturing and industry 4.0: 360 degree comparison. IEEE Access, 6, 3585–3593. https://doi.org/10.1109/ACCESS.2018.2793265
4. Uhlmann, E., Hohwieler, E., & Geisert, C. (2017). Intelligent production systems in the era of Industrie 4.0 – Changing mindsets and business models. Journal of Machine Engineering, 17, 5–24.
5. Werner, A., Zimmermann, N., & Lentes, J. (2019). Approach for a holistic predictive maintenance strategy by incorporating a digital twin. Procedia Manufacturing, 39, 1743–1751. https://doi.org/10.1016/j.promfg.2020.01.265
6. Luo, W., Hu, T., Ye, Y., et al. (2020). A hybrid predictive maintenance approach for CNC machine tool driven by Digital Twin. Robotics and Computer-Integrated Manufacturing, 65, 101974. https://doi.org/10.1016/j.rcim.2020.101974
7. Aivaliotis, P., Georgoulias, K., Arkouli, Z., et al. (2019a). Methodology for enabling Digital Twin using advanced physics-based modelling in predictive maintenance. Procedia CIRP, 81, 417–422. https://doi.org/10.1016/j.procir.2019.03.072
8. Aivaliotis, P., Georgoulias, K., & Chryssolouris, G. (2019b). The use of digital twin for predictive maintenance in manufacturing. International Journal of Computer Integrated Manufacturing, 32, 1067–1080. https://doi.org/10.1080/0951192X.2019.1686173
9. Siemens AG. (2021). SIMOTICS HV HP – Media gallery. https://new.siemens.com/de/de/produkte/antriebstechnik/elektromotoren/hochspannungsmotoren/simotics-hv-hp.html. Accessed 1 Dec 2021.
10. Melesse, T. Y., Di Pasquale, V., & Riemma, S. (2020). Digital twin models in industrial operations: A systematic literature review. Procedia Manufacturing, 42, 267–272. https://doi.org/10.1016/j.promfg.2020.02.084
11. Stark, R., Fresemann, C., & Lindow, K. (2019). Development and operation of digital twins for technical systems and services. CIRP Annals, 68, 129–132. https://doi.org/10.1016/j.cirp.2019.04.024
12. Negri, E., Fumagalli, L., & Macchi, M. (2017). A review of the roles of digital twin in CPS-based production systems. Procedia Manufacturing, 11, 939–948. https://doi.org/10.1016/j.promfg.2017.07.198
13. Lünnemann, P., Wang, W. M., & Lindow, K. (2019). Smart Industrial Products: Smarte Produkte und ihr Einfluss auf Geschäftsmodelle, Zusammenarbeit, Portfolios und Infrastrukturen. München.
14. Riedelsheimer, T., Gogineni, S., & Stark, R. (2021). Methodology to develop Digital Twins for energy efficient customizable IoT-Products. Procedia CIRP, 98, 258–263. https://doi.org/10.1016/j.procir.2021.01.040
15. Riedelsheimer, T., Lindow, K., & Stark, R. (2018). Feedback to design with digital lifecycle-twins: Literature review and concept presentation. In D. Krause (Ed.), Symposium Design for X. Institut für Technische Produktentwicklung (pp. 203–214). Universität der Bundeswehr München.
16. Bergmann, A., & Lindow, K. (2019). Use of digital twins in additive manufacturing development and production. In D. Croccolo, J. S. Gomes, & S. A. Meguid (Eds.), M2D 2019: Mechanics and materials in design (pp. 40–41).
17. Mobley, R. K. (1999). Root cause failure analysis (Plant Engineering Series). Newnes.
18. Kubiak, T. M., & Benbow, D. W. (2017). The certified Six Sigma black belt handbook.
19. Falekas, G., & Karlis, A. (2021). Digital twin in electrical machine control and predictive maintenance: State-of-the-art and future prospects. Energies, 14, 5933. https://doi.org/10.3390/en14185933
Pascal Lünnemann is a research associate at the Fraunhofer Institute for Production Systems and Design Technology. Parallel to working as a designer and PLM engineer in the field of high-performance electric motors, he completed his studies in mechanical engineering and automotive engineering. Pascal Lünnemann researches the analysis and design of engineering environments for product creation in both applied and basic research. He designed and follows an integrative approach to the holistic modelling of digital value creation. Within this framework, Mr. Lünnemann also investigates the influence of digital twins as a source of information and as a design object of product development. In addition, the following topics are among his main areas of research: design of PLM concepts, collaboration in engineering, establishment of company-specific MBSE environments, information integration through semantics, use of artificial intelligence in product development, and the analysis and optimisation of digital value creation.
Carina Fresemann holds a degree in mechanical engineering and a PhD (Dr.-Ing.) with a focus on engineering data management. She worked in the aviation industry with Airbus and Airbus Helicopters for several years. Her focus was on the engineering-production interface as well as engineering configuration management. She was responsible for PDM tool implementation processes with a focus on change management. Carina Fresemann then moved to the Technical University of Berlin, Chair of Industrial Information Technology. For the past four years she has led a group of researchers in the area of virtual product creation with a focus on model-based engineering. As part of these research activities, the establishment of digital twins and their application has been considered in multiple research projects with and without industry participation.
Friederike Richter holds a degree in mechanical engineering and has a long history of working within SIEMENS Large Drive Applications with specialized electrical motors. During her career within SIEMENS, she excelled in different roles and functions such as design, development and product management for the high-end electrical motor portfolio. In her current role as the lead of a publicly co-funded research project, Mrs. Richter concentrates on the electrical drive systems of the future. With her strong ties to SIEMENS and her background in the industrial value chain and digitalization, she now focuses on developing holistic digital business models (from cradle to grave) that utilize digital twins as the main enabler, while also addressing topics such as the fully digitized value chain, data security and approaches to commercialization (data availability, legal framework and data management).
Digital Twins of Complex Projects Bryan R. Moser and William Grossmann
Abstract The Digital Twin (DT) concept has rapidly gained acceptance. More recently it has become clear that just as DTs support products, they can also represent the activities needed to design, execute, and manage projects. DTs offer a remarkable potential to bring complex projects to market successfully and to support after-market phases including training, maintenance, repair and retirement. Project Design is a method that brings DTs to project management based on realistic and reliable project models, forecasts, and ongoing instrumentation. In Project Design, the digital twin represents not only the products, services, and processes being created, but also the project teams and their activities. In other words, the project itself is recognized as a system. Digital models are extended to include people and organization in addition to product and process. Feedback and feedforward with automated flows are a critical characteristic of DTs, leading to better attention, decisions, and actions by teams. Three cases are shown which demonstrate Project Design with digital models, digital projections, and digital shadows of complex projects. These cases show a collaborative environment in which teams build models that capture the project as a sociotechnical system. The models integrate three fundamental contributing architectures: products, processes, and organization (PPO). While building the model, a view emerges of the relationships amongst these three which, in turn, promotes shared awareness of the project across teams. The model-building likewise shapes mental models as teams explore the impact of changes, variation in assumptions, architectural options, and other real-world execution parameters. An analytics engine, in this case an agent-based simulator, generates project forecasts which act as digital projections.
B. R. Moser (*) System Design and Management, Massachusetts Institute of Technology, Cambridge, MA, USA e-mail: [email protected] W. Grossmann Global Project Design, Europe, Berlin, Germany © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_24
The forecasts are more realistic than classic methods, as the simulations include the project’s uncertain demands, behaviors, feasibility, and coordination in dynamic interaction. A wide range of feasible project variants with schedules, costs, quality, and utilization is generated by simulation and compared to targets. Teams rapidly assess trade-offs, risks, what-if scenarios and contingencies. The model is adjusted: teams expanded or reduced, dependencies changed, activities added or removed, roles and responsibilities tuned, concurrency increased, worksites changed, etc. The project teams learn quickly how changes in their own roles, commitments, and priorities systemically impact the project results. As the project proceeds, estimates to completion, including alternate paths forward, are rapidly and easily analyzed. A long-lasting benefit of Project Design is that the DT is used over the project lifetime. A digital shadow evolves with the actual project as refinements and contingencies arise, immediately yielding new forecasts of quality, schedule and cost. Instrumentation of scope, interfaces, and teamwork brings significant new feedback to maintain alignment of the project model with the actual project. The model also acts as a digital thread across the model’s connected PPO and across changes in the project over time, promoting persistence for practical leverage of information in future projects. Recent research advances in instrumentation and analytics, including the placement of non-intrusive sensors across project elements and teamwork, rapidly reveal the health of the project and its chances for success. Taking advantage of these techniques, new insights emerge on teamwork as innovation and complex problem-solving across various industrial and government project domains. Keywords Digital Twin · Model-based project management · Project design · Agent-based simulation · Instrumented engineering teamwork
1 Digital Twins and Complex Projects 1.1 The Potential for Digital Twins The Digital Twin concept, typically focused on the technical product itself, offers methods to significantly improve performance if applied more broadly to the project itself. Overcoming century old project management as standard, rule-based, and process control of resources, an untapped potential is now within reach based on digital models, instrumentation and analytics applied to teamwork. Rather than notional schedules and idealized plans, a digital twin of a complex project reveals how teams and work align (or not) and how their behaviors and interactions are likely to lead to success or disaster. We are gaining a new view and levers to design and manage the underlying (and real-time) dynamics of teams and work. A look at large projects, particularly those targeted at critical and innovative systems, shows that a large percentage are failing – far over budget, schedule and,
even worse, many will be abandoned. Projects in the billions of dollars are affected. This situation is prevalent in complex technology, science, and infrastructure projects. Great need and perseverance are required to bring these failing projects to fruition, if they can be saved. The development and deployment of critical software platforms such as healthcare.gov, the Airbus 380 and Boeing 787 airplanes, the Berlin airport, California high speed rail… the list goes on. A recent example from science is the launch of the James Webb Space Telescope (JWST) originally budgeted at 5 billion US dollars and scheduled for launch in 2014. The JWST was finally launched in December 2021 with a final cost of 10 billion US dollars. One can say that a great need by the scientific community and perseverance by the key funding agencies were responsible for the eventual JWST launch and reaching its position at L2. Other examples include critical billion-dollar defense projects that were cancelled due to extreme delays and cost overruns. The reasons for such overruns and delays have been reported and analyzed with the insights typically the same: unrealistic project plans, changing requirements and scope, and poor project management. We begin by recognizing that an enterprise to complete a complex project is a sociotechnical system, requiring an interplay among contributors and stakeholders ranging from the customer, management, technical teams, manufacturing, and supply chain, to name a few. For complex projects, the establishment of a reliable and high-performance sociotechnical environment is a significant challenge. As described later, digital twins and the project design approach leverage models providing several valuable functions: collaborative planning, ongoing integration, dynamic ontology, forecasting, instrumentation, teamwork analytics, and persistence. Project models represent the elements and relationships among product, process, and organization (PPO), providing teams multiple and navigable views of a project from conceptualization through execution. A rich modeling vocabulary increases realism of the model, decreases detail, and improves forecasting. At any point in time the integrated PPO is a snapshot, useful for organizing information, dialogue, artifacts and events. Further, as scope, dependencies, and teams in the project are instrumented, relevant information is more readily validated, archived and available to teams. Analytics assessing the interplay between behaviors and architecture across the project allow new, insightful forecasts and predictive interventions yet unavailable for project management until recently. Others have provided studies of the potential value and barriers from digital twins more broadly [1, 8, 21]. Digital twins of projects, modeled as sociotechnical systems, are a potentially significant contributor to transformation of complex project management, as suggested in three case studies below. We are confident in the potential of model-based project design from research and two decades of field implementation and refinement. Rather than a focus on efficiency through automation (digitization and control), instead organizations will increase capacity to foresee, manage, and adapt their efforts far more dynamically. This distinction is important today, as some recent work trends promoting agile project management may be inadvertently reverting teamwork to overwrought
automated workflow and disconnected, interchangeable resources. Similarly, over-automation of manufacturing in the twentieth century showed an unexpected outcome: efficient yet narrow capability and poor quality. For complex projects, such approaches yield short-term bursts in local velocity, yet leave teams eventually unable to innovate and unlikely to coordinate beyond their own cell.

A leading systems engineer for a national aerospace initiative, cancelled after many years and $9B in spending, shared her thoughts with us in a de-brief soon after. Through the lens of classic project management, the initiative had been "well managed". Project management and systems engineering processes, documents, and meetings adhered to the known best practices. The scope, interfaces, schedules, and assignments had all been tidily documented, in great detail. And yet the project was unable to demonstrate meaningful progress. One of her reflections stood out: while teams were provided (too much) information, they had "no feeling for the dependencies," she said. What we understand now from this brilliant observation is that the approach led to limited mental models and low readiness to interact. Teams were unable to adjust local behavior for systemic outcomes. They did not develop an instinct for which of many actions, interactions, and pass-offs mattered more; when to be agile, and when not to be. By focusing on their assigned items, teams were not aware of, nor capable of, interacting and adjusting as real-world changes, lessons, and issues emerged.

Complex projects are by their nature unique. The potential advance in capability due to models and analytics applied to projects is significant. Teams become more aware of how their own actions affect critical outcomes through building and interacting with a project model. Teams with a project model can rehearse and exercise interactions across the project. They build and align their mental models with linkage to value ("Why are we doing this project in the first place, and for whom?"), discipline ("How do we judge what not to do as well as what matters today?"), and coordination ("Are we satisfying dependencies by interacting across interfaces?"). In this paper we focus on the specific opportunities and challenges for digital twins of complex projects.
1.2 Digital Models, Shadows, and Twins

Multiple recent studies have concluded that the term "digital twin" is used variously, with ambiguous definitions, although all indicate that a real-world technical object (system) is matched with a counterpart digital, or virtual, model connected to some degree by data interchange. Product Lifecycle Management (with PLM), manufacturing (with CAD/CAM), and construction (with BIM) are the domains with the most common application of digital twins [9, 11, 13, 23, 27]. Kritzinger [11] emphasizes the automation of information flow between virtual and actual systems to distinguish among three concepts: digital models, digital shadows, and digital twins. A digital model is formally disconnected from its
real-world counterpart, with only by-hand updates of the model as the real-world system evolves. In contrast, with a digital shadow the real-world system is instrumented with automated data flows so that the virtual model remains accurate to the state of the actual system. A digital twin exists, according to Kritzinger, only when data flows automatically in both directions, from virtual model to real-world system and from real world to virtual; the two remain in sync over time.
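The distinction can be summarized compactly. The following sketch is ours, not Kritzinger's notation, and is only a minimal illustration of the classification by which data flows are automated:

```python
from enum import Enum

class TwinMaturity(Enum):
    DIGITAL_MODEL = "digital model"     # data exchanged by hand in both directions
    DIGITAL_SHADOW = "digital shadow"   # automated flow from real to virtual only
    DIGITAL_TWIN = "digital twin"       # automated flow in both directions

def classify(auto_real_to_virtual: bool, auto_virtual_to_real: bool) -> TwinMaturity:
    """Classify a virtual/actual pairing by which data flows are automated."""
    if auto_real_to_virtual and auto_virtual_to_real:
        return TwinMaturity.DIGITAL_TWIN
    if auto_real_to_virtual:
        return TwinMaturity.DIGITAL_SHADOW
    return TwinMaturity.DIGITAL_MODEL

# Example: a model fed automatically by sensors but updated in the real world by people
print(classify(auto_real_to_virtual=True, auto_virtual_to_real=False).value)  # digital shadow
```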
2 Digital Projections and "People in the Loop" Project Systems

2.1 Complex Projects as Sociotechnical Systems

Others share our approach of conveying digital twins and technical work in context. Rebentisch et al. describe the evolution of digital twins, from early variants with models of technical artifacts to the more recent recognition that the leveraged benefit of digital models requires their use as part of sociotechnical systems design (STS-D) [22]. Their study emphasizes representation of the context of operations for a technical system, including those who use, benefit from, and are impacted by the engineered system. Consistent with STS-D, our approach takes a further step to emphasize not only the technical artifact but also the total project to realize the product system. That is, we recognize the engineering project itself as the system to be modeled, with representation of stakeholders and teams in addition to product and process. The project model is leveraged across a project lifecycle: early shaping, targeting, planning, realization, and validation leading to a completed project and a working technical or service solution. The related digital model, digital projection, digital shadow, and digital twin capture the project itself rather than only the product or service that emerges from the work.
2.2 Interplay Beyond All-or-Nothing Data Automation

While the Kritzinger definition of DTs above works well for purely technical systems, if projects are the systems to be twinned, a subtler examination of the information exchange between digital and actual is necessary. When people are part of the system, the flow between digital and actual is, by its nature, more than automation. Even with automated data flow, how people organize, convey, explore, and make tacit the information is relevant to outcomes. With people in the loop, the flows can be relevant and efficient, allowing people to perform collectively, or inefficient and chaotic, leading to poor performance.
A missing concept, to be added to the digital model, shadow, and twin above, is the data flow from the model of the project to the actual project: a digital projection. A digital twin, which has data flowing "automatically" from virtual to actual and back, therefore encompasses both projection and shadow. Figure 1 shows the typical interplay of digital models, analytics, and data with real-world exploration, decisions, actions, and outcomes in projects. Project data is generated both by virtual models and by real-world state. This data is the basis for many explorations, insights, and decisions. With humans in the loop, performance of the system requires a view beyond pure automated control of the project system. The various information flows between virtual and real systems are enhanced and constrained by the mental models, attention, and capacities of teams. A complex project includes teams of people organized to work and coordinate. Their attention, insights, decisions, and actions are guided but not fully controlled; the teams retain agency.¹ Thus, "automation" of the data flow with a project digital twin is better characterized by the integrity, timeliness, and usefulness of data flows and by the behavior and capability of people to pay attention and respond.
Fig. 1 Digital Twins of complex systems projects. The interplay of digital models and real-world sociotechnical systems is shown as going beyond a purely automated vs. by-hand flow of data. Rather, real-world systems and virtual models generate data, which in turn are explored through interfaces by people to gain insights, make decisions, and attempt real-world actions. (Figure based on presentation [17])
¹ Interestingly, research in manufacturing and HCI has shown repeatedly that over-automation of a sociotechnical system in dynamic environments can lead to decreased performance: human attention withers, learning slows, and quality drops.
3 Emerging Models of Project Teams and Work Since 1900

3.1 Classic Methods

Of course, models of human teams and industrial work are not new. For more than a century there have formally been models of projects showing humans as resources allocated to tasks. In other words, these models include both technical and social elements and how they interact. However, classic approaches, including Gantt charts [5], Line of Balance charts common in construction [29], and network models as used in CPM and PERT [10, 14], are limited by simplifying assumptions. These methods were envisioned for standard work, whereas modern projects are anything but standard. The classic models of projects and teamwork are useful but lack realism and practical extensibility.

Deeply held assumptions, necessary for these classic methods to be practical, avoid the real-world dynamics of human teamwork. Gantt charts capture very little about the work: a set of named tasks with start and stop. The Critical Path Method (CPM) assumes all tasks are of known, fixed duration, connected in a discrete sequence. Neither CPM nor PERT allows loops to be shown, yet iteration is a common phenomenon in creative, technical, and science projects. Meanwhile, resources in these classic approaches are often assumed to be interchangeable, replenishable, and lacking agency. The list goes on – resource contention, mistakes, error detection, exception handling, concurrent dependence, coordination, communication, learning, time zones, and travel – all are missing. The thin representation and detail in classic methods lead to project models that are difficult to create and impractical to maintain. The classic methods for project management share a limited vocabulary for representing project realities, such as the treatment of dependencies as finish-to-start milestone constraints. These limitations, in turn, drive the project models to mimic real-world phenomena through multi-layered plans and excruciating detail.

In report after report on the barriers to successful work and digital transformation, studies emphasize the central importance of people. Experts characterize limits due to embedded practices and biases, requirements for engagement by people in change, and the opportunity for synchronized performance including digital tools [2–4, 6]. If our models of work and teams follow the classic methods, these critical characteristics are missing. We can do better. A project model can be significantly more inclusive of work and team dynamics: more realistic, more predictive, and more practical. We have proposed and demonstrated a model-based approach with digital twins of projects, agent-based simulations, and teamwork analytics.
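To make these simplifying assumptions concrete, here is a minimal, illustrative sketch of a CPM-style forward pass; the task names and durations are hypothetical, and the code assumes exactly the properties criticized above: fixed durations, finish-to-start links, and no iteration or resource contention.

```python
# Minimal CPM-style forward pass: fixed durations, finish-to-start links only.
# Tasks and durations are hypothetical, for illustration.
tasks = {                     # task -> (duration in days, predecessors)
    "concept":   (10, []),
    "design":    (30, ["concept"]),
    "prototype": (25, ["design"]),
    "test":      (15, ["prototype"]),
    "release":   (5,  ["test", "design"]),
}

earliest_finish = {}

def finish(task: str) -> float:
    """Earliest finish = duration + latest earliest-finish among predecessors."""
    if task not in earliest_finish:
        duration, preds = tasks[task]
        earliest_finish[task] = duration + max(
            (finish(p) for p in preds), default=0
        )
    return earliest_finish[task]

project_duration = max(finish(t) for t in tasks)
print(f"CPM forecast: {project_duration} days")   # 85 days
# Note what is absent: rework loops, coordination effort, concurrency penalties,
# team capacities, and time zones -- exactly the dynamics argued to matter here.
```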
3.2 Development of Project Design

Models of projects with rich semantics, integrating product, process, and organization, have been demonstrated in our earlier work [16, 18] and in the cases below. A project digital model is used by cross-functional teams through a method called Project Design. A project architecture is modeled iteratively to capture scope, teams (with abilities and capacities), allocation of teams (in various roles), dependence (including concurrent and mutual flows), and priorities. The digital project model is simulated to generate forecasts. Teams examine many project concepts and scenarios, thinking together about how best to proceed. Teams uncover a pattern of when and where both work and coordination are valuable or create waste. As the digital model evolves, teams build awareness and shared mental models. Project Design has been applied over the last two decades on hundreds of complex projects. The method has been shown to improve the awareness of teams, as well as to better predict delays, waste, and rework caused by poor architecture. Three cases are shared below to show the building of a digital model, active use of a digital projection, and a digital shadow through instrumentation of teamwork.
4 Digital Models of Complex Projects

Digital twins offer us a new way of organizing and leading projects, helping to avoid the foibles faced by so many complex and strategic projects.
4.1 Model-Building During Early Planning of Projects

Long before project organizations and results are realized, stakeholders shape projects through dialogue on opportunities, charters, and plans [12]. As stakeholders contribute and agree to a charter, genuine project planning begins. The exercise to develop a model-based project plan using a digital model takes place in a workshop setting where team members collaboratively "design" a project plan. Early on, as various concepts for the project are considered, a model-based digital twin is built. Ideally all or most stakeholders in a workshop contribute according to their individual technical and/or managerial responsibility. For our research and industrial case studies, the TeamPort platform provides the fundamental building blocks for the project Digital Twin. A no-code, drag-and-drop digital blackboard is used to visually capture a project model, including organizational, product, and activity architectures. During the workshop the assembled teams propose and experiment with project organization breakdown (OBS), product breakdown (PBS), and phased work
breakdown (WBS) structures. While many aspects of the project may come from organization standards and recent norms, for complex projects the application of these norms still yields a unique project. These design choices for the project and the meaningfulness of adjustments are important if the teams involved are to become aware of, agree to, and later follow the plan that emerges. The structures are tied to one another through scope-based activities, allocated roles, and dependence. Taken together, the model describes what is to be developed (the product), how the product is to be developed (the work packages and/or processes), and who is tasked with the work (the organizational breakdown structure).

Forecasts of likely performance for a version of the project model are generated by an analysis engine such as a simulator. The digital twin platform used in this research, TeamPort [18], leverages both discrete-event and agent-based methods. Each forecast of the project shows predicted schedule, work and coordination efforts, utilization, labor costs, and other analytics unique to the simulator. Much like weather forecasting, a single scenario (as captured in a model) can be predicted using different analytic techniques. For example, a weather "spaghetti chart" for a hurricane in the Atlantic may be forecast leveraging many alternative methods, e.g., the European (ECMWF) and American (GFS) models. In a similar way for a complex project, the digital twin platform provides multiple voices of likely and potential performance.

The ultimate goal of the workshop is to generate a tradespace of plan options and select a "baseline" project plan. The baseline can be interrogated for a wide variety of project characteristics, such as team work effort and cost, and dependencies among and between the various work packages. The baseline project plan is arrived at when all workshop participants agree with the model and the project requirements of scope, cost, and schedule have been met as closely as possible. Subsequent to establishing a baseline project plan, the generated model becomes the Digital Twin project model used going forward during the life cycle of the project plan. Since the teams were collaboratively involved in rapidly building the model and generating forecasts, they are prepared to update the model and estimates to completion moving forward.
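As an illustration only (not TeamPort's internal representation or API, to which we make no claim here), a minimal project model tying product, process, and organization together might look like the following sketch; the element names and the crude effort-based forecast are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Team:
    name: str
    capacity_hours_per_week: float

@dataclass
class Activity:
    name: str
    deliverable: str                                      # PBS element this activity produces
    nominal_effort_hours: float
    assigned_teams: list = field(default_factory=list)   # OBS elements (roles)
    depends_on: list = field(default_factory=list)        # other activity names

# A toy integrated PPO model (names invented for illustration)
teams = {"powertrain": Team("powertrain", 160), "test lab": Team("test lab", 80)}
activities = [
    Activity("design engine", "engine", 640, ["powertrain"]),
    Activity("build prototype", "prototype", 320, ["powertrain"], ["design engine"]),
    Activity("cold chamber test", "test report", 160, ["test lab"], ["build prototype"]),
]

def naive_duration_weeks(act: Activity) -> float:
    """Crude forecast: effort divided by the capacity of the assigned teams.
    A real simulator would also model coordination, rework, and contention."""
    capacity = sum(teams[t].capacity_hours_per_week for t in act.assigned_teams)
    return act.nominal_effort_hours / capacity

for act in activities:
    print(f"{act.name}: ~{naive_duration_weeks(act):.1f} weeks")
```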
4.2 Case Study 1: Project Digital Model – Construction Vehicle Development

This case study leveraged a project model for the development of construction equipment. An international company known for its line of construction equipment was faced with developing a vehicle with added requirements to meet new environmental standards. The target schedule required introducing the new product 1200 days after the start of the project.
This very large vehicle included core systems, components, and controls integrated from different divisions and suppliers. The scope of the project model highlighted comprehensive prototype development and tests, which required test planning, setup, and facilities including cold chamber, electromagnetic compatibility, and shaking and vibration labs. Tight windows of availability and the costly transfer of large prototypes between test facilities in different countries were key project design considerations.

Development of the project digital model took place in a two-and-a-half-day workshop. Table 1 shows key features of scope and other project plan elements in the model. Based on the digital model, a design of the project was rapidly explored and a preferred baseline model (and plan) accepted. The approach was shown to have a positive impact on ongoing plan maintenance, behaviors, and results (Fig. 2).

Several insights can be gained from viewing how project model iterations unfolded over a 10-h period during the initial workshop. Figure 3 shows each of the (cost, time) forecasts during the session. Each point is a forecast of a project model variant. The arrows between the points show the path of generated forecasts in the workshop as the model evolved. (We refer to this path as a "design walk".) As stated earlier, a digital model platform can provide multiple analysis engines, generating different forecasts even for the very same model. In this case, two different simulation methods were used, A1 and A2. Method A1 refers to an analysis of cost and schedule with assumptions consistent with the critical path method (CPM). The A1 method ignores coordination, assuming that communication, time zones, concurrent dependence, and rework do not factor into project performance. Method A2 is a novel method in this research and incorporates coordination forecasting based on the project architecture, including concurrency, team locations, time zone differences, and probability of rework.

The model versions for these forecasts were built by changing teams, roles, and project architecture, without changing the scope of deliverables. During the design dialogue the team leaders adjusted priorities, exposed and mitigated risks, and tested assumptions of coordination and concurrency. In the top right of Fig. 3, (a) represents early improvements from adjustments to concurrency and staffing: waste was being removed. As the design session evolved, the system team leaders began to discover a frontier (b), where gains in time required a trade-off in cost.

Table 1 Summary of project model characteristics: case 1, construction vehicle
Products: Engine, Transmission, Electronics, Tests, Models, Algorithms, Simulators, Test Beds, Documentation
Teams: 24, distributed across 3 countries
# of scenarios: 17 major scenarios
Activities: 90 major activities in prototype, test, integrate cycles
Nominal effort: 13,557 h
Dependencies: 135 among and between activities
Fig. 2 Project model of case 1, showing four key prototypes and their subsystems (red squares) as built and tested across a flow of activities. The model also includes locations, teams, and roles, not shown here
Fig. 3 Forecast iterations across many scenarios, method A1 vs. A2. Each mark is a forecast of a feasible project plan. The target duration was 1200 days. The labor cost shows direct engineering team effort only
In contrast to the A2 forecasts, (c) shows the forecasts based on the A1 analysis method. The frontier revealed by A1 is many months and approximately $500,000 less than A2, and more tightly clustered. The traditional method A1 is less sensitive to changes in architecture, particularly the degree of concurrency that drives waste from waiting and rework. A1 does not predict sensitivity to factors that A2 shows to have a 1-year impact on schedule in this case.

Figure 4 shows a magnified view of design iterations and forecasts from the last 3 h of the project design session. A regulatory deadline, shown as (a), drove the team to search for a range of feasible and acceptable changes to architecture, some of which yielded unexpected increases in system duration, inconsistent with judgment by experience alone. The cost trade-off frontier (b) for the given scope became apparent. Figure 5 shows how forecasts of feasible schedule for three gateways changed during the workshop. Each design iteration is a project model variant, informed by participant judgment, exploration of the digital model, and feedback from prior forecasts. We can see the early rapid improvement in architecture (a). During the final hours of the session (b), the team leaders began to look more closely at the trade-off among these gateways. By allowing some milestones to be delayed, other more critical milestones could be accelerated with an acceptable impact on cost.
Fig. 4 Magnified view of the cost × duration frontier in case 1
Fig. 5 Forecasts of Gateways 3, 4, and 5 across design iterations, shown in order from earlier on the left (#1) to later on the right (#14)
5 Digital Projection of Complex Projects

5.1 Forecasts and Ongoing Interaction with Model, Options, and Decisions

Not only at the start of projects, but as progress is made and things change, a digital model acts as an ongoing digital projection, from which flow project state, forecasts, and sensitivities of actions and changes. People better maintain their awareness of the project condition through ongoing, interactive upkeep and simulation of the digital model. While this flow of information may be nearly fully automated, ongoing human interaction with the model is required for situational awareness to remain and for the digital projection to be effective. The case below goes beyond model-building and early baseline development, as it is an example of forecasts at project gateways and later retrospective analysis. The digital model is leveraged to validate alternative courses of action. The case also demonstrates multiple simulation engines and the predictive improvements from richer, agent-based model forecasts.
5.2 Case Study 2: Industrial Equipment

A global manufacturer planned to develop a next-generation industrial air conditioning series for four global markets. This product line was significant to the company since the current generation held significant market share. After the development project of the new generation completed – years later than originally predicted – the company asked, "What should we have been able to see at the first gateway, using the knowledge we had at that time?" In other words, could they have foreseen the project delays and cost overruns, and would a digital projection have helped? We applied the Project Design method and visual modeling tools to answer this question.

The project scope describes the demand for deliverables – in this case at the start of the project, with remaining demands that trigger activity and progress. Table 2 shows a summary of the scope of this case. Activities included standard work with five gateways (G1 through G5); regional requirements; design, engineering, and manufacturing tasks; related services; an emerging supply chain; and rollout across regional markets. Since the scope had changed during the project, three project models, one for each of these different scopes, were used to generate forecasts. Based on information available early on in the actual project, the following scope scenarios were modeled:
Table 2 Summary of project model characteristics: case 2, industrial equipment
Products: Drawings, BOMs, Components, Eng. Prototypes, Mfg. Prototypes, Build/Tests, Pilots
Teams: 17
Gateways: G1 – concept; G2 – design; G3 – engineer; G4 – manufacture; G5 – release
Scenarios: 3 scenarios representing different project scopes, each forecast using 2 methods (A1 and A2)

                 Scope 1 (Original)   Scope 2 (1 + Options)   Scope 3 (2 + Increase @ G2)
Activities       26                   37                      48
Nominal effort   ~63,000 h            ~92,000 h               ~112,000 h
Dependencies     32                   43                      55
1. Original scope: basic and common features for all global products (shown in figures as a triangle)
2. Original scope plus options requested by different regions but not confirmed (shown as a diamond)
3. Original scope, options, and a scope increase after G2 to respond to a new requirement (shown as a circle)

The most significant gateway was Gateway 4 (G4), after which decisions for significant capital spending and commitments to suppliers could not be reversed. Forecasts of the cost and schedule of G4 across the three scenarios are shown in Fig. 6, marked with a triangle for scope 1, a diamond for scope 2, and a circle for scope 3. The three scenarios predicted using A1 are millions of dollars and 15–24 months short of the actual date. The A2 forecasts show better cost and duration accuracy, with the full scope (original + options + increase) within 2 months and $200k of actual. The forecasts of G4 as reported in the actual project by the project manager at each gateway are also shown (@G1, @G2, …). Even as project reality unfolded with significant delay and cost increase, the PM's reporting was heavily influenced by the planning, which had followed a method similar to A1.

Detailed gateway schedule forecasts by methods A1 and A2 can be seen in Fig. 7. A1 forecasts are shown as dashed lines, A2 forecasts as solid lines, and actual dates as a double line. Across the forecasts, the G5 gateway forecast date varies by 3 years. The actual dates correspond most closely to the A2 method, which predicted these milestones within weeks over a 5-year schedule.

In summary, using a digital project model and the novel method A2, it would have been possible to predict the cost and completion within 5% very early (at G1). As the full scope became apparent soon after G2, a re-designed forecast would have been possible, 4 years before G5. In that case, measures to change architecture, resources, and/or scope would have provided options for project correction.
Fig. 6 Forecasts of Gate 4 (G4) as provided by the project manager at each gateway compared to forecasts generated by methods A1 and A2. Method A1 approximates classic critical path models of projects. A2 extends the model to include systemic effects due to resource contention, concurrency and distance, including coordination and rework. Even late at G4, guided by classic methods, the PM underestimated the schedule and cost
Fig. 7 Comparison of gateway schedule forecasts vs. actual. The gap between classic method forecast (A1) of the original scope and actual outcomes was 3 years. A2 method is significantly more predictive
6 Digital Shadow of Complex Projects

How does the project model, used earlier to forecast and as a digital projection, keep up with the progress and changes so typical in complex projects? How can the model act as a digital shadow? Can we also benefit from feedback on the actual attention and actions of teams?

Classic project planning methods are often matched with an elaborate accounting approach, requiring dedicated staff, detailed reporting, and heavy documentation. Usually the data collected is limited and not immediately useful to those reporting. Any insights from systemic feedback are delayed and, with the inability of the reporting system to keep up, eventually untrusted. In contrast, in manufacturing and logistics, advances in control systems that allow for low-cost, (near) real-time sensors have shown a better approach. Can't we apply the digital shadows now common for the factory and the supply chain to our projects themselves?

Instrumentation as applied to projects is a recently advancing capability that brings digital twins to project management. Instrumentation for project digital shadows is defined as the non-intrusive and automated sensing of project state, including attributes of technology, process, and organization. These sensors drive the flow of information from actual to virtual, the basis of a project digital shadow. Recent software projects leverage platforms such as JIRA to monitor scope, although they do not provide evidence of other technical and social dynamics across the architecture of most projects.
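As a simple illustration of such a scope sensor (a sketch only: the endpoint shown is the standard Jira Cloud issue-search API, but the site URL, JQL filter, and credentials are placeholders), a project shadow could periodically pull issue states into the model rather than relying on manual status reports:

```python
import requests

JIRA_URL = "https://your-company.atlassian.net"   # placeholder site
AUTH = ("[email protected]", "api-token")          # placeholder credentials

def sense_scope(project_key: str) -> list:
    """Non-intrusive 'sensor': pull current issue status from Jira's search API."""
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": f"project = {project_key}", "fields": "summary,status,updated"},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return [
        {
            "key": issue["key"],
            "summary": issue["fields"]["summary"],
            "status": issue["fields"]["status"]["name"],
            "updated": issue["fields"]["updated"],
        }
        for issue in resp.json()["issues"]
    ]

# Each call is one observation feeding the digital shadow; scheduling it (e.g., hourly)
# keeps the virtual model's view of scope current without extra reporting effort.
```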
6.1 What Is Really Happening in the Project as a Sociotechnical System?

The methods of classic project management discussed earlier, including Gantt charts, network diagrams, earned value, and other project accountings, are difficult to keep up to date while limited in their representation. Not only are the measures latent or missing (as they often are), but often the measures are also limited or wrong. For example, in Earned Value Management (EVMS), cost metrics are used to represent plan and progress and to calculate schedule and cost variance. Even if automated, the insights gained are far from a realistic (and predictive) view of the project. Although required by regulation, EVMS is known to be an inaccurate indicator in the final third of projects that have deviated from baseline. With digital twins, including instrumentation of teams and work, we can do much better.

Along with others, we have been involved in ongoing research to develop reliable sensors and an experimental framework for digital shadows of complex projects. These new capabilities include instrumented teamwork experiments of teams and planning [26], of awareness of dependencies [7], of reaction to surprise during complex infrastructure design [20], and of the role of shared strategy during site design [15]. Across these various domain challenges and teamwork phenomena, we are
seeking to define the next generation of laboratory and field methods for instrumented teamwork during complex systems challenges [19].
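For readers unfamiliar with the earned value quantities mentioned in Sect. 6.1, the standard calculations are simple; the sketch below uses invented numbers and illustrates why a purely cost-based view says little about coordination, dependencies, or rework:

```python
# Standard earned value metrics (illustrative numbers only).
planned_value = 4_000_000   # PV: budgeted cost of work scheduled to date
earned_value  = 3_200_000   # EV: budgeted cost of work actually performed
actual_cost   = 3_900_000   # AC: actual cost of work performed

schedule_variance = earned_value - planned_value   # SV = EV - PV  -> -800,000
cost_variance     = earned_value - actual_cost     # CV = EV - AC  -> -700,000
spi = earned_value / planned_value                 # SPI = EV / PV -> 0.80
cpi = earned_value / actual_cost                   # CPI = EV / AC -> ~0.82

print(f"SV = {schedule_variance:,.0f}, CV = {cost_variance:,.0f}")
print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")
# All four numbers derive from cost alone: nothing here exposes concurrency,
# dependencies, or team interactions -- the dynamics a digital shadow can observe.
```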
6.2 Case 3: Instrumentation of Teamwork During Project Design

In a third case, a model is built of an engineering prototype project and an agent-based simulator generates forecasts. These forecasts are a digital projection of the project. Consistent with the previous cases, teams involved in the project design the project by building and iterating on the model. We now introduce an additional capability along with these models: a digital shadow through product and teamwork instrumentation. With a new class of sensors and analytics, the teams – as they plan – can step back and see their own interactions: with one another, with the project targets, and with specific decisions across various project alternatives. The new insights stimulate awareness and help to answer: "Is the current pattern of interactions healthy and indicative of high-performance teamwork? Does the feedback tell us whether the project teams are likely to be successful?"

Figure 8 shows two views of a project model as designed in TeamPort software for project design. Part of a network sketch of project elements and relationships is
Fig. 8 Project model in TeamPort for case 3: electric vehicle prototype project shown with a network view in the background and three breakdown structures of the PPO in the foreground
shown in the background. In the foreground, three breakdown structure views of the same project model are superimposed: product, work, and organization breakdowns. Table 3 summarizes the project model for this case, which is a product development project to design and implement a prototype small electric vehicle. An initial project model was built by interviewing an SME and the PM and by reviewing project documents, in preparation for a 1-h project design workshop. A forecast of the initial model by an agent-based simulator generated a baseline plan of $10.8 M and 819 days. The simulator evaluated real-world dynamics including concurrency, resource contention, communication, time zones, travel, and rework. Given the baseline digital twin, 24 separate teams were gathered and challenged to explore and improve the project. These teams had not worked with the digital twin or the project modeling software prior to the session. Various changes to the project model could be considered, both elemental and architectural, for example:
• team sizes, locations, roles, and weekly schedules
• meeting patterns
• scope combinations and separation across the project
• degree of concurrency
In a 1-h workshop, these 24 teams working in parallel created 232 model variants, generating 2468 simulations. Figure 9 shows a tradespace diagram with the results by duration (x axis) and cost (y axis). Each dot is a simulated forecast based on a fully modeled, feasible project plan. The color of each dot indicates which of the 24 teams generated the forecast. The initial baseline plan forecast is shown at the intersection of $10.8 M and 819 days. This competitive exploration and improvement of the project model led to a cost reduction from $10.8 M to $9.4 M, without reducing project scope. Two teams reached $8.6 M. The duration was reduced from 819 to 704 days. One team reached 497 days, also without reducing scope. Four teams reached 650 days. These are significant and realistic improvements for a moderately complex project.

A key function of DTs is feedback on technical and social interactions (digital shadow), in addition to the forecasting and control of the system (digital projection).

Table 3 Summary of project model characteristics: case 3, electric vehicle prototype
Products: 11 products in a 3-level PBS; validated prototypes integrated with Body, Power, Interior, Chassis, Software, and PM deliverables
Teams: 19 teams in a 4-level OBS distributed across the organization and supply chain
Locations: 3 sites across three continents
Phases: 7 major phases with several shared stage gate milestones
Activities: 40 major activities, ranging from low to high complexity
Nominal effort: ~47,500 nominal effort hours
Dependencies: 55, including 3 complex, concurrent dependencies
# of scenarios: 232 model variants generating 2468 simulations in 1 h by 24 project design teams working in parallel; each variant is a feasible project plan
Fig. 9 Tradespace as generated in a 1-h workshop. Each dot is a feasible project plan. The initial baseline project model is shown at the intersection of $10.8 M and 819 days. Dot color indicates which of 24 teams working in parallel generated the project model variant
This third case also shows how to leverage the data from the instrumentation of teamwork to understand: "Why did some teams explore broadly and achieve better results? Do their problem-solving patterns and interactions tell us anything about their likelihood to perform better?" The data shown in this paper is a small subset of the feedback that a project digital shadow provides, yet it is meant to be illustrative. Figure 10 shows the pattern for each of the 24 teams separately. The project itself – with specific activities, teams, and products arranged in a project topology – is a significant contributor to the paths of exploration. In this case, we can see several common patterns and a few outliers of poor and excellent performance. How the teams behaved and interacted also mattered.

Let's zoom in a little more. Figure 11 shows four of these teams (teams 3, 4, 7, and 8) more closely. It is clear that some teams were more productive, generating more scenarios. Team 8 not only produced fewer project forecasts, but also deviated little from the baseline in both (cost, duration) outcomes and types of changes (not shown here). The pattern also shows, with team 4 standing out, several breakthrough improvements to the project with very few missteps or side explorations that led to greater cost or duration. Team 7 also shows some breakthroughs, but also learned from a few changes that increased duration or cost. Team 3 was able to show a trade-off between cost and duration, yet did not improve both (towards a Pareto frontier of improved cost and duration).
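Analytics over such a tradespace are straightforward to express. As a small, hypothetical illustration (not the platform's actual analytics, and with invented numbers), the non-dominated (Pareto) forecasts among a team's (cost, duration) points can be extracted like this:

```python
def pareto_front(forecasts: list) -> list:
    """Return forecasts not dominated by any other (lower cost AND shorter duration)."""
    front = []
    for cost, days in forecasts:
        dominated = any(
            (c <= cost and d <= days) and (c < cost or d < days)
            for c, d in forecasts
        )
        if not dominated:
            front.append((cost, days))
    return sorted(front)

# Hypothetical forecasts from one team's design walk (cost in $M, duration in days)
walk = [(10.8, 819), (10.1, 760), (9.4, 704), (9.6, 650), (8.6, 735), (9.9, 700)]
print(pareto_front(walk))   # [(8.6, 735), (9.4, 704), (9.6, 650)]
```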
Fig. 10 Pattern of exploration as indicator of teamwork performance. Tradespace diagram for each of the 24 teams in case 3 shown separately. Each cell shows a different team. Each dot is a project model forecast. Lower cost and shorter duration, towards the lower left of each grid cell, are preferred
Fig. 11 Zoomed view of four teams in case 3, revealing meaningfully different patterns of exploration
These figures show only the outcomes of each project model change, and yet for each dot we now also understand which aspects of the project were explored, changed, and discussed along the way. One can also see which aspects of the project are not in the awareness and actions of teams. In other work by our research group, the transition from each dot to the next is examined as a full conversation amongst team members, with debate, new information, conflict, and "aha" moments. Thus, instrumented teamwork, along with a model of the project, helps us to associate the problem-solving performance of teams with the context of a particular complex challenge. These are not generic throughput or process-adherence evaluations, akin to the Taylor time-motion studies, now a century old, that assume static and standard work. The project matters, along with its unique complexities, scope, and teams.
7 Digital Thread for Complex Projects

7.1 Digital Twin Role in Search and Persistence

A digital thread links project information across the project lifecycle. Within a single project model instance, the thread links the product, process, and organization (PPO) elements and how they relate through structures, dependencies, and roles [18, 28]. In addition to the formal PPO foreground information, background information such as design intent can also be associated [25]. Over time, as the project evolves and changes, including changes in the product, process, and organization, the digital thread allows search and association of related project attributes across time. Given the inherent uncertainty of projects, the digital thread provides a better basis for retrieving related information, characterizing uncertainty, and improving forecasting and decisions [24].

One area of research is the persistence of the data and information making up a Digital Twin project, for example, for the JWST. In 20 years, when the JWST, nearing the end of its life, is faced with the opportunity of a new technology that would extend that life, the original JWST Digital Twin project plan would need to be instantiated so that new design teams can, with a short ramp-up time, begin to augment the original project plan to account for the new technology initiative.
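As an illustration of the idea (a sketch only; the element names, relations, and attributes are hypothetical and do not represent a published digital-thread schema), a thread can be kept as time-stamped links between PPO elements so that related attributes remain searchable across model versions:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ThreadLink:
    source: str         # e.g., a PBS element such as "sunshield"
    target: str         # e.g., a WBS activity or OBS team
    relation: str       # "verified by", "assigned to", "depends on", ...
    model_version: str  # which snapshot of the project model this link belongs to
    recorded: date

# Hypothetical thread entries for illustration
thread = [
    ThreadLink("sunshield", "deployment test", "verified by", "v12", date(2016, 5, 1)),
    ThreadLink("sunshield", "membrane team", "assigned to", "v12", date(2016, 5, 1)),
    ThreadLink("sunshield", "deployment test rev B", "verified by", "v31", date(2019, 3, 9)),
]

def history(element: str, relation: str) -> list:
    """Search the thread: how has this element's relation evolved across versions?"""
    return sorted(
        (link for link in thread if link.source == element and link.relation == relation),
        key=lambda link: link.recorded,
    )

for link in history("sunshield", "verified by"):
    print(link.model_version, link.target, link.recorded)
```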
7.2 How to Promote Model Persistence

Persistence is the "containment of an effect after its cause is removed". In the context of storing data in a computer system, this means that the data survives after the process that created it has ended. The model-based Digital Twin for
projects contains many data types, including text, tables, and graphics, with links among and between all of them. It is clear that many of the data storage formats and technologies in use today may not exist or be supported in the future. One cannot know at present which kind of database technology appropriate for guaranteeing persistence will be available and/or preferred in the future. At one time the Library of Congress was considering digitizing and storing all records from the origin of the United States so that they would be available in the future, say 400 years hence. An admirable goal, but what guarantee do we have that we would be able to access and read this stored information at such a distance in the future? What can be done, at the very least, is to define a schema for organizing the data that would remain invariant over time. What must be avoided is holding Digital Twin project data in memory only, because it will not survive process termination. Only time will tell what future data technologies will be available to guarantee data persistence. Choices between performance and durability will be required along the way to a data persistence technology that lasts.
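One pragmatic step in this direction, sketched below under our own assumptions (the schema fields are invented), is to write each model snapshot to a plain, self-describing format with an explicit schema version, rather than keeping it only in a running process's memory or a vendor-specific store:

```python
import json
from pathlib import Path

SCHEMA_VERSION = "ppo-snapshot/1.0"   # invariant schema identifier, illustrative only

def persist_snapshot(model: dict, path: Path) -> None:
    """Durably store a project model snapshot as plain, versioned JSON."""
    record = {
        "schema": SCHEMA_VERSION,
        "model": model,   # products, activities, teams, dependencies, ...
    }
    path.write_text(json.dumps(record, indent=2))   # survives process termination

snapshot = {
    "products": ["engine", "transmission"],
    "activities": [{"name": "design engine", "depends_on": []}],
    "teams": [{"name": "powertrain", "location": "DE"}],
}
persist_snapshot(snapshot, Path("project_model_v1.json"))
print(json.loads(Path("project_model_v1.json").read_text())["schema"])
```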
7.3 Persistence Benefits Through a Digital Twin

Digital Twin data persistence can be crucial to successful project re-starts and future re-visits of past project structures. If we again use the JWST example, where the space vehicle will have at a minimum a 20-year life cycle, any major upgrade or repair of system glitches would benefit from the use of the original project Digital Twin. Valuable guidance from earlier decisions and project structure could point out approaches and planning strategies that could positively impact future upgrades or repairs of JWST systems. Guaranteeing persistence of the project Digital Twin allows immediate access to relevant data and information so that future critical tasks can be carried out.

Another benefit of persistence in project Digital Twin data and information is life cycle integrity. This feature is very similar to the one just mentioned. The key requirement for life cycle integrity is that a project's data and information should be available at any time to those whose responsibility is to maintain a continuing overview of a project's status and health. The reasons are obvious: life cycle views of complex projects support successful project execution over the lifespan of a complex project. Again, the ability to access data and information in the right form, at the right time and place, can make the difference between success and failure in large complex projects.
8 Conclusion and Future Directions

8.1 Project as a Problem-Solving Ecosystem

We have shared Project Design, a digital twin method applied to the whole project as a sociotechnical system. Model-building of projects by teams yields digital models and projections. These models offer increasingly realistic representation while avoiding the detail driven by the limited vocabulary of classic project management methods. A significant new capability is based on the ready feedforward and feedback that emerge as teams and digital models interact. In turn, readiness, execution, and learning are improved as teams contribute, update and align mental models, and rehearse their roles in the project. For complex projects – larger, more connected, and more innovative – these new methods and tools are essential for effective problem-solving and performance.

Three cases were shared which show key aspects of digital twins for complex projects: the building of project digital models, project forecasting as digital projection, and instrumentation of project elements and teamwork for a digital shadow. The project digital twin can also act as a digital thread, linking project information dynamically during the project and promoting persistence, which makes it more useful for future related projects.

Emerging techniques for instrumentation of teamwork suggest breakthroughs in analytics of the health of the project as a sociotechnical system. The application of analytics techniques will be possible with consistent and robust project information and the ontology and semantics that come from a fully formed project digital twin. In parallel, recent research on team behaviors and performance in complex projects will allow better project design with project health indicators, warnings of projects on the wrong track, and proactive project corrections.
References

1. Apte, P. P., & Spanos, C. J. (2021). The digital twin opportunity. MIT Sloan Management Review, 63(1), 15–17.
2. Baculard, L.-P., et al. (2017). Orchestrating a successful digital transformation. Bain & Company [Preprint].
3. de la Boutetière, H., Montagner, A., & Reich, A. (2018). Unlocking success in digital transformations (p. 29). McKinsey & Company.
4. Bughin, J., Deakin, J., & O'Beirne, B. (2019). Digital transformation: Improving the odds of success. McKinsey Quarterly, 22, 1–5.
5. Clark, W. (1922). The Gantt chart: A working tool of management. The Ronald Press Company.
6. Forth, P., et al. (2020). Flipping the odds of digital transformation success (p. 1). Boston Consulting Group.
7. Fruehling, C., & Moser, B. R. (2019). Analyzing awareness, decision, and outcome sequences of project design groups: A platform for instrumentation of workshop-based experiments. In E. Bonjour et al. (Eds.), Complex systems design & management (pp. 179–191). Springer.
8. Grieves, M., & Vickers, J. (2017). Digital twin: Mitigating unpredictable, undesirable emergent behavior in complex systems. In Transdisciplinary perspectives on complex systems (pp. 85–113). Springer.
9. Jones, D., et al. (2020). Characterising the digital twin: A systematic literature review. CIRP Journal of Manufacturing Science and Technology, 29, 36–52.
10. Kelley Jr., J. E., & Walker, M. R. (1959). Critical-path planning and scheduling. In Papers presented at the December 1–3, 1959, eastern joint IRE-AIEE-ACM computer conference (pp. 160–173). ACM.
11. Kritzinger, W., et al. (2018). Digital twin in manufacturing: A categorical literature review and classification. IFAC-PapersOnLine, 51(11), 1016–1022.
12. Lessard, D. R., Miller, R., et al. (2013). The shaping of large engineering projects. In International handbook on mega projects (pp. 34–56). Edward Elgar.
13. Liu, M., et al. (2021). Review of digital twin about concepts, technologies, and industrial applications. Journal of Manufacturing Systems, 58, 346–361.
14. Malcolm, D. G., et al. (1959). Application of a technique for research and development program evaluation. Operations Research, 7(5), 646–669.
15. Manandhar, P., et al. (2020). Sensing systemic awareness and performance of teams during model-based site design. In 2020 IEEE 6th World Forum on Internet of Things (WF-IoT) (pp. 1–6). IEEE.
16. Moser, B., et al. (1997). Global product development based on activity models with coordination distance features. In Proceedings of the 29th international seminar on manufacturing systems (pp. 161–166).
17. Moser, B. R., & Winder, I. (2017, February 7). Uncovering team performance dynamics with data & analytics. Big Data Tokyo.
18. Moser, B. R., & Wood, R. T. (2015). Design of complex programs as sociotechnical systems. In Concurrent engineering in the 21st century (pp. 197–220). Springer.
19. Pelegrin, L., et al. (2018). Field guide for interpreting engineering team behavior with sensor data. In International conference on complex systems design & management (pp. 203–218). Springer.
20. Pelegrin, L., Moser, B., & Sakhrani, V. (2019). Exposing attention-decision-learning cycles in engineering project teams through collaborative design experiments. In Proceedings of the 52nd Hawaii international conference on system sciences.
21. Rasheed, A., San, O., & Kvamsdal, T. (2020). Digital twin: Values, challenges and enablers from a modeling perspective. IEEE Access, 8, 21980–22012.
22. Rebentisch, E., et al. (2021). The digital twin as an enabler of digital transformation: A sociotechnical perspective. In 2021 IEEE 19th International Conference on Industrial Informatics (INDIN) (pp. 1–6). IEEE.
23. Schleich, B., et al. (2017). Shaping the digital twin for design and production engineering. CIRP Annals, 66(1), 141–144.
24. Singh, V., & Willcox, K. E. (2018). Engineering design with digital thread. AIAA Journal, 56(11), 4515–4528.
25. Suzuki, H., et al. (1996). Modeling information in design background for product development support. CIRP Annals – Manufacturing Technology, 45(1), 141–144.
26. Tan, P. S., & Moser, B. R. (2018). Detection of teamwork behavior as meaningful exploration of tradespace during project design. In International conference on complex systems design & management (pp. 73–87). Springer.
27. Tao, F., et al. (2018). Digital twin in industry: State-of-the-art. IEEE Transactions on Industrial Informatics, 15(4), 2405–2415.
28. West, T. D., & Blackburn, M. (2017). Is digital thread/digital twin affordable? A systemic assessment of the cost of DoD's latest Manhattan project. Procedia Computer Science, 114, 47–56.
29. Willis, C., & Friedman, D. (1998). Building the Empire State. WW Norton & Company.
Dr. Bryan R. Moser is Senior Lecturer and Academic Director of System Design and Management at the Massachusetts Institute of Technology (MIT) and Project Associate Professor at the University of Tokyo, Graduate School of Frontier Sciences. Prior to returning to MIT in 2014, he worked for 25 years in industry at companies including Nissan Motor Company and United Technologies Corporation, and as founder and CEO of Global Project Design, a firm pioneering software and methods for model-based project management. Bryan's contributions in teaching and research focus on engineering teamwork for complex systems problems, agent-based simulation, and the use of model-based methods to improve the performance of diverse teams. He received a bachelor's in Computer Science and Engineering in 1987 and a master's in Technology and Policy from MIT in 1989. He received his doctorate in 2012 from the University of Tokyo, Graduate School of Frontier Sciences.
Dr. William Grossmann has over 50 years' experience ranging across scientific research, engineering analysis and design, technology discovery and development, and strategic analysis and planning. He currently serves as SVP of Global Project Design, responsible for Europe, as well as Adjunct Professor, Department of Aerospace and Ocean Engineering, at Virginia Polytechnic Institute and State University. Dr. Grossmann has held positions of Aerospace Technologist at NASA's Langley Research Center; Senior Scientist at the Max Planck Institut für Plasmaphysik in München, Germany; Adjunct Professor of Applied Sciences, New York University; and Research Professor of Applied Mathematics and Associate Director of the Magneto Fluid Dynamics Division at New York University's Courant Institute of Mathematical Sciences. Additional positions have been Director of the College of Plasma Physics, International Center for Theoretical Physics, Trieste, Italy; Manager of ABB's Corporate Research Program in Engineering Systems Integration, Heidelberg, Germany; CIO for ABB Germany, Heidelberg; Chief Knowledge Officer for ABB Kraftwerke, Mannheim, Germany; and Director of Business IT Alignment, ALSTOM Power, Paris, France. Dr. Grossmann has served on numerous industry and government panels as well as several Boards of Directors, and is often a keynote speaker at international science and engineering conferences. He received his BS (1958), MS (1961), and PhD (1964) degrees in Aerospace Engineering from Virginia Polytechnic Institute and State University.
The Role of the Digital Twin in Oil and Gas Projects and Operations

Steve Mustard and Øystein Stray
Abstract The Digital Twin concept has been used for many years in the oil and gas industry, typically supporting the simulation of data for training and planning. A more expansive, advanced Digital Twin concept is now becoming the norm, providing owner-operators with a seamless, integrated end-user experience. This advanced Digital Twin combines spatial information with schematics and static data, integrated with dynamic data from multiple systems of record. The advanced Digital Twin concept is particularly appealing for offshore owner-operators who are looking to reduce costs and improve safety. Reducing costs associated with the number of people offshore can only be done safely if better quality information is available onshore. As well as conveying and visualizing information in a location- and context-aware manner, the advanced Digital Twin also supports the introduction of new technology such as virtual reality, augmented reality, mixed reality, and machine learning. These new technologies help provide better insights for owner-operators across the entire lifecycle of an offshore asset, from cost avoidance during construction, to operational improvements during training, to support for integrity management and fabric maintenance during operation. There are different maturity levels for Digital Twins in offshore oil and gas. As technology continues to advance and owner-operators increase adoption, there is the prospect of more mature Digital Twins that specify actions without human intervention. There is an opportunity to greatly streamline offshore operations, reducing costs, improving safety, and enabling rapid reaction to external conditions.

Keywords AR/VR in operations · Asset management · Automation · Dashboards · Digital Twins · Energy projects · Offshore platforms · Oil & gas · Oil & gas project management · Production platforms · Subsea operations · Virtual reality
S. Mustard (*) National Automation, Inc., Houston, TX, USA e-mail: [email protected] Ø. Stray visCo AS, Stavanger, Norway © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_25
1 Introduction

The Digital Twin concept has been used for many years in the oil and gas industry. Historically, oil and gas companies favored the process-model form of Digital Twin, which enables the simulation of data for training and planning. The more expansive advanced Digital Twin concept that owner-operators currently favor provides a seamless, integrated end-user experience. It combines spatial information (three-dimensional or 3D models, seismic and survey data, and photography) with schematics and static data. This is integrated with dynamic data from multiple systems of record. What stands out in this new approach is the focus on spatial information – specifically, the way this information is conveyed and visualized in a location- and context-aware manner specific to different user needs.

Shell's goal for the Digital Twin is to allow "the operator to identify areas needing prioritized inspection, maintenance, reducing the number of people on board and reducing the number of physical inspections in hard-to-access areas" [1]. These two objectives – reducing the number of people on board and reducing physical inspections – are similar across Digital Twin deployments at bp, Equinor, ExxonMobil, and others.
2 Value in the Oil and Gas Sector

Companies in the oil and gas industry operate in one of three phases of production: upstream, midstream, and downstream:

• Upstream, or exploration and production (E&P), companies locate sources of hydrocarbons (crude oil and natural gas) and drill wells to recover them.
• Midstream companies are responsible for hydrocarbon transportation from the wells to refineries.
• Downstream companies are responsible for refining hydrocarbons and the sale of the finished product.

While Digital Twins can provide benefits in any of these phases, it is the upstream phase that can benefit most. Adoption of the Digital Twin helps drive down costs associated with marine interventions and working offshore. In upstream, companies deploy drilling rigs and production platforms onshore and offshore. Drilling rigs are deployed temporarily to drill and complete wells. The production platforms are then used to collect and stabilize the hydrocarbons for transmission to refineries for further processing into petroleum-related products. There are many types of offshore platform; Fig. 1 shows just one example.

An offshore production environment is divided into two main areas, subsea and topsides. Environments can be wet-tree or dry-tree. A wet-tree design locates some of the heavy equipment on the seabed. A dry-tree production scenario requires a larger topsides footprint to support this equipment above the waterline. Figure 2
Fig. 1 An example of an offshore production platform. (Image from Adobe Stock, ref 295964248)
Fig. 2 A wet-tree subsea environment connected to a floating production vessel. (Image from visCo)
shows a Digital Twin with a wet-tree subsea environment. Note the production trees and manifold located on the seabed. The hydrocarbons flow from the wells through the production trees into the manifold. From there, piping called "risers" carries the hydrocarbons to the topsides.
Fig. 3 Visual representations support daily communication between onshore and offshore. (Image from visCo)
Typical processes on an offshore platform include oil and gas separation, free-water knockout, gas scrubbing, pumping, compression, and metering. After these processes are complete, the oil and gas are pumped either to export pipelines or to vessels. The operations team manages the topsides environment. This team lives either on the platform or on a nearby accommodation platform. A typical production platform may have more than 150 personnel on board (POB) at any one time. This complement includes operations staff and visiting contractors performing inspection and maintenance activities (Fig. 3).

Offshore production is expensive and hazardous. Oil and gas operators need to extract and process hydrocarbons in a cost-effective manner, while simultaneously protecting personnel and the environment. Any mechanical failure or human error could trigger a loss of primary containment (LOPC) event, which in turn could lead to fire, explosion, loss of life, and pollution. Additional uncertainties challenge oil and gas operators:

• The price of crude oil and gas varies due to market conditions outside of their control.
• The global response to climate change requires innovative changes in the extraction, production, and consumption of hydrocarbons.
• As an ageing workforce retires, the industry loses key "institutional" knowledge that may be irretrievable.

Addressing these challenges requires innovative processes and procedures. Any time there is disruption to an established best practice, it opens the door to unforeseen risk. Technology plays a key part in addressing all these challenges and modeling
Technology plays a key part in addressing all these challenges and in modeling the associated risk to better identify a path forward. The Digital Twin concept is a major tool in this evaluation. It promises to provide many benefits to offshore operations, including:
• Better access to data, in context, to help with decision making in a changing operational environment.
• The ability to perform what-if scenarios that identify the optimal response to changing business or other external conditions.
• Access to information onshore to minimize trips offshore, reducing operational cost, optimizing POB, and taking more people out of harm's way.
3 Digital Twins in Oil and Gas Projects

A typical oil and gas project creates vast volumes of data. These include dozens of 3D models, hundreds of piping and instrumentation diagrams (P&IDs), data sheets, hazard and operability (HAZOP) and layers of protection analysis (LOPA) reports, cause-and-effect diagrams, risk-based inspection reports, and so on. Many vendors are involved in the data acquisition, storage, and distribution process. While standards may exist for this data, there is a high likelihood that discrepancies and other issues will creep in.
The quality of data handed over to the operations team impacts operational effectiveness. Reviewing and validating unreliable data creates additional cost and delays. Oil and gas project vendors can apply the Capital Facilities Information Handover Specification (CFIHOS) initiative to address this problem. The International Association of Oil and Gas Producers (IOGP), an association funded by oil and gas operators, manages this initiative [2]. CFIHOS (also known as Joint Industry Programme 36) includes a handover verification and validation process that enforces data quality standards. This gives operations confidence in the final product.
Owner-operators are not yet at the point where they create a Digital Twin at the outset of a project. Typically, they create Digital Twins later, as part of a digital-transformation initiative, based on traditional forms of data. This process is evolving toward a point where operators and vendors work from a common Digital Twin at the outset.
The use of Digital Twins in projects can yield many benefits:
• Reduced cost of data production, especially rework in response to errors.
• Cost avoidance, by identifying design issues – e.g., equipment and door clearances.
• Operational improvements, by reviewing human factors more effectively – e.g., location of maintainable equipment (Fig. 4).
Fig. 4 Using a Digital Twin during a project can help avoid significant rework by allowing evaluation of material handling activities. (Image from visCo)
4 Digital Twins in Oil and Gas Operations

The primary objectives in operations are to:
• Relocate as much work as possible onshore, where it is cheaper and safer.
• Improve efficiency by removing or minimizing tasks such as data collection (documents, inspection observations, etc.) and reporting (photo processing, inspection report preparation, etc.).
• Focus effort on the right areas. Reduce routine maintenance that generates little or no benefit and redirect efforts to higher-risk or more immediate issues.
Key factors limiting these objectives have traditionally been the reliability of data (especially 3D models) and access to the right information. The aggregation of all data into one Digital Twin resolves the latter. Providing other forms of information makes discrepancies in 3D data more obvious. Of course, it is still necessary to rectify these issues, but if the operator has a rigorous process, future work is easier to perform because people trust the data.
4.1 Visualization

Maintenance and inspection involve the analysis of massive amounts of data: work-order and work-permit status, deck management, spare parts management, material handling, simultaneous operations (SIMOPS), isolations planning, and so on. Even though all the asset data refers to physical installations, there is usually no clear visual representation connecting the data to the physical plant.
Locational awareness is fundamental to understanding the big picture, but traditionally the visual element is missing. In the advanced Digital Twin concept, the establishment of a central spatial context with which all data can be associated is fundamental.
Some owner-operators focus on topsides only. More elaborate Digital Twins incorporate all installations, covering not only the surface but also subsea equipment and pipeline infrastructure. A uniform 3D environment represents the integrated facility, bathymetric data represents the seabed, and all components, including subsurface well paths and reservoir information, are placed according to the physical reality. This is a complete end-to-end visual representation of the entire field. This all-in-one spatial environment is the primary context for accessing and presenting data. It functions as a singular interface and gateway to any type of data, regardless of where that data may be (Fig. 5).
Different role-based functions have their own visualization requirements due to differences in purpose and context focus. From a visualization point of view, one size does not fit all: the people who plan work, operate in the field, or monitor status have different needs. Consequently, no single visualization technique is effective at addressing all visualization needs, and integrating multiple visualization techniques within a single visualization system has become a natural and effective way to help users explore more features of the source data.
3D construction models allow a view of equipment "as designed", but these models are not usually fully maintained after construction. As a result, engineers do not trust these models when designing changes to facilities. Owner-operators therefore capture 3D object scans using laser scans or light detection and ranging (LiDAR).
Fig. 5 Elaborate Digital Twins incorporate a wide range of data representing all installations at surface and subsea, as well as subsurface well paths and the reservoir. (Image from visCo)
They also rely on photogrammetry, a photography-based form of 3D capture, to provide a view of equipment "as built" and "as installed". Combining construction models with 3D object scans and photogrammetry provides the ability to compare "as designed" with "as built" or "as installed". Such comparisons are essential if operators are to have confidence that their Digital Twin is truly representative of the actual facility. This confidence is especially critical in subsea environments, where it is often impractical or cost-prohibitive to review the actual installation (Fig. 6).
Integrating static data, such as instrument specifications, data sheets, cause and effect diagrams, and P&IDs, into the 3D model can help users better understand the context of this information. For instance, while the P&ID may show the connection of valves and piping, the 3D model can show the important geospatial context, as well as identify any task requirements, such as scaffolding (Fig. 7).
Dynamic data comes in many forms, from real-time process information, changing every few seconds or minutes, to maintenance orders, changing daily, weekly, or monthly. Integrating this data with the static data and the 3D models allows a form of aggregation which can reveal aspects that would otherwise be difficult to deduce. The best example of this is the barrier model, or asset integrity model. Aggregating the data necessary to determine the status of a series of barriers in an LOPC scenario is a major challenge. Creating a real-time display of barrier status by linking a mature Digital Twin, with a solid underlying data model, to the appropriate data sources greatly improves safety and production performance. While conventional relational database technology could house such a data model, Digital Twin vendors are turning to knowledge graph solutions.
Fig. 6 Advanced visualization allows comparison of the “As Is LiDAR scan” with “As Built 3D model”. The red color indicates deviation between scans and 3D model. (Image from visCo)
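The kind of deviation highlighting shown in Fig. 6 can be pictured as a nearest-neighbor comparison between the as-built point cloud and points sampled from the as-designed model. The short Python sketch below is only an illustration of that idea, not a vendor algorithm; the point arrays, the 50 mm tolerance, and the function name are assumptions made for the example.

```python
# Minimal sketch: flag LiDAR points that deviate from the design model.
# Assumes both inputs are Nx3 arrays of points in the same coordinate frame.
import numpy as np
from scipy.spatial import cKDTree

def deviation_report(scan_points: np.ndarray,
                     model_points: np.ndarray,
                     threshold_m: float = 0.05):
    """Return per-point deviation and a mask of points beyond the threshold."""
    tree = cKDTree(model_points)               # index the as-designed geometry
    distances, _ = tree.query(scan_points)     # nearest model point per scan point
    out_of_tolerance = distances > threshold_m
    return distances, out_of_tolerance

# Illustrative use with random data standing in for real scan/model points.
rng = np.random.default_rng(0)
model = rng.uniform(0, 10, size=(5000, 3))
scan = model + rng.normal(0, 0.01, size=model.shape)   # small as-built drift
dist, flagged = deviation_report(scan, model)
print(f"{flagged.sum()} of {len(scan)} points exceed tolerance "
      f"(max deviation {dist.max():.3f} m)")
```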
Fig. 7 Linking 3D models with P&IDs and a location map allows users to better understand information in context. (Image from visCo)
A knowledge graph maps the relationships between data entities. These relationships are then used to analyze or display data to provide better situational awareness. Abstracting the relationships in a knowledge graph creates a solution that is much easier to maintain. Figure 8 shows a simple knowledge graph representing the relationships between various types of offshore oil and gas platform entities. Figure 9 shows a visualization of barrier status that would utilize a knowledge graph to map the relationships between the elements involved in that status.
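To make the idea concrete, the sketch below builds a tiny knowledge graph with the networkx library and walks its relationships to collect the barrier elements tied to a single vessel. The entity names and relationship labels are invented for illustration; a production Digital Twin would use a dedicated graph store and a far richer data model.

```python
# Minimal sketch: a knowledge graph of platform entities and a traversal
# that gathers the barrier elements protecting a given vessel.
import networkx as nx

g = nx.DiGraph()
# Hypothetical entities and relationships (labels are illustrative only).
g.add_edge("Separator V-101", "PSV-101", relation="protected_by")
g.add_edge("Separator V-101", "LAHH-101", relation="protected_by")
g.add_edge("PSV-101", "WO-2024-001", relation="has_open_work_order")
g.add_edge("LAHH-101", "Inspection-88", relation="last_inspected_by")

def barrier_elements(graph: nx.DiGraph, asset: str):
    """Return barrier devices linked to an asset and anything attached to them."""
    barriers = [n for n in graph.successors(asset)
                if graph.edges[asset, n]["relation"] == "protected_by"]
    return {b: list(graph.successors(b)) for b in barriers}

print(barrier_elements(g, "Separator V-101"))
# {'PSV-101': ['WO-2024-001'], 'LAHH-101': ['Inspection-88']}
```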
4.2 Workflow

Silos exist in typical upstream workflows, with data crossing functional boundaries. Onshore planners prepare work packs; field operators execute these work packs and report back on status and findings; onshore teams evaluate the overall situation, prioritize, and provide direction to planners. Because of the disparate nature of the data sources, it is difficult for these functional groups to look beyond their silos to see the impact of their work on the next link in the chain of activities.
There is a drive for owner-operators to reduce headcount offshore. This saves money, increases efficiency, and improves safety. Achieving this while maintaining safe and reliable operations requires a cross-functional effort.
Fig. 8 A simple knowledge graph showing the relationship between data entities. (Image from Steve Mustard)
An advanced Digital Twin can break down the data and functional silos and transform workflows. Key workflow objectives are:
• Support onshore planning of work. Allowing planners to see the actual environment and the 3D context of the inspection route and where the maintenance work will take place. Locational and temporal context helps planners cluster workloads and optimize resources, resulting in less time spent offshore.
• Optimize time spent offshore. Providing better, more immersive training prepares an offshore worker in advance of their trip. These training methods are described in more detail in the next section. Providing the worker with all the information they need, in context, reduces wasted time and effort and allows the worker to focus on the task at hand. Providing spatial data in the field makes it easier for the worker to locate valves, equipment, and pipelines.
Fig. 9 A barrier panel combining the 3D environment with a data table, trend curves, and a visualization of relationships within the knowledge graph provides real-time status of asset integrity. (Image from visCo)
• Reduce analysis and reporting effort. Recording data in the field directly in the 3D environment reduces time spent offshore and has advantages over traditional recording methods. The traditional approach requires subsequent analysis and collation, which are largely redundant in an integrated solution like the advanced Digital Twin. In addition, the visualization of field data together with other sources of data provides additional context that may not otherwise be visible; for example, the ability to see upcoming maintenance orders associated with a defective item. Having these orders reported from the field allows better work planning.
4.3 Training

Conventional training for operators working on offshore oil and gas facilities involves traveling offshore for supervision through a series of competency assessments. As already noted, traveling offshore is expensive and places people in harm's way. However, without a viable alternative, there is no way to ensure that personnel are competent before they begin their role.
In addition, visiting a facility offshore for short-term maintenance or inspection work requires familiarization with the location of all key equipment, major hazards, safety equipment, and evacuation procedures.
Workers who go offshore should be able to easily identify and access the maintenance item or inspection point and perform their assigned task.
Technology can play a key part in improving training capabilities, saving money, and reducing risk to personnel. Virtual Reality (VR) technology is widely used, for instance in gaming, building inspection, and real estate tours. Training in sectors such as automotive, medical, and military also utilizes VR. VR provides an immersive experience for users, and the virtual environment can be extremely realistic.
Combining VR with Digital Twins enables a virtual environment to exactly match a real environment. This ensures users can have the experience of being in the facility without the need to travel or the risk of hazardous situations. The VR experience gives users a better perception of size and distance. It also provides a level of spatial awareness that is not possible from a flat screen or 2D drawing. VR users can also meet in this virtual environment from anywhere in the world. Experts or vendors can gather from myriad locations to review tasks and walk the route without leaving their homes. Figure 10 shows an example of a user's view of a VR environment.
Existing training can vary from facility to facility, and from trainer to trainer. The results of the training also tend to be subjective, based on the view of the trainer. This can make it hard to repeat or compare the results for individuals. The use of VR and the Digital Twin for training supports better consistency. It also offers training scenarios that are repeatable, measurable, and readily compared.
Fig. 10 With an immersive experience in Virtual Reality, users can become familiar with work procedures before going offshore. (Image from visCo)
4.4 Simulation

As already noted, oil and gas owner-operators have used process models for many years. E&P companies use them to represent the physical interactions of hydrocarbon processing in software. These simulations incorporate mathematical models, approximations, and assumptions representing the processes, and are used either to train control room operators or to determine optimal settings for a process.
Integrating process simulations into the advanced Digital Twin environment yields additional benefits for owner-operators in the various phases of an offshore facility's lifecycle:
• During the early design stages (conceptual design and front-end engineering design, or FEED), engineers can model various solutions and identify the optimal arrangement before detailed engineering begins.
• The integrated advanced Digital Twin and process simulation allows operators to better plan startup, minimizing potentially unknown situations that could delay the project.
• In the operations phase, better training will result in fewer operator errors and allow the owner-operator to quickly and easily evaluate new operating parameters to meet changing business needs (Fig. 11).
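To give a sense of what such a simulation involves, the toy model below integrates the liquid level in a single separator for a chosen outflow valve setting, allowing a quick what-if comparison before anything is changed on the real plant. It is a deliberately simplified sketch with invented parameters, not a representation of any commercial process simulator.

```python
# Toy process simulation: liquid level in a separator for a given valve setting.
# All parameters are illustrative assumptions.

def simulate_level(hours: float = 2.0, dt_s: float = 1.0,
                   inflow_m3_s: float = 0.02,     # constant liquid inflow
                   valve_coeff: float = 0.015,    # outflow = coeff * level
                   area_m2: float = 4.0,
                   level_m: float = 1.0) -> float:
    """Integrate dh/dt = (q_in - q_out) / A with a simple Euler step."""
    steps = int(hours * 3600 / dt_s)
    for _ in range(steps):
        outflow = valve_coeff * level_m
        level_m += (inflow_m3_s - outflow) / area_m2 * dt_s
    return level_m

# What-if: compare two valve settings before trying them on the real plant.
for coeff in (0.010, 0.020):
    print(f"valve coeff {coeff}: level after 2 h = "
          f"{simulate_level(valve_coeff=coeff):.2f} m")
```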
Fig. 11 Presenting the output from simulations and analytics in the central Digital Twin spatial environment contextualizes the data, and transforms data and information into knowledge faster. (Image from visCo)
4.5 Hosting

A comprehensive Digital Twin, of the type needed for monitoring the performance of offshore oil and gas facilities, requires considerable computing power. The real-time rendering of 3D data such as models, LiDAR scans, and photogrammetry requires dedicated graphics processing capabilities. Supporting hundreds of users in various locations, all demanding different presentations of data from multiple systems of record, requires substantial capacity.
Real-time asset data generated on the offshore facility consists of process data (temperature, flow, pressure, level, etc.), captured in real time at rates from less than 5 s up to 15 min. This data is an essential element for Digital Twin consumption. Real-time data allows users onshore to see this information contextualized with data from other systems of record, such as:
• Work management systems – historic and future planned maintenance activities on the facility.
• Integrity management systems – historic and current integrity issues identified during inspections of the facility.
• Document management systems – vendor data sheets, piping and instrumentation diagrams, process flow diagrams, cause and effect diagrams, and bowtie diagrams related to equipment or areas of the facility.
• Advanced analytic systems – for instance, corrosion monitoring analytics that may provide insights into areas of concern in the facility.
A data center will normally host the Digital Twin (Fig. 12). The data center may be one that the owner-operator runs itself, or one owned by a cloud service provider. The systems of record may be in the same data center or elsewhere, but will almost certainly be onshore. A control system on the facility records the real-time asset data, and this data is historized in a real-time database. This database may be on the facility, onshore in a data center, or elsewhere.
Communication quality between the production facility and onshore will vary depending on many factors. Satellite communications are common, and this medium can have limited bandwidth and poor latency, but may be the only viable option. Some owner-operators may have the option of connecting to other infrastructure, for example private fiber optic networks. This can greatly increase the ability to share large volumes of data in real time. Communications between onshore offices and data centers may be over private wide area networks or over public infrastructure, secured using virtual private networks (VPNs).
Figure 13 shows a high-level overview of the integration of data sources in a Digital Twin. The Digital Twin will incorporate functionality to query systems using Application Programming Interfaces (APIs) on each of the relevant systems. The frequency of the queries will depend on the user requirements.
Fig. 12 Key elements in an offshore Digital Twin infrastructure. (Image from Steve Mustard)
Fig. 13 Integration of Digital Twin data sources. (Image from Steve Mustard)
In some cases, constant data queries at relatively high frequency (e.g., for real-time asset data) are necessary. In other instances, the queries will be ad hoc or on demand (e.g., for the document management system).
This centralized infrastructure for Digital Twins allows owner-operators to better compare the performance of their facilities. With standardized data aggregated in a central location, there is significant opportunity to compare production, productivity, safety, and other metrics with an eye toward process improvements.
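One simple way to picture the query pattern behind Fig. 13 is a scheduler that polls each system of record at its own interval while leaving others to be queried on demand, as in the sketch below. The endpoint URLs, intervals, and the fetch helper are placeholders; real integrations would use each system's actual API and authentication.

```python
# Minimal sketch: poll different systems of record at different frequencies.
# URLs and intervals are illustrative placeholders, not real endpoints.
import time

SOURCES = {
    "process_historian": {"url": "https://example.internal/api/realtime", "interval_s": 5},
    "work_management":   {"url": "https://example.internal/api/workorders", "interval_s": 3600},
    "documents":         {"url": "https://example.internal/api/documents", "interval_s": None},  # on demand only
}

def fetch(url: str) -> dict:
    """Placeholder for an HTTP GET against a system-of-record API."""
    return {"url": url, "fetched_at": time.time()}

def poll_once(now: float, last_polled: dict) -> list:
    """Return fresh payloads for every source whose interval has elapsed."""
    updates = []
    for name, cfg in SOURCES.items():
        if cfg["interval_s"] is None:
            continue  # queried ad hoc by the user, not on a schedule
        if now - last_polled.get(name, 0) >= cfg["interval_s"]:
            updates.append((name, fetch(cfg["url"])))
            last_polled[name] = now
    return updates

last = {}
print(poll_once(time.time(), last))  # first call polls every scheduled source
```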
4.6 Cybersecurity

Following the discussion of hosting and interfaces, the next question is how to ensure adequate security of the owner-operator's systems, to protect against compromise by a cybersecurity incident. The deployment of the Digital Twin described in this chapter creates new cybersecurity risks for the owner-operator:
• The connection to the facility that collects real-time asset data creates a potential new ingress point for malicious actors to leverage.
• The connection to other systems of record creates potential new exploitable vulnerabilities.
• The deployment of the Digital Twin in an offsite data center extends the perimeter of protection beyond the owner-operator's infrastructure.
The subject of cybersecurity is far outside the scope of this book, but there are some general principles that owner-operators need to consider to make their Digital Twin infrastructures secure. The zero-trust security model [3] is rapidly becoming commonplace. Zero trust, otherwise known as perimeterless security, is based on the principle that every activity is untrusted by default and must be verified. This concept lends itself well to the distributed nature of the Digital Twin infrastructure and the use of offsite data centers. Key principles for zero-trust architectures are:
• Have a clear definition of the architecture, users, devices, services, and data. Without this definition it is impossible to manage security.
• Identify users and devices. The definition of what data and services may be accessed is managed through unique identities.
• Assess user behaviors and device health. This includes verifying the application of system updates and setting up alerts that trigger on attempts to access an insecure resource.
• Enforce authentication and authorization everywhere, considering factors such as device location, device health, user identity, and user status to determine approval for a request.
Other essential elements for owner-operators to implement are:
• Cybersecurity awareness training for employees, contractors, vendors, and service providers. The owner-operator's IT department cannot manage cybersecurity in isolation. Everyone involved in the owner-operator's business, whether part of the organization or not, can be the initiating cause of a cybersecurity incident (e.g., by clicking on a malicious link in an email). They can also be the first line of defense (e.g., by identifying and stopping insecure behavior).
• Rigorous enforcement of cybersecurity requirements in contractual terms with all third parties. All parties in an owner-operator's supply chain must manage their own security.
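To make the zero-trust principles above slightly more concrete, the sketch below evaluates a single access request against identity, device health, and context before granting access. The attribute names and policy rules are illustrative assumptions only, not a reference implementation of NIST SP 800-207.

```python
# Minimal sketch of a zero-trust style access decision: every request is
# evaluated on identity, device health, and context. Rules are illustrative.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    role: str                # e.g. "inspector", "planner"
    device_patched: bool     # device health signal
    mfa_verified: bool
    network_zone: str        # e.g. "corporate", "public"
    resource: str            # e.g. "realtime_asset_data"

# Hypothetical policy: which roles may touch which resources.
ROLE_PERMISSIONS = {
    "planner":   {"work_orders", "documents"},
    "inspector": {"documents", "realtime_asset_data"},
}

def authorize(req: AccessRequest) -> bool:
    """Deny by default; grant only when identity, device, and context all pass."""
    if not req.mfa_verified or not req.device_patched:
        return False
    if req.network_zone == "public" and req.resource == "realtime_asset_data":
        return False  # sensitive data only from managed networks
    return req.resource in ROLE_PERMISSIONS.get(req.role, set())

print(authorize(AccessRequest("u42", "inspector", True, True, "corporate",
                              "realtime_asset_data")))  # True
```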
4.7 Levels of Integration

The advanced Digital Twin has the potential to support all types of upstream activities, allowing people to interact with any type of data regardless of that data's location. The advanced Digital Twin's ability to support workflows is directly dependent on its level of integration with these types of data, with schematics, and with the spatial environment, and this in turn affects the end-user experience.
Seamless integration of disparate data sources depends on consistent syntactic encoding: in a seamless integration, sources rely on the same numbering system and consistent use of syntax across the asset. The inspection and maintenance scenario explained earlier was based upon a high level of connectivity and interconnection between 3D and underlying data. But it also assumes an owner-operator uses a single structured numbering system as a reference point. All 3D engineering models, and other tag-based systems of record, will thus align according to a consistent numbering system. Connecting external data sources and components in the spatial environment allows users to see data in relation to its context, regardless of where the data lives.
Unfortunately, this is not the case for most existing assets. As the table below indicates, the challenges vary with the level of integration:
3D data
• Level 1: Lack of 3D models; poor quality or fidelity in 3D models; unmanaged image files, point clouds, and LiDAR scans.
• Level 2: Majority of 3D models and other 3D data available but not actively maintained or aligned.
• Level 3: 3D models and other 3D data actively maintained, but alignment with other data sources not validated.
• Level 4: 3D models and other 3D data actively maintained in an evergreen state as part of the management of change process, and aligned across all data sources.

Engineering numbering specification
• Level 1: No formal specification documented.
• Level 2: Guidance provided on numbering, but no enforcement or validation.
• Level 3: Formal specification provided but not enforced; no validation of data against the guidance.
• Level 4: Formal specification mandated, compliance enforced, and data automatically validated.

Equipment tag register
• Level 1: Multiple spreadsheets or files of tag data.
• Level 2: Single spreadsheet or file maintained; no alignment of data.
• Level 3: Central database of tag information maintained; no validation against other data sources.
• Level 4: Central database of tag information actively maintained, and aligned with other data sources.

Documents
• Level 1: Non-searchable PDF documents.
• Level 2: Mixture of searchable and non-searchable documents.
• Level 3: Fully searchable documents, with no validation of data against other sources.
• Level 4: Fully searchable documents, actively maintained, and validated against other data sources.

Data sources
• Level 1: Lack of alignment with equipment tag identification.
• Level 2: Some alignment with equipment tag identification.
• Level 3: All data sources use the tag numbering specification, but no validation across systems.
• Level 4: All data sources use the tag numbering specification, actively maintained, and validated across systems.

Challenges
• Level 1: Extremely fragmented; difficult to associate data across systems; typically many standalone tools.
• Level 2: Unreliable or inconsistent data; fewer tools, but still significant additional effort required.
• Level 3: Improved consistency; workflows using other systems impacted by lack of validation between data sources.
• Level 4: Maintaining all data sources and validating them introduces extra time and effort into existing processes; hard to change long-standing behaviors.
For brownfield assets, each system of record may use its own numbering system for a given asset, and these numbering systems may lack full alignment. This limits the potential for operational efficiency improvements. Introducing additional elements to link these data sources addresses this issue. Many owner-operators have different numbering systems and standards that vary across regions. Some may vary from one generation of facility to the next, necessitating additional data alignment.
As noted earlier in this chapter, a major challenge for Digital Twins is 3D engineering model quality. These models are created in the construction phase to support the building of the facility. After the facility enters operation, maintenance of these models may lapse, leading to a lack of trust in the data. Engineering and modification work is thus heavily dependent on LiDAR scans or photogrammetry. Updating 3D models can be daunting, and outdated models reduce the benefits of an advanced Digital Twin. However, there are other ways to address this problem. A simplified 3D model based upon general assembly drawings can represent the shape of the asset. This simplified model provides a geo-spatial reference point for all LiDAR scans, panorama photographs, and other data (Fig. 14). While this is an imperfect solution, it provides a starting point from which to continually enhance the content. Collecting data during every maintenance or survey activity refines the Digital Twin representation. For example, subsequent workflows can benefit from highlighting and tagging part of a LiDAR scan and linking it to data in other systems of record. The Digital Twin representation continues to develop with every activity, and the benefits to users grow in proportion to this development.
Whatever the starting point for an advanced Digital Twin, if it is to continue to deliver benefit it is essential to keep it in an evergreen state. The moment a user loses trust in the data, it will quickly fall out of use. Fortunately, owner-operators already have well-defined management of change (MoC) procedures that require updates to documentation and data. It is essential to enforce these procedures, and to ensure that they include updating the advanced Digital Twin environment.
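As a small illustration of the tag alignment problem described above, the sketch below checks tags against a single (hypothetical) numbering specification and reports tags that appear in one system of record but not another. The tag format and sample registers are invented for the example.

```python
# Minimal sketch: validate tags against a numbering specification and
# cross-check two systems of record. The format AA-NNN[A] is an assumption.
import re

TAG_PATTERN = re.compile(r"^[A-Z]{1,4}-\d{3,5}[A-Z]?$")   # hypothetical spec

def invalid_tags(tags):
    """Tags that do not comply with the numbering specification."""
    return [t for t in tags if not TAG_PATTERN.match(t)]

def misaligned(system_a, system_b):
    """Tags present in one system of record but missing from the other."""
    a, b = set(system_a), set(system_b)
    return sorted(a - b), sorted(b - a)

engineering_model = ["PSV-101", "LT-2001A", "pump 3"]      # 3D model tag register
maintenance_system = ["PSV-101", "LT-2001A", "P-3001"]     # CMMS tag register

print(invalid_tags(engineering_model))                     # ['pump 3']
print(misaligned(engineering_model, maintenance_system))   # (['pump 3'], ['P-3001'])
```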
Fig. 14 A simplified model representing the outline with elevations functions as a geo-spatial reference point for LiDAR scans, improving spatial awareness. (Image from visCo)
5 The Future of Digital Twins in Oil and Gas

The spatial information found in advanced Digital Twins contributes to better analysis and decision making. The advanced Digital Twin is much more than another 3D model viewer: it can run analytics on 3D data. Using these 3D analytics in conjunction with conventional data analytics provides additional insights, such as:
• Comparison of 3D models and LiDAR scans to provide automatic deviation reports.
• Smart navigation instructions to help workers in the field.
• Analysis of flow direction through pipes.
• Using LiDAR scans to automatically assess integrity at key locations, such as where pipe dimensions or materials change.
• Clash detection during material handling planning.
• Automated assessment of storage space and cargo placement.
• Automating and optimizing isolation plans to combine multiple activities.
In short, combining conventional analytics with 3D analytics provides users with enhanced situational awareness.
Advanced Digital Twins can operate at one of many stages of maturity. Quantifying this maturity provides a better understanding of what is possible (Fig. 15).
Fig. 15 By displaying analytics from multiple sources in the central spatial environment, users can see all relevant analytics presented in context, in a single view. (Image from visCo)
DNV GL, an international accredited registrar and classification society headquartered in Høvik, Norway, has produced a maturity model for Functional Elements (FEs) of Digital Twins as part of its Recommended Practice DNVGL-RP-A204, Qualification and assurance of Digital Twins [4]. The DNV GL model defines five levels of maturity:
• Descriptive: The FE can describe the current state of the system or asset. Real-time data streams are available from the asset; the FE describes the real system and provides status, alarms, and events; and it can be interrogated to provide information about current and historical states.
• Informative: The FE can present diagnostic information, such as health or condition indicators, and can support the user with condition monitoring, fault finding, and troubleshooting.
• Predictive: The FE can predict the system's future states or performance and remaining useful life, and further enriches health and condition indicators to support prognostic capabilities.
• Prescriptive: The FE can provide prescriptive or recommended actions based on the available predictions, and evaluates the implications of each option and how to optimize future actions without compromising other priorities.
• Autonomous: The FE can replace the user by closing the control loop to make decisions and execute control actions on the system autonomously. The user may have a supervisory role over the FE to ensure that it performs as intended.
Oil and gas operators are currently developing and using Digital Twins at the descriptive and informative levels.
Descriptive Digital Twins collate real-time asset data and merge it with other sources of data from systems of record, such as work management. Given sufficient time, it is possible to obtain insights from these systems of record individually, but this is usually impractical. Informative Digital Twins extend the capabilities of descriptive systems by using the same source information but deriving additional insights that would not be observable in any individual system of record; for example, presenting current asset health indicators based on a combination of data.
Predictive Digital Twins are rare today, but they will become more common as owner-operators extend the capabilities of their descriptive or informative Digital Twins. Informative Digital Twins provide current health indicators, which can help plan immediate remediation work, e.g., essential tasks due in the next 30–90 days. Predictive Digital Twins take this concept further to provide future predictions of these health indicators. These future predictions will allow planning of remediation work on a longer time horizon, e.g., over the next 3–5 years. This longer-range planning will enable better use of resources as well as provide a focus on the real areas of concern. This focus should help minimize the likelihood of a major incident due to, for instance, corrosion. It may also extend the useful life of some equipment. For example, identifying pumps that are operating more frequently than others can lead to adjustment of operational regimes and better maintenance, both of which can extend the life of the pumps in question.
In predictive Digital Twins, fabric maintenance teams are the Subject Matter Experts (SMEs) in integrity management. They use the insights to recommend essential actions. For instance, a predictive Digital Twin may report that a pump has been running for a certain period and needs servicing within the next 3 months based on current usage. Scheduling a maintenance order to perform the service may be an action arising from the review of this information. In a well-designed predictive Digital Twin, the SMEs provide feedback on the insights to improve the quality of future insights. For example, after investigating a recommendation to service a pump, it may become clear that there is additional data that would have led to a different conclusion. In the future, the predictive Digital Twin will take this additional data into account (Fig. 16).
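A stripped-down version of the pump example might look like the following sketch, which projects a service-due date from historian run-hours and flags whether it falls inside the planning window. The service interval, usage figures, and field names are illustrative assumptions, not values from any particular vendor's model.

```python
# Minimal sketch: predict when a pump will need servicing from its run-hours.
# Service interval, usage rate, and thresholds are illustrative assumptions.
from datetime import date, timedelta

def predict_service_date(run_hours_since_service: float,
                         avg_run_hours_per_day: float,
                         service_interval_hours: float = 4000.0,
                         today: date = date(2024, 1, 1)) -> date:
    """Project the date at which cumulative run-hours hit the service interval."""
    remaining_hours = max(service_interval_hours - run_hours_since_service, 0.0)
    days_remaining = remaining_hours / max(avg_run_hours_per_day, 1e-6)
    return today + timedelta(days=days_remaining)

due = predict_service_date(run_hours_since_service=3400, avg_run_hours_per_day=20)
window_end = date(2024, 1, 1) + timedelta(days=90)
print(f"Service due {due}; raise maintenance order now: {due <= window_end}")
```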
Fig. 16 Predictive Digital Twins provide information to SMEs. (Image from Steve Mustard)
Prescriptive Digital Twins can remove part of the decision-making process found in predictive Digital Twins and skip straight to a recommendation, for example to create a maintenance order for completion by a certain date. SMEs can access details of how the Digital Twin arrived at this decision, but in many cases it is not necessary for a human to be involved because the action is so obvious. The SMEs may then analyze other, more complex situations or address other items in the maintenance backlog.
Autonomous Digital Twins take the user completely out of the decision-making loop and adjust processes based on insights generated from the various sources of input data. Users retain an oversight role, verifying decisions and ensuring no unintended consequences occur. As with the feedback on insights in predictive Digital Twins, users could make corrections that would better define the decisions the autonomous Digital Twin takes in the future.
It is likely to be some time before real operations use autonomous Digital Twins. Given the safety-related nature of oil and gas operations, owner-operators are likely to want several years of data from predictive Digital Twins to give them confidence that autonomous operations are safe and reliable.
6 Machine Learning and Artificial Intelligence in Digital Twins

Machine Learning (ML) and Artificial Intelligence (AI) are two terms often used interchangeably; ML is in fact a form of AI. AI is a field of computing that aims to design computer systems that mimic human intelligence. ML is a key element of AI because it allows the computer to learn on its own rather than being pre-programmed. ML and AI are key to achieving the potential of advanced Digital Twins (predictive, prescriptive, and autonomous).
There are two key forms of ML applicable to advanced Digital Twins. Supervised ML involves training software using labeled data sets. Given sufficient data, the software can then classify new data sets. In the oil and gas case, data sets could include:
• Images of new or corroded piping or vessels.
• A set of process data (temperatures, pressures, flows) for several operational scenarios.
The ML software can then receive new data sets containing the same kind of information and categorize the situations they represent. Unsupervised ML software does not learn from labeled data sets; instead, it looks for patterns in complex data sets.
One current application of ML in oil and gas operations involves using ML to analyze imagery and LiDAR scans. Advanced analytic engines predict where corrosion issues are developing, using the visual and 3D data as references.
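The supervised case can be pictured with a very small scikit-learn sketch: a classifier trained on labeled image patches (here random arrays standing in for "new" versus "corroded" surfaces) and then scored on unseen patches. This is a toy illustration only; the commercial inspection solution referred to below works on full imagery and LiDAR data with far more sophisticated models.

```python
# Toy sketch of supervised ML for corrosion detection: train on labeled
# image patches, then classify new ones. Random arrays stand in for images.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
clean = rng.normal(0.2, 0.05, size=(200, 16 * 16))     # "new" patches
corroded = rng.normal(0.6, 0.10, size=(200, 16 * 16))  # "corroded" patches
X = np.vstack([clean, corroded])
y = np.array([0] * 200 + [1] * 200)                    # 0 = new, 1 = corroded

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```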
Fig. 17 Insights from automated corrosion identification. (Image from Abyss)
Figure 17 shows an example insight generated using this method by a commercially available AI inspection solution that detects surface corrosion and classifies it according to its severity. Automated surface analysis using LiDAR scan data evaluates the profile of corrosion and quantifies the degree of blistering.
The existing method for identifying corrosion issues involves manual inspections. There are two common types of manual inspection: general visual inspection (GVI) and close visual inspection (CVI). GVI covers 100% of the inspection area at a viewing distance of 6 ft (approximately 2 m). CVI targets specific areas of interest at a viewing distance of 2 ft (just over 600 mm). An international standard, ISO 4628 (Paints and varnishes — Evaluation of degradation of coatings — Designation of quantity and size of defects, and of intensity of uniform changes in appearance [5]), defines the principles for designating the quantity and size of different coating defects such as rusting, blistering, cracking, and delamination. Inspection also involves verification of wall thickness, bolt condition, and assessment of structural components and insulation.
These manual inspections are time consuming and require an offshore presence. As noted previously, limited bed space and conflicting priorities make offshore presence challenging, even before considering the cost and safety aspects of sending people to remote hazardous locations. Various bodies provide guidance on inspection frequency.
For example, the American Petroleum Institute (API) Recommended Practice 572 (API RP 572, Inspection Practices for Pressure Vessels [6]) requires examination or testing of every pressure vessel every 5 years. Considering a typical offshore facility with a large collection of pressure vessels and piping, meeting these inspection requirements involves a permanent inspection presence offshore. Furthermore, during the recommended inspection cycle it is possible for corrosion issues to go undetected using the manual process. Such an issue can result in failure of a pressure system, which in turn could lead to an LOPC event.
The use of ML in analyzing imagery and LiDAR scans can greatly improve corrosion monitoring of offshore facilities:
• Conducting analysis onshore, away from the facility, minimizes costs.
• A computer system can perform a complete analysis of a facility automatically in a matter of days or weeks.
• The ML program is objective, consistent, and not subject to human bias and error, and so will likely identify more areas of concern.
The analysis does require up-to-date imagery and LiDAR scans but, thanks to technology advances (particularly in laser scanning camera systems and software), collecting these scans is now a routine task.
7 Future Technology Advances Supporting Digital Twins

Advances in technology continue, and there are many potential applications for future Digital Twin use cases. As previously noted, advances in LiDAR scanning camera systems and technology now allow owner-operators to collect imagery of their offshore facilities more frequently. The technology requires minimal training to operate the camera on the facility, and performing the skilled planning and processing of the data onshore minimizes POB requirements and reduces cost.
Many sectors now use Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) technologies. Integrating these technologies with Digital Twins can provide many new use cases. The terms VR, AR, and MR are often used interchangeably but describe distinct modes of operation. VR fully immerses a user in a realistic 3D representation of an environment. AR and MR both involve a user interacting with the real environment. In AR, overlaying operational data allows the user to see it in context (e.g., operating data for a pump that the user is looking at). In MR, the user can interact with a combination of real and virtual representations; for example, an inspector may overlay a model of a piece of equipment over the real equipment to compare the two.
Figure 18 shows a potential application for AR integrated with Digital Twin environments. In this scenario, using spatial data in conjunction with 3D data from the Digital Twin provides the user with turn-by-turn navigation to locate a piece of equipment. In large facilities, locating equipment can be time consuming.
Fig. 18 Using AR on Ex-certified tablets to help personnel safely navigate through a complex facility. (Image from visCo)
Fig. 19 Using MR on a HoloLens to provide procedure guidance. (Image from visCo)
This type of feature can significantly reduce wasted time as well as improve safety by providing safer routes.
Figure 19 shows a possible MR application involving Digital Twins. In this scenario, the application provides the operator with step-by-step guidance to perform a maintenance task. The guidance includes overlaying 3D model representations of equipment on the real equipment, as well as instructions. Such guidance can greatly improve safety in maintenance operations.
Many existing oil and gas operational activities involve paperwork, either:
• Collecting drawings, data sheets, and other material before performing a task, or
• Producing reports from disparate sources of data after a task.
In addition, some activities require visibility of equipment on the facility, e.g., confirming the location of terminations on a junction box. Mobile technology, such as tablets and smart glasses, in conjunction with Digital Twins, can streamline all these processes. Because the Digital Twin has access to document management systems, it provides the user with the necessary information on the mobile device, without any need to collect this data separately. Once in the field, the user can collect data on the mobile device, including:
• Inspection comments
• Measurements
• Photographs and videos
These are automatically associated with the inspection job, so report generation is automated.
Using mobile technology, especially smart glasses, allows users on the facility to share what they see with users onshore. There is no need for special skills, other than basic training on the technology. The users onshore can direct the offshore user to what they need to see. Collecting videos and photographs in real time saves additional expensive trips offshore (Fig. 20).
Animatronic robotics are mostly seen as a novelty today. However, this technology has potential when incorporated into an oil and gas Digital Twin ecosystem.
Fig. 20 The use of mobile technology-based Digital Twins greatly reduces the need to manually collect data and process results. (Image from visCo)
Fig. 21 Animatronic robots such as Boston Dynamics’ Spot can use spatial data to determine a route for their tasks. (Image from Adobe Stock, ref 446063212)
Programming robots, such as Boston Dynamics' Spot, with repetitive tasks and defined routes allows them to run autonomously, feeding positional and other data back to operators in real time. In addition, operators can control the robot in real time to perform ad-hoc tasks. Various attachments, such as digital cameras, LiDAR scanners, and gas detectors, can be fitted to robots such as Spot. When integrated with a Digital Twin, a robot could download a defined route (like the one created for the user described earlier) and autonomously navigate through a data collection task. Such data collection tasks could include recording readings on manual gauges (by taking photographs) or LiDAR scanning of areas of the facility. The robot could also potentially receive data from the Digital Twin about a condition that requires investigation and perform safety checks (such as detection of gas) before allowing humans to enter the area (Fig. 21).
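The route-and-task idea can be sketched as a simple mission description handed from the Digital Twin to a robot controller, as below. This is a generic illustration with invented waypoints and task names; it does not use the Boston Dynamics SDK or any real robot API.

```python
# Minimal sketch: a data-collection mission the Digital Twin could hand to a
# robot controller. Waypoints, task names, and coordinates are illustrative.
from dataclasses import dataclass, field

@dataclass
class Waypoint:
    name: str
    xyz: tuple                                  # position in the facility frame (metres)
    tasks: list = field(default_factory=list)   # e.g. ["photo_gauge", "gas_check"]

mission = [
    Waypoint("PumpRoom-A", (12.0, 4.5, 0.0), ["photo_gauge:PI-204", "gas_check"]),
    Waypoint("Deck-2-Riser", (30.2, 8.1, 4.0), ["lidar_scan"]),
]

def run_mission(waypoints):
    """Walk the route and report collected data back to the Digital Twin."""
    results = []
    for wp in waypoints:
        # A real controller would navigate here and execute the payload tasks.
        for task in wp.tasks:
            results.append({"waypoint": wp.name, "task": task, "status": "done"})
    return results

for record in run_mission(mission):
    print(record)
```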
8 Summary

The Digital Twin concept promises to provide many benefits to offshore operations, including:
• Better access to data, in context, to help with decision making in a changing operational environment.
• The ability to perform what-if scenarios to identify the optimal response to changing business or other external conditions.
• Access to information onshore to minimize trips offshore, reducing operational cost, optimizing POB, and taking people out of harm's way.
In projects, Digital Twins can provide many benefits:
• Reduced cost of data production, especially rework in response to errors.
• Cost avoidance, by identifying design issues – e.g., equipment and door clearances.
• Operational improvements, by reviewing human factors more effectively – e.g., location of maintainable equipment.
There are several initiatives (such as CFIHOS) that will greatly help with the quality and reliability of data produced during projects. To use Digital Twins successfully in operations, quality data from projects is essential. Once in operation, maintenance of this data is also critical. Technology helps here too: mobile technology and cost-effective, simple-to-use tools for LiDAR scan collection help oil and gas operators better manage their data.
Technologies such as VR, AR, MR, and ML are all in wide use. When combined within a Digital Twin ecosystem they can yield significant benefits, such as:
• Better quality and more repeatable training for operators before visiting offshore facilities.
• Better and safer guidance for operators on offshore facilities.
• Better insights into areas of concern for integrity management and fabric maintenance SMEs.
Digital Twins in offshore oil and gas currently tend to be descriptive and informative. These provide significant benefits to owner-operators; however, predictive and prescriptive Digital Twins are the next logical step. These advances will provide better insights and specify actions without human intervention. The autonomous Digital Twin may still be some way off, but once owner-operators grow confident in the decisions of prescriptive Digital Twins there is an opportunity to greatly streamline offshore operations, reducing costs, improving safety, and enabling rapid reaction to external conditions.
References
1. Reed, Ed. World's largest Digital Twin launched for Shell. Accessed 9 Dec 2021. https://www.energyvoice.com/oilandgas/africa/262522/akselos-shell-bonga-digital/
2. Capital Facilities Information Handover Specification, IOGP. Accessed 9 Dec 2021. https://www.jip36-cfihos.org/
3. Special Publication 800-207, Zero Trust Architecture. National Institute of Standards and Technology. Accessed 9 Dec 2021. https://csrc.nist.gov/publications/detail/sp/800-207/final
4. DNV-RP-A204 Qualification and assurance of Digital Twins. DNV GL. Accessed 9 Dec 2021. https://www.dnv.com/oilgas/download/dnv-rp-a204-qualification-and-assurance-of-digital-twins.html
5. Paints and varnishes—Evaluation of degradation of coatings—Designation of quantity and size of defects, and of intensity of uniform changes in appearance—Part 1: General introduction and designation system. ANSI. Accessed 9 Dec 2021. https://webstore.ansi.org/standards/iso/iso46282003
6. API RP 572 4TH ED (2016) Inspection Practices for Pressure Vessels; Fourth Edition. American Petroleum Institute. Accessed 9 Dec 2021. https://www.apiwebstore.org/publications/item.cgi?4e19396d-8f57-4202-bda3-871ce2f8b6d4

Steve Mustard is an industrial automation consultant with extensive technical and management experience across multiple sectors. He is a licensed Professional Engineer (PE), ISA Certified Automation Professional® (CAP®), UK registered Chartered Engineer (CEng), European registered Engineer (Eur Ing), GIAC Global Industrial Cyber Security Professional (GICSP), and Certified Mission Critical Professional (CMCP). Backed by 30 years of engineering experience, Mustard specializes in the development and management of real-time embedded equipment and automation systems and cybersecurity risk management related to those systems. He serves as president of National Automation, Inc. He was the 2021 President of the International Society of Automation (ISA) and a member of the Water Environment Federation (WEF) Safety and Security Committee. Mustard writes and presents on a wide array of technical topics and is the author of "Mission Critical Operations Primer" and "Industrial Cybersecurity Case Studies and Best Practices", both published by ISA. He has also contributed to other technical books, including the Water Environment Federation's "Design of Water Resource Recovery Facilities, Manual of Practice No. 8, Sixth Edition". Mustard's previous and current client list includes: the UK Ministry of Defence; NATO; major utilities, such as Anglian Water Services and Sydney Water Corporation; major oil and gas companies, such as bp, BG Group and Shell; Fortune 500 companies, such as Quintiles Laboratories; and other leading organizations.
Øystein Stray’s 32 years of experience in asset driven industries has grounded him in solving business performance challenges with innovative digital and visualization solutions. He is the founder, owner, CEO, Chairman of the Board, and Digital Twin Program Manager in VisCo. He is accountable for defining and driving the future digital strategy for VisCo and he is focused on helping owner-operators across industries change the way they work for the better. Stray’s mindset is formed by business process management (BPM), knowledge management (KM), situational awareness, total quality management (TQM), and user centered design solutions. He is deeply experienced in helping asset driven industries understand and utilize advanced visualization, virtual worlds, virtual reality (VR), augmented reality (AR) and mixed reality (MR), and analytics related to spatial environments in a scalable manner.
Stray has a Master of Management degree from BI Norwegian Business School, Norway, with a focus on BPM, TQM and KM. Prior to founding VisCo he held various management positions within oil and gas drilling and subsea operations, and in the maritime industry.
Part IV
Vertical Domains for Digital Twin Applications and Use Cases
Digital Twins Across Manufacturing Eric Green
Abstract Manufacturers are realizing that the saying "anything is possible" isn't just hyperbole. Today's manufacturers are dealing with disruption and change in all forms. How do you know how your manufacturing and supply chain will respond? More importantly, how can you leverage technology and "gamification" to predict how the manufacturing and supply chain will behave if "anything happens"? The answer is through the use of Manufacturing Digital Twins. Digital Twins are not limited to the product anymore. Manufacturing and Supply Chain Digital Twins exist in many companies today and represent a wide range of scope and depth of detail. Technology advancements have enabled early-stage manufacturing Digital Twins to evolve from being equipment focused to complete representations of the factory, manufacturing processes, and even the supply chain. Manufacturing Digital Twins break down data and operational silos, provide holistic views, and enable multiple strategies to test different business scenarios – in a virtual environment and without impacting the physical production and order flow. Manufacturing Digital Twins are all different, based on the industry, manufacturing model, and supporting supply chain. Companies today are reaping the rewards of early and continued investment in Manufacturing Digital Twins; others are just starting the journey. Many lessons can be learned from the history of the Manufacturing Digital Twin's evolution, the impact of technology and computing power, and how company strategy and approach affect success – or failure. "Anything is possible" as Manufacturing Digital Twins and Supply Chain Digital Twins are here. Leaders are using them to differentiate, win, and compete in today's markets. This chapter explores the Manufacturing Digital Twin's evolution, scope, key considerations, value, and benefits while providing guidance for you to succeed.
E. Green (*) Dassault Systèmes, DELMIA, Dallas, TX, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_26
Keywords Discrete manufacturing · Manufacturing · Manufacturing Digital Twin · Manufacturing modeling and simulation · Manufacturing transformation · Process manufacturing · Supply chain · Supply chain Digital Twin · Supply chain modeling and simulation · Supply chain transformation
1 Introduction

Manufacturers have faced massive challenges over the last 20 years: the Fukushima earthquake, oil price fluctuations, generational labor challenges, trade wars, the COVID pandemic, and energy shortages. We can all talk about going back to "normal", but a wise executive would realize that there is no such thing as "normal" and will anticipate future disruptions, as change is the only constant. Along with efficiency, cost per unit, operating margin and asset utilization, agility is more important now than ever before. But there's a new word of the day in manufacturing: resiliency. This new theme of resiliency must now be added to the long list of metrics that manufacturers should pursue IF they are going to be ready for the next set of disruptions.
In the face of these repeated disruptions, manufacturers are pursuing transformative initiatives, including the deployment of technologies to achieve resiliency, agility, and a step change improvement in their operations. Core to these initiatives is the pursuit of Digital Twins in Manufacturing. The Manufacturing Digital Twin can have a profound impact on manufacturing, in addition to the agility and resiliency that is so vital for business sustainability. The value of Digital Twins derives from the need for:
• Operational innovation
• Improved agility and resiliency
• Increased manufacturing efficiency
• Improved Quality and Compliance
• Accelerated New Product Introduction (NPI)
All told, the Manufacturing Digital Twin can have a positive impact on financial performance by notably reducing cost of goods sold (COGS) and increasing revenues, and executives are seizing these opportunities to improve their company's position.
It is easy to see Digital Twins as being the digital representation of physical things. Analysts and the media have focused on the potential disruption caused by the "servitization" of products (selling usage of a pump as an operating expense rather than selling the pump itself as a capital asset), as a number of chapters of this book have already highlighted. In this chapter, we will move beyond the digital representation of physical products and focus on how Digital Twins aid in manufacturing.
Digital Twins can easily represent processes for maintenance, manufacturing, or even the supply chain. Digital Twins can also represent the assets, infrastructure, and products that pass from the supplier to final customer delivery. In this chapter, this is the focus: the Manufacturing Digital Twin and its sibling, the Supply Chain Digital Twin. Yes, this goes beyond a Digital Twin of a machine to a Digital Twin of a full production line, a manufacturing or compliance process, or a whole supply chain.
In this chapter we will discuss the definition, scope, and technical considerations of the Digital Twin of Manufacturing (DTM), along with the differences between process and discrete manufacturers. The convergence and evolution of manufacturing and supply chain Digital Twins will also be explored. More importantly, we will use deployment experiences to describe business value and highlight use cases and users, all to articulate why the Digital Twin of Manufacturing is a key imperative for business today. Let's start by drilling into the definition of the Digital Twin of Manufacturing.

Key Points
The Manufacturing Digital Twin can have a positive impact on the financial performance of an organization by significantly:
• Reducing the cost of goods sold (COGS)
• Increasing revenues
Executives are seizing these opportunities to improve their company's position, to stay competitive, and to ward off disruptions.
2 Defining Digital Twin of Manufacturing

The definition of Digital Twins began with a product-centric view. As Dr. Michael Grieves described earlier in this book, he and John Vickers originated the concept of Digital Twins and defined it as:
The Digital Twin is a set of virtual information constructs that fully describes a potential or actual physical manufactured product from the micro atomic level to the macro geometrical level. At its optimum, any information that could be obtained from inspecting a physical manufactured product can be obtained from its Digital Twin [1].
The Digital Twin and its evolution initially were closely linked with Product Lifecycle Management (PLM) systems. PLM software companies, such as Dassault Systèmes, were developing Digital Twin solutions well before the term was even coined.
2.1 Early Deployments: Digital Manufacturing

Before going any further, let's consider why manufacturers were pursuing the early Digital Twin concept. Market demand for sophisticated products, with higher quality expectations and tighter fit tolerances, was rapidly increasing. In addition, trends such as miniaturization and weight reduction were greatly influencing innovation in product designs and manufacturing practices. And, of course, the pressure to maintain or reduce costs remained ever present. It is for these reasons that the early concept of the Digital Twin in manufacturing was derived. The objective was to replicate equipment in a virtual environment in order to test and validate how it would perform in the real, physical world. It would also eliminate the cost of building a physical prototype and provide savings in the form of energy consumption, reduced material or inventory loss from machine testing, and costs associated with delays in time-to-market. These early applications were (and still are) primarily used by industrial and manufacturing engineers. As technological solutions progressed, manufacturers were able to utilize applications that could create a Digital Twin of machinery, such as a CNC machine. These software applications also included simulation capabilities that capture attributes of the machine such as run rates, quality attributes across a time horizon, energy consumption, etc. The net result was a Digital Twin of the equipment and how it would operate. Simulation scenarios could then test how any changes to the Digital Twin of a machine would impact key parameters. As the modeling fidelity improved, these early Digital Twins in manufacturing could more accurately represent a piece of equipment and how it would perform under different conditions.

Sidebar: Other Early Digital Twins of Manufacturing
An early implementation of Digital Twins in manufacturing can be found in Computer Aided Manufacturing. The CAD/CAM label itself conveys the idea of twins. Engineers could design components in the CAD system and the computer would provide active assistance in figuring out how the NC machine might cut the metal, or how the pick-and-place system would insert the electronic components. Another area and usage of early equipment- or asset-based Digital Twins is maintenance. Manufacturers across a wide range of industries have deployed Digital Twins to provide Asset Performance Management (APM). Capturing detailed, near real-time data – typically from a data historian – manufacturers built sophisticated models of equipment to provide predictive and prescriptive insights into equipment maintenance (more about this in the use case section below). Process industries like oil & gas and chemicals were some of the heaviest adopters of Digital Twins because
of the wealth of historian data. Historian data is time series data that is generated at the microsecond level and can indicate trends. By leveraging this data in these manufacturing asset-based Digital Twins, process manufacturing companies have been able to assess and improve the utilization, maintenance, safety, and quality metrics resulting from the production performed by a piece of equipment in their factory. These early strides in the asset-based Manufacturing Digital Twin provided value, but had limitations:
• IT infrastructure capacity to process large amounts of data and run robust simulations limited the scope of the analysis to specific equipment, workstations or work cells.
• Because scope was typically limited for the reasons above, these early Digital Manufacturing Twins addressed finite challenges involving only a portion of the manufacturing footprint.
• The effort to create the models, set up simulations, and ultimately test different scenarios was time consuming due to the nature of the data required and the knowledge needed to translate the simulation algorithms into meaningful analysis.
While this industrial-engineering-focused evolution of Manufacturing Digital Twins was just discussed, another lane of Digital Twins was evolving in manufacturing in parallel.
2.2 MRP as an Early Digital Twin

Material Requirements Planning (MRP) systems – the precursor to Enterprise Resource Planning (ERP) systems – attempted to create twins of manufacturing. Assembly processes were structured online. Material flows were assigned (part 123 is needed at step ABC). "Standard costs" were assigned to each operation. Fixed lead times were assigned to each operation, and actual work was tracked against this virtual model, with "variances" being recorded. MRP, and later ERP, as an early form of a Manufacturing Digital Twin had significant limitations:
1. Time horizon
MRP systems are, by definition, planning systems. "MRP runs" were, at best, generated nightly and provided a snapshot of what had happened and what the 'plan' was for the upcoming shift, day or week. No true representation of what transpired between "runs" was available, nor any ability to accurately represent the variability in operational processes, since time parameters were static. In addition, the real-time state of operations was not captured.
2. Scope
MRP, and later ERP, systems are fundamentally financially focused, which resulted in a very narrow scope. If something was not financially important, it was seldom modeled, tested or tracked. For example, specific operator or
machine-level detail may be critical to the quality of the product and the efficiency of the operation; yet, these systems did not capture this information.
3. Data
The most profound limitation of these earliest representations of Manufacturing Digital Twins was the lack of detailed data and supporting business process logic. All the data that was collected by data historians, Supervisory Control and Data Acquisition (SCADA) systems, and other forms of plant-level automation was ignored. The same could be said for supplier and transportation data. Further, there was no representation of the interaction of these systems as products moved through the manufacturing process. The terms silos, firewalls, data tombs, and information islands are often used in describing the data challenges. In fact, the data chasm was typically reinforced by organizational paradigms. MRP/ERP is considered an Information Technology (IT) system and is managed by the enterprise's IT organization. All the plant-level data is typically managed by the Operational Technology (OT) team. In process manufacturing, this is typically a formal department with its own budget. In discrete manufacturing, it is typically a combination of manufacturing or industrial engineering, and 'shadow' factory resources (individuals who have another job title but focus on OT to ensure factory performance). IT and OT groups operated independently, often with little coordination across teams or systems.
4. Analytics
MRP provided a wide range of canned reports, but seldom offered insightful and actionable analytics or real-time dashboards. ERP evolved the offering to include business intelligence, but seldom actionable operational intelligence. Manufacturing users simply have not been served well by these transactional systems.
5. Simulation
MRP was fundamentally focused on the financial dimensions of manufacturing and on capturing what happened, including value added to WIP, parts consumed, products manufactured, etc. MRP provided little or no additional decision support in the form of "what-if" analyses as delivered by modern simulation programs. Engineers and planners were required to do "what-if" scenario evaluation outside the system.
6. Graphical Visualization and 3D
MRP applications were built on a structured database (SQL or its earlier precursors). Visualizing workflows, components and/or products was simply outside the scope of the system. Manufacturing engineers and operators struggled to visualize sub-assemblies and final products, or used paper in the form of work instructions and/or parts drawings to help understand the complexities of specific operations and tasks.
While MRP/ERP had profound limitations, these solutions telegraphed the need to go beyond the physical dimension of Digital Twins to include process. The Digital Twin of Manufacturing fully embraces that "physical thing" definition of the Digital Twin but goes beyond it to add a "process" dimension. What do we mean by process? The business logic that is required to:
• Manufacture a product – part A must be assembled first by a qualified operator who has already set up the machine. The setup of the machine is, of course, its own process.
• Ensure regulatory compliance – for example, in the pharmaceutical industry a batch record must be created for each batch of pharmaceutical products and must demonstrate that the products have been manufactured to current Good Manufacturing Practices (cGMP) and to the process that was validated earlier. The sign-off of the batch record is another process.
• Coordinate assembly and part replenishment – an assembly worker pulls parts from Kanban storage locations to be used in an assembly operation. This triggers a need to replenish those parts to the storage location while the operator assembles parts together in a defined sequence. Both the operator's steps and actions, and the material flows, are processes.
• Manage the company's supply chain – all potential vendors must be audited, and their lead times, prices and delivery requirements established in the Enterprise Resource Planning (ERP) system. The audit is, of course, another process.
Processes abound in manufacturing and are often the focus of continuous improvement programs such as Lean/Toyota Production System, World Class Manufacturing or Six Sigma. Larger companies seek to identify best practices (processes) and to replicate them across their manufacturing network. The process dimension of the Digital Twin of Manufacturing has gotten attention only in the last few years, as computer systems became capable of handling the complexity and variability of human interactions and system behavior. It is not a simple task to model the performance of a robot or CNC machine, but the physics are known and other 'first principle' characteristics understood. Human behavior and ergonomics in interaction with machinery or robots (cobots), however, have only been fully modeled in recent years. Computing power has now become cost-effective enough to be applied to the sophisticated process models found in manufacturing, from human ergonomics to composites, 3D printing, and even supply chain processes. Further, manufacturing is a system. A widget may be part of a machine, which is part of a line, which is part of a plant, which is part of a manufacturing network. In fact, the finished "product" of one plant is often a component of the next plant's product. And in discrete and batch manufacturing, one machine may be used in different ways or in the production of multiple products. In a vertically integrated manufacturing company, the process manufacturing and discrete manufacturing worlds can collide where both environments exist within a single plant (very much including the world of additive manufacturing) (Fig. 1). Therefore, as we think about Digital Twins for Manufacturing, we believe the incorporation of processes and best practices is instrumental and must be included in the definition. This ensures manufacturing is optimized for business performance. We see this expanded definition increasingly used by consultancies, industry analysts and solution providers.
Fig. 1 The three aspects of Digital Twins for Manufacturing: Process, MRP, and Products
With an expansive definition of the Digital Twin of Manufacturing now established, we proceed to look deeper at its characteristics.

Key Points
Specific operator or machine-level detail may be critical to the quality of the product and the efficiency of an operation; yet, early systems did not capture this information.
The process dimension of the Digital Twin of Manufacturing has gotten attention only in the last few years, as computer systems became capable of handling the complexity and variability of human interactions and system behavior simultaneously.
Another early form of Digital Twins in manufacturing can be found in Computer Aided Manufacturing. The CAD/CAM label itself conveys the idea of twins. Engineers could design components in the CAD system and the computer would provide active assistance in figuring out how the NC machine might cut the metal, or how the pick-and-place system would insert the electronic components.
3 Key Characteristics of Digital Twins in Manufacturing

What differentiates modern efforts around Digital Twins in Manufacturing from those early instances is that modern technology frees manufacturers from the constraints referenced previously. In addition, other factors resonate when we think about the current versions of the Digital Twin of Manufacturing. As computing power, artificial intelligence, machine learning, and other science-based technologies mature, the opportunity to expand beyond assets to business processes in manufacturing emerges.
3.1 Timeliness

Given the capabilities of modern edge computing, Digital Twins in Manufacturing can be "near real-time". For example, it is now possible to monitor and compare streaming real-time data against a Digital Twin of Manufacturing model with machine learning to identify anomalies in machine performance. This can lead engineers to fine-tune settings and/or plan maintenance well before disruptions occur. More recently, autonomous decision support can also make common and routine decisions, such as preventive maintenance scheduling, automatically, without the need for engineers to intervene.
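To make this concrete, the sketch below shows one simple way such near real-time monitoring could be structured: streaming readings are compared with the value the twin expects for the current operating point, and a large rolling deviation raises an alert. It is a minimal illustration in Python with hypothetical tag names, model and thresholds, not any vendor's implementation.

```python
from collections import deque
from statistics import mean, stdev

class MachineTwinMonitor:
    """Minimal sketch: compare streaming readings against the twin's expected value."""

    def __init__(self, expected_model, window=50, sigma_limit=3.0):
        self.expected_model = expected_model   # callable: operating point -> expected reading
        self.residuals = deque(maxlen=window)  # rolling window of (measured - expected)
        self.sigma_limit = sigma_limit

    def update(self, operating_point, measured):
        expected = self.expected_model(operating_point)
        residual = measured - expected
        self.residuals.append(residual)
        if len(self.residuals) < 10:           # wait for enough history before judging
            return None
        mu, sd = mean(self.residuals), stdev(self.residuals)
        if sd > 0 and abs(residual - mu) > self.sigma_limit * sd:
            return {"alert": "anomaly", "residual": residual, "expected": expected}
        return None

# Hypothetical usage: spindle load (kW) expected to scale with feed rate.
monitor = MachineTwinMonitor(expected_model=lambda feed_rate: 0.8 + 0.05 * feed_rate)
alert = monitor.update(operating_point=120.0, measured=7.4)
if alert:
    print("Schedule maintenance before the next run:", alert)
```

In practice the expected-value model would come from the twin itself (physics-based or learned), and the alert would feed the maintenance scheduling workflow described above.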
3.2 Scope

The capabilities of modern computing at the edge, on-premise, and in the cloud provide a more expansive vision of scope. Hundreds of parameters can now be measured around a machine's operations. Extrapolated across a factory, these numbers scale exponentially. Quality measurements can be incorporated into the holistic picture of the product from the first stages of production, through Work in Process (WIP), to final completion in the factory. Characteristics of the materials themselves can be used to provide a prescriptive view of the quality of the final product and/or to help optimize the performance of the plant via a Digital Twin in Manufacturing. For example, in the oil & gas industry the specific type of crude oil is used to set refining parameters in order to optimize the performance of the refinery. A Manufacturing Digital Twin assesses and determines optimal performance targets for the specific material inputs. As the refinery produces, operational data from IIoT devices, data historians, and Manufacturing Execution Systems (MES) is leveraged by the Digital Twin to correlate data across all the elements of manufacturing (crude oil, additives, other raw materials, equipment, labor, tooling/containers, etc.). This correlation of data provides meaningful real-time status and improves performance. The Manufacturing Digital Twin can provide indicative trends to the planner, who can then test scenarios to deploy to the refinery so that plant performance can be optimized.
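The fragment below sketches the shape of that idea only: a twin-derived rule that maps material-input properties to recommended operating targets, which live data can then be checked against. The property names and coefficients are hypothetical and purely illustrative, not real refinery correlations.

```python
# Minimal sketch (hypothetical property names and coefficients): map crude assay
# properties to recommended operating targets that the twin proposes to the planner.
def recommend_targets(crude_assay: dict) -> dict:
    api_gravity = crude_assay["api_gravity"]   # lighter crude -> higher API gravity
    sulfur_pct = crude_assay["sulfur_pct"]
    return {
        "preheat_temp_c": 365 - 0.5 * api_gravity,           # illustrative relationship only
        "desulfurization_severity": min(1.0, sulfur_pct / 2.5),
    }

targets = recommend_targets({"api_gravity": 32.0, "sulfur_pct": 1.8})
print(targets)
```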
3.3 Data

A profound improvement in Digital Twin capabilities is around data. In fact, data is often called the "new gold" or the "new oil". Tied directly to the first two improvements, the expanded use of data is fundamental to Digital Twins. Most manufacturing and business systems deploy structured data in the form of a database. Modern Manufacturing Digital Twins build on that data to also include
Fig. 2 The contextualization of Digital Twin Data
semi-structured data – especially time series data captured by data historians. Data historians capture voluminous data, but it is often hard to leverage because it does not have the context that makes it readily meaningful. Digital Twins for Manufacturing leverage that data by creating context to provide an extraordinarily detailed view of the activity or process over time. In addition, unstructured data such as video and social media references provides a full 360-degree view. For example, in the food manufacturing sectors, vision systems are increasingly being used to inspect the final product for defects of all sorts. Modern Manufacturing Digital Twins are built using structured, semi-structured and unstructured data, as pictured in Fig. 2 [2]. The importance of going broad with the data can be found in any number of industries:
• A steel company found that by correlating vendor source into their data model they were able to improve quality and yield, despite the fact that they already leveraged detailed product specification data. In theory, the vendor source data should not have mattered; but it was key.
• A bakery chain found that correlating outside temperature and humidity led to breakthrough insights, even though they already tightly controlled the temperature and humidity inside the bakery.
• Manufacturers in the aerospace and defense (A&D) industry have repeatedly found that multivariate analysis is critical to optimizing yield and throughput of composite aerospace parts.
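A small, hedged illustration of the contextualization step: joining raw historian samples to whichever production order was running on the asset at that moment turns semi-structured time series into analyzable records. Tag, column and order names below are hypothetical.

```python
import pandas as pd

# Minimal sketch: contextualize historian readings with production-order context
# using a time-based join (each sample inherits the most recent order start).
historian = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-05-01 08:00:01", "2023-05-01 08:00:02"]),
    "asset": ["oven_3", "oven_3"],
    "temperature_c": [182.4, 183.1],
})
orders = pd.DataFrame({
    "start_time": pd.to_datetime(["2023-05-01 07:55:00"]),
    "asset": ["oven_3"],
    "order_id": ["WO-10042"],
    "product": ["sourdough"],
})

contextualized = pd.merge_asof(
    historian.sort_values("timestamp"),
    orders.sort_values("start_time"),
    left_on="timestamp", right_on="start_time",
    by="asset", direction="backward",
)
print(contextualized[["timestamp", "asset", "temperature_c", "order_id", "product"]])
```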
This expanded use of data from a wide array of sources provides valuable insights into manufacturing process performance, leading to actions such as changing manufacturing processes or even the product design itself. In other words, modern Manufacturing Digital Twins specifically address the limitations of the earlier instances of Digital Twins in manufacturing discussed above. They also go beyond them in the five additional ways listed below.
3.4 Process

As noted above, Digital Twins of Manufacturing focus not only on physical things, but also on processes. This goes beyond a machine center or work center, since manufacturing processes start with the receipt of supply materials and extend to the product leaving the factory. Many complex discrete products, such as a car or home appliance, may have dozens to hundreds of processes within a factory. Modern Manufacturing Digital Twins can capture all these business processes and their interdependencies. A key element in every business process is the human worker's interaction. Modern computing capabilities enable engineers to model human ergonomics and behavior to build an even more complete virtual representation of operational systems – how humans interact with plant equipment at each exact stage or process in production. Safety, efficiency, and quality can be simulated, assessed and optimized because of the comprehensiveness of the model. This is increasingly important as manufacturers invest in machinery, robots and cobots to automate manufacturing steps and activities, resulting in more human interaction with machinery.
3.5 Supply Chain

Digital Twins interconnect the real and virtual worlds of product and process. They provide richer and more sophisticated models that empower more realistic and holistic views of their source (product or process). Moore's Law has made computing power cheaper, more powerful, and readily abundant, so manufacturers can now leverage computing at the edge, on-premise and in the cloud to drive these Digital Twins. Analysis of the growing and evolving Digital Twins can provide profound insight into fundamental design, process and operational characteristics that would have been unattainable even a few short years ago. Supply chains have been impacted by a list of major disruptions over the last decade: the Fukushima earthquake, oil price fluctuations, generational labor challenges, trade wars, and energy shortages. As noted in Fig. 3, recent supply chain disruptions alone have forced manufacturers to reevaluate their supply chains in any number of ways [3].
Fig. 3 Factors in Supply Chain Disruption
A Digital Twin of the supply chain provides powerful insight into all the alternatives surveyed. "What-if" analysis can provide very real data to help manufacturers make critical decisions on how to respond to supply chain disruptions and to improve their competitive position. Supply Chain Digital Twins allow manufacturers to make trade-offs across supplier, manufacturing, contract manufacturing, distribution and transportation scenarios.
3.6 3D (Three Dimensional) Models

Modern Digital Twins can have a graphical representation, as shown in Fig. 4. 3D models of the manufacturing plant, key equipment, process lines, etc. can be vital to understanding operational characteristics. Examples of how 3D can provide valuable insights not otherwise possible include:
• Can a part be replaced easily, or is it in a position where multiple other parts will need to be removed as well?
• Can this part be printed via additive manufacturing?
• Can an operator safely reach the required part?
• Is there enough room for the packaging required for the new product? Can the machinery handle the larger size? Do I have floor space for all the packaging required?
• Will inclement weather create issues that will require extra maintenance over time?
Fig. 4 Screenshot of 3D Digital Twin Representation
3.7 Analytics

The Digital Twin in Manufacturing is one specialized form of analytics. It becomes the core of the analytics migration from descriptive and diagnostic to predictive, prescriptive and prognostic. Digital Twins in Manufacturing are typically implemented as real-time models of the asset or processes. The model enables detection of deviations from the intended or expected behavior and can provide deep analytical insights into the correlations and causations behind those deviations. The DTM can build on all sorts of analytical models to provide those insights, including statistical, "first principle", artificial intelligence and machine learning models. In fact, much of the power of the DTM is that it can leverage multiple models to provide profound insight. These insights can be delivered in the form of dashboards or simply as alarms/alerts.
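The sketch below illustrates, in a deliberately simplified way, how a DTM might blend two model types: a first-principles expectation (here, the pump affinity relationship that flow scales with speed) and a data-driven anomaly score standing in for a trained ML model. The constants, sigma and thresholds are hypothetical.

```python
# Minimal sketch (hypothetical models and thresholds): blend a first-principles
# expectation with a data-driven score and surface the result as a dashboard status.
def pump_flow_expected(speed_rpm: float) -> float:
    # Affinity law: flow scales linearly with speed (illustrative constant).
    return 0.012 * speed_rpm

def anomaly_score(residual: float, learned_sigma: float = 0.4) -> float:
    # Stand-in for a trained ML model: normalized deviation from expected behavior.
    return abs(residual) / learned_sigma

def evaluate(speed_rpm: float, measured_flow: float) -> dict:
    residual = measured_flow - pump_flow_expected(speed_rpm)
    score = anomaly_score(residual)
    status = "alarm" if score > 3 else "watch" if score > 2 else "normal"
    return {"residual_m3_per_min": round(residual, 3), "score": round(score, 2), "status": status}

print(evaluate(speed_rpm=1450, measured_flow=15.8))
```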
3.8 Simulation

As a manufacturer progresses and has the ability to move up the analytics stack from descriptive and diagnostic analytics to predictive, prescriptive and prognostic, the goals of the Digital Twin morph. Early in analytics maturity the goal of the Digital Twin is to replicate the asset so it can be understood: What is happening? Was that event meaningful? Over time and with increasing maturity, the goal shifts
to predicting and prescribing the future. Core to prescription is "what-if" analysis and, therefore, simulation. Many Digital Twin projects in manufacturing stagnate because they do not provide enough insight for proactive decision support. In a 2021 Industry Week survey [4], over 50% of manufacturers surveyed were planning new facilities or actively building them. Manufacturers benefit greatly from simulating changes within a plant and/or the benefits of a new plant. Simulation is a key tool in proactive decision support and something only recently enabled for a wide range of applications as the cost of computing power has come down and mathematical models have become more user friendly. These elements of a DTM are important across all implementations, but DTM deployments vary widely across industries. Let's take a look at the variation across industries to get a more nuanced picture of the DTM.

Key Point
The expanded use of data, models, and processes, combined with optimization and simulation, is fundamental to Digital Twins.
4 Differences Across Manufacturing Industries

Digital Twins have different manifestations in the process industry than in discrete or even batch manufacturing, as indicated in Fig. 5.
4.1 Manufacturing Digital Twin in Process Manufacturing

Characteristics and history have shaped how Digital Twins have been deployed in process industries such as oil & gas, chemicals, and mining & metals. First and foremost, process (especially continuous process) manufacturers tend to be asset intensive. Therefore, APM has been a key focus of Digital Twin programs in the process industries. Second, the asset intensiveness of these industries has led to widespread deployment of data historians to capture large volumes of time series data recorded in milliseconds. Many of these systems have been deployed for many years, so detailed historical data is often abundant for many of the key pieces of equipment in these factories. The challenge is that this historical data often lacks meaningful context (especially when aggregated); it becomes useful only in context, which is what the Manufacturing Digital Twin enables. Third, process manufacturers typically have a data model for their plants and operations. Process Simulation and Optimization are commonly deployed in these industries to offer advanced decision support. These elements come together to form the core context of a Digital Twin in process manufacturing.
Fig. 5 Attributes and aspects of Digital Twins for Discrete and Continuous Manufacturing
In Fig. 6, LNS Research provides a visualization of all the factors that can go into the Digital Twin data model of a simple control valve in the process industries [5]. Of course, complexity escalates as one goes from a simple control valve to the overall plant, but the reality still holds! A modern Manufacturing Digital Twin is built with a comprehensive set of data, including the physical properties; "first principles" of chemistry, physics, and thermodynamics; 3D model data; algorithmic parameter data; real-time data from actual operations; and historical and maintenance data. From that mix of data the Digital Twin can provide key analysis, simulations, and insights about performance, from the plant down to the valve example below.
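To show how "first principles" and live data meet in even the simplest such twin, the sketch below uses the standard liquid control-valve sizing relation Q = Cv * sqrt(dP / SG) (Q in US gpm, dP in psi, SG = specific gravity) to compare expected flow with measured flow. The Cv and readings are illustrative values only; a persistent deviation might hint at fouling or wear.

```python
import math

# Minimal sketch: first-principles expected flow for a control valve vs. measured flow.
def expected_flow_gpm(cv: float, dp_psi: float, specific_gravity: float) -> float:
    return cv * math.sqrt(dp_psi / specific_gravity)

rated_cv = 48.0                                   # from the valve datasheet (hypothetical)
measured = {"dp_psi": 22.0, "flow_gpm": 190.0, "sg": 0.92}

expected = expected_flow_gpm(rated_cv, measured["dp_psi"], measured["sg"])
deviation_pct = 100 * (measured["flow_gpm"] - expected) / expected
print(f"expected {expected:.0f} gpm, measured {measured['flow_gpm']} gpm, "
      f"deviation {deviation_pct:+.1f}%")
```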
4.2 A Customer Example in the Mining Industry

An example may help. A large mining company is using a Digital Twin of its mining operations to improve operational performance within the mine. Using a 3D model of the mine as the key visual and modeling tool, ergonomic and process simulations help the mining company execute the different types of "what-if" analysis needed to build an optimum plan. Near real-time data from SCADA provides detailed data on the operation of the crusher, forms the core of an APM solution, and ensures optimum operations planning and execution. The execution system provides an
Fig. 6 An example of a Digital Twin Model
accurate, real-time picture of what is happening across the mine (where diggers are, where trucks are, which trucks need to be refueled) to help build the sophistication and accuracy of the twin. In addition, the mining company can simulate each operation to ensure the safety of workers in a dangerous environment. Further, the digital twin of mining operations allows the concentration of key expertise in central operations centers, where recruiting is significantly less challenging than at very remote mine locations.
4.3 Manufacturing Digital Twin in Discrete Manufacturing

Discrete industries such as automotive, aerospace, industrial equipment and electronics have progressed further in the deployment of digital twins than the process industries. As much of this book has noted, Digital Twins of discrete products are readily attainable, if not yet completely common. The newest challenge for discrete manufacturers is the Digital Twin of plants and processes. Discrete manufacturers face two major challenges in these more advanced forms. First, data exists in a wide array of systems and equipment, but it is not as uniform in structure as the data found in the data historians used in process manufacturing. Therefore, discrete
manufacturers often face a data challenge and must normalize data and fill data gaps by adding sensors or other forms of automation, or by deploying systems, before they fully benefit from a closed-loop manufacturing Digital Twin. And, when they do, they must model and build their Manufacturing Digital Twin from newly collected data, so it takes time to build a meaningful model of operations. Second, discrete manufacturers are data model challenged. Few discrete manufacturers have a holistic digital representation of the plant. Individual lines or pieces of equipment may have a digital definition, but seldom is the plant comprehensively modeled. To compound the challenge, an individual piece of equipment may be used for multiple products or in multiple ways, so the "context" of the associated data (which process was it executing and which materials was it working on) becomes critical. So even as discrete manufacturers begin to capture more data by applying more sensors and automation, data contextualization becomes a major challenge because of the lack of a plant-wide data model. Recent advancements in scanning technology now have the depth and detail to accelerate the creation of virtual representations of a factory, production line or work cell. These scanning devices create a 'point cloud' which in turn can be imported into a 3D platform to create the basis of a visual 3D model that represents the factory. These scanning technologies can typically be used to scan an average-size factory in a matter of hours. By doing so, manufacturers can significantly reduce the time and cost of creating their virtual factory model. This allows the manufacturing or industrial engineers to focus on configuring the parameters, business process characteristics, and attributes of the factory rather than the visual representation of the factory itself. Manufacturing Execution Systems (MES)/Manufacturing Operations Management (MOM) systems offer a key method for discrete and batch manufacturers to address data and data model challenges and, therefore, are typically a key component of a Manufacturing Digital Twin strategy. MES/MOM systems provide visibility into, control over, and synchronization across the elements of manufacturing: material, production, equipment, tooling/containers, data, labor, and suppliers. As Tom Comstock of LNS Research [6] notes:

Discrete and batch manufacturers are often struggling in their transformation programs because they do not have a plant-wide or network-wide data model (process manufacturers typically do). The comprehensive nature of MES provides that data model … [by] providing a system of record across the six elements of manufacturing.
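The shape of such a plant-wide data model can be sketched very simply: a few linked entities that give every raw reading its manufacturing context. The field and entity names below are illustrative, not a real MES schema.

```python
from dataclasses import dataclass
from typing import List

# Minimal sketch (illustrative names): a plant-wide data model in the spirit of an
# MES/MOM system of record, linking readings to equipment, orders, material and labor.
@dataclass
class Equipment:
    equipment_id: str
    work_center: str

@dataclass
class ProductionOrder:
    order_id: str
    product: str
    material_lots: List[str]
    operator_id: str
    equipment: Equipment

@dataclass
class Reading:
    timestamp: str
    tag: str
    value: float
    order: ProductionOrder  # the context that makes the raw value meaningful

press = Equipment("PRESS-07", "stamping")
order = ProductionOrder("WO-2201", "door-panel", ["LOT-A17"], "OP-114", press)
reading = Reading("2023-05-01T08:00:05Z", "press_force_kN", 612.0, order)
print(reading.order.product, reading.order.equipment.work_center, reading.value)
```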
MES/MOM systems have not yet been widely deployed in discrete and batch manufacturing, with no more than 20% of plants having been automated in this way. MES/MOM systems have significant value in themselves. These systems can help reduce manufacturing costs, improve efficiency, and increase quality and compliance. Their value increases notably if they become core to the data model of a Manufacturing Digital Twin strategy, by leveraging their structure to provide context across and between all the elements of manufacturing.
4.4 A Customer Example of Digital Twin in Discrete Manufacturing: An Automotive OEM

Let's see if we can bring all this together by looking at one expansive case study: the usage of Manufacturing Digital Twins in a global automotive OEM. This company uses their Manufacturing Digital Twin to support their vision for innovation – innovation both in the automobile and in how the automobile is produced. This starts with leveraging the R&D 3D product design of the different vehicle makes and models, along with the different variations and options that each vehicle model may include. Combined with this 3D product data are the specific manufacturing plant's process attributes and the resources within the bill of materials used for those vehicles in the specific factory. That's a lot to consume, so let's break this down into details. The Digital Twin of the vehicle is a virtual representation of the physical make and model. To understand the Manufacturing Digital Twin, as mentioned previously, the factory layout, assets, and infrastructure also require a 3D model and data set. This includes all the building infrastructure such as walls, paint booths, assembly lines, robotic lines, material handling equipment, roof lines, all power and power tool locations, and all workstations. The last element is the process plan and bill of materials, which also must be represented in the 3D model. The process plan, simply put, is the set and sequence of the detailed steps to assemble the vehicle. The bill of materials represents all the parts, material, and resources needed to build the vehicle described in the process plan. These three sets of data bring together the foundational elements that represent the Manufacturing Digital Twin. With these in place, simulation is performed to assess everything from the highest level of factory performance to the detailed performance of assembly and production lines, business processes, resource utilization, worker safety, and inventory or material consumption, in a virtual 3D environment prior to the plant being built or reconfigured. Let's look at some detailed examples with this OEM. With the 3D model in place, this automotive OEM has transitioned from the time-consuming and labor-intensive process of a physical model changeover or new model introduction to a virtual validation of each car model within a given factory. By having a Manufacturing Digital Twin of the factory encompassing product, process and the physical plant itself, engineers are able to simulate a host of different scenarios. Let's drill into a new model introduction example: the Manufacturing Digital Twin of a given assembly factory enables the OEM R&D engineers and manufacturing engineers to see changes to the vehicle options, part consumption, and process steps concurrently, allowing the different departments to visualize when a change is made. For example, if the process plan for a car model assembly does not reflect the proper material consumption of a part or the correct location for the sub-assembly, the engineer will see this in the Digital Twin and make the corrective change. In doing so, the change may impact the assembly line configuration, requiring additional space. The manufacturing engineer will be able to understand the impact of the process plan and change the line layout to best fit the assembly process well before the plant is reconfigured for the new model. The Manufacturing Digital
Twin allows the product engineers, manufacturing engineers, and the factory personnel to work off the same version of the "truth". By doing so, the team virtually simulates how the new vehicle will move across the factory along with all the supporting activities and resources needed to assemble the car. This ensures no steps, parts or activities are overlooked in the Manufacturing Digital Twin and later in real production in the reconfigured plant. To summarize, this automotive OEM is using a Manufacturing Digital Twin to represent their factories' real world virtually. From plant characteristics, building infrastructure and business processes to the make/model of the vehicles produced in the factories, the OEM's Digital Twin accelerates their decision making and ability to innovate by:
• Virtually simulating new makes and models in a given plant to 'virtually commission' the plant for the vehicle and to reduce time, expense, excessive inventory and quality defects.
• Virtually determining how model changeovers will require new adaptations in the factory assembly and production lines, reducing downtime from weeks to days.
• Assessing how to improve worker safety amid the factory configuration changes.
• Analyzing manufacturing asset utilization.
All in all, this comprehensive and advanced Digital Twin enables the OEM to find errors virtually, prior to actually reconfiguring the plant, thereby substantially reducing waste, rework and time-to-volume production. Now that we have looked in detail at how a DTM is defined, its elements, and how it can be leveraged across a manufacturing plant in various industries, it is time to go beyond the plant and look at a Digital Twin of the Supply Chain.
Sidebar: Digital Twin of Operations? Don't Be Surprised to See the Term in the Future
Digital Twin of Manufacturing and Digital Twin of Supply Chain both leverage the same foundational elements: assets, business processes, 3D models, and timely data to analyze future plans against historical performance. Would it therefore be more appropriate to categorize a manufacturer's business, supply chain and service simply as "Operations"? Possibly we should be using the term Digital Twin of Operations? The phrase Digital Twin of Operations may be more appropriate as it encompasses manufacturing, quality, warehouse, logistics, supply chain, regulatory and maintenance processes. This isn't intended simply to create yet another acronym for the sake of technology; rather, it is meant to define and help categorize the Digital Twin of the "manufacturer's business". It's entirely conceivable that in the future there will be this level of broader Digital Twin.
Key Points
Discrete and batch manufacturers are often struggling in their transformation programs because they do not have a plant-wide or network-wide data model (process manufacturers typically do).
Some companies have effectively used their Manufacturing Digital Twin to support their vision for innovation.
5 Digital Twin of Supply Chains

5.1 Manufacturing or Supply Chain?

So far, this chapter has focused largely on Digital Twins for Manufacturing, with a comment here and there about the supply chain. With the inclusion of business processes and growing interdependencies between factories and manufacturing partners, one must consider how far the Manufacturing Digital Twin extends and where a Digital Twin for Supply Chain begins. Most view Digital Twins for Supply Chains as very immature. Yet, there are profound opportunities to leverage the Digital Twin for Manufacturing as the focal point and force behind Digital Twins of the Supply Chain. This is especially true in industries where manufacturing, product design, and product quality rule as the primary business decision support criteria. Sourcing, distribution, logistics and transportation, while important, will not require the same level of sophistication as manufacturing in a Digital Twin. In these industries the manufacturing Digital Twin 3D model, simulation, optimization and data can be extended to include the broader supply model, data and characteristics. Conversely, in distribution-intensive industries where product differentiation is based on product availability on store shelves, price, and promotion, the decision support criteria shift to topics such as product and SKU mix, landed product costs, and channel profitability. In these industries, such as consumer packaged goods and retail, the Digital Twin model emphasis is less on the activity within the four walls of manufacturing and more on those areas outside the factory. These supply chain Digital Twin 3D models emphasize the logistics and distribution characteristics, business processes, and service assets (such as delivery or installation). In both cases, simulation, artificial intelligence, machine learning and optimization are used to drive decision support and create new innovative opportunities. They can:
• Automate rudimentary, basic and repetitive decisions such as non-seasonal order fulfillment.
• Optimize the business scenarios given an 'event'. This could be anything from a holiday promotion for a retail product to a supply shortage due to scarce material or labor.
• Identify new manufacturing capacity or network configurations for expanding into new markets, creating new sourcing or verticalization strategies to drive efficiencies, or isolating and reconfiguring manufacturing or supply chain elements to mitigate risk to the business.
5.2 A Customer Example of Digital Twin of the Supply Chain: An Aerospace Supplier

Up until 2020, the aerospace and defense industry had been transforming, with new players entering the market, whether from new countries in Asia or new start-ups in Silicon Valley. Many aerospace companies have progressively shifted their supply chain strategies to outsource much of their parts production in order to focus on either integration or core competencies in specialized areas. Of course, when the COVID-19 pandemic hit in 2020, another transformational shift occurred, impacting travel and the aerospace industry overall. One aerospace supplier shifted their strategy and used a Manufacturing Digital Twin to reinforce their strategic decision-making, but also to hone operational performance while minimizing risk from outsourcing. This company made an assessment that a change in their supply chain and manufacturing was needed to be competitive. Issues with supplied parts were increasing to the point that these sourced part problems were impacting the company's ability to deliver to their end customers. The Manufacturing Digital Twin was to be a key component in executing a new strategy. How? As a solution to this challenge, it was decided that more internal manufacturing capacity was needed for building these parts. By producing parts and sub-assemblies for the final assembly factories, the company envisioned that this newly built manufacturing capacity would provide core building blocks of their strategy to develop a focus on their own intellectual property. This verticalization strategy was enabled by a Manufacturing Digital Twin. The Digital Twin was utilized to validate factory line layouts, equipment and machine center positioning, and material flows. By simulating factory configurations with the different product families, the company could assess which machining equipment would best serve their needs as well as how to reconfigure their production processes. The Manufacturing Digital Twin enabled changes to the factory layout and flows to optimize output. The engineering team used the Manufacturing Digital Twin to test all the new machinery and equipment to ascertain how to produce the parts and sub-assemblies. This 'virtual commissioning' eliminated the need for test-producing parts, which wastes material and consumes valuable resources. The company essentially could go to full-scale production much faster while ensuring better quality. The simulation performed in the Manufacturing Digital Twin allowed the company to size production rates and resource loading precisely. Virtually determining these capacities down to each piece of equipment ensured that the company would not create unwanted bottlenecks in the overall production process.
As the company continues to expand and change with new equipment or products, they intend to use the DTM to simulate, optimize, and assess possible business outcomes. As a supporting component of their verticalization strategy, the DTM has helped them reveal ways to save on component costs, improve their overall product quality, and be in a better position to pivot as their business dynamics change. The Digital Twin delivered by:
• Providing a virtual representation of their manufacturing facility and products, enabling a 'digital first' strategy.
• Virtually simulating and evaluating the potential of 'on-shoring' parts to improve their agility.
• Providing a showcase of the benefits of a 'digital first' approach.
• Promoting to new customers how the company is focusing on innovation to better serve them.
• Enabling "what-if" supply chain analysis.
Overall, the Digital Twin is enabling their long-term growth strategy while improving their ability to cope with the dynamics of the current industry environment. We have now defined a DTM, identified its key elements, and looked at deployment across process, discrete and the supply chain. But there is a nagging question that lingers: is it one twin or more? Let's find out.

Key Point
In practice, companies have developed the Digital Twin in assessing new manufacturing capacity and utilized the Digital Twin with simulation and optimization to validate factory line layouts, equipment and machine center positioning, and product flows.
6 Is It One or Many Digital Twins?

Before we can dig deeper into Manufacturing Digital Twins and Supply Chain Digital Twins, lingering questions must be answered. Is it one twin or many? How are twins managed? What is the lifecycle of a manufacturing data model or Digital Twin? Of a supply chain data model or twin? What about the digital twin of the machine or the robot in the factory? It's clear that each of these Digital Twins is a subset or superset of the others. Companies will have to put in place categorization and a lifecycle management approach across their Digital Twin program. The question becomes: is it possible to roll all the data across all of a manufacturer's plants up into one massive Digital Twin model? One holistic model of the manufacturing network, the supply chain and all the supporting detail, ranging from the future plan to the actual real-time performance across all the processes, could be the CEO's nirvana. However, this may never come to fruition. Thus, there needs to be a plan and process to manage a Manufacturing
Digital Twin in the meantime. For example, in the APM – asset – space, "…consider that you may have dozens, hundreds or even thousands of classes of assets and equipment." [7]. The digital representation of CNC equipment will be different from that of the Digital Twin for pick-and-place PCB board build equipment or material handling robots. Over time an organization may benefit from having Digital Twins for a large number of those asset classes. And, of course, each of those classes will have a specific model with its own lifecycle, as new insights are achieved over time. The Gartner Group has pictured this for us in Fig. 7 [7]. These Digital Twins are not static software objects. They change as operations evolve, additional learnings are made and business needs change. Gartner asserts "… the Digital Twin developer … will likely produce new versions of the Digital Twin object class over time… And you will be running multiple instances of the Digital Twin runtime software, one for each instance deployed in operations." [8]. So manufacturers should expect to have a vast number of Digital Twins in operation over time, just in the asset domain. And given our expansive definition of Digital Twins, manufacturers are likely to also have Digital Twins of manufacturing lines, plants, and the supply chain. The need for multiple Digital Twins may be driven by different requirements around data fidelity and/or model monitoring. All in all, large manufacturers can expect to have many twins over time (hence the data governance recommendations detailed below). With that question answered, we now explore the use cases and users of the DTM.
Fig. 7 The portfolios of Digital Twins grow on three axes
7 Use Cases and Users of Digital Twins of Manufacturing

There are hundreds, maybe even thousands, of use cases across all the different industries in the application of DTMs. Below is an exploration of four categories ranging from assets to the supply chain. Each is applicable to any manufacturing executive who is concerned with serving customers better, improving performance and building a sustainable manufacturing enterprise. This section wraps up with a fifth category, and one that may be the most important. This last category blends value, benefit and operational differentiation and is one that all CEOs are working to address.

Asset Performance Management
The most common Digital Twin for Manufacturing is Asset Lifecycle Management/Asset Performance Management. As noted above and shown in Fig. 8, this is particularly important in asset-intensive industries (the continuous process industries like oil & gas). The asset can be at any level of abstraction: component (pump), system (reactor), or meta-system (refinery). APM offers these manufacturers the ability to describe, diagnose, predict and prescribe the status of key equipment. Manufacturing Digital Twins with an APM focus provide the manufacturer with answers to questions like:
• Will the machine go down before the next scheduled maintenance?
• Can I run it in a degraded state and still meet my customer commitments?
• Can I manufacture product X with the machine given these characteristics and constraints?
The specific value of these use cases has been identified: Gartner survey data shows where discrete manufacturers are finding cost optimization value around APM, as shown in Fig. 9 [9]. APM is in many ways the core use case associated with asset-based Manufacturing Digital Twins, but it is only a starting place.
Fig. 8 The interaction between Physical Manufacturing, the Digital Twin, and Data
Fig. 9 The range of different cost optimization roles performed by Digital Twins
7.1 Users of DTM in Asset Performance Management

Plant and asset engineers would be the key users of a DTM deployed for asset management. They would use the DTM to seek analytical insight into equipment status and to look for predictions about the future. The most obvious use case is a trigger from the DTM that an asset will trend out of tolerance based on future, expected production scenarios. Such an alarm would enable plant engineers to assign and execute maintenance tasks well before any issues arise from the asset. Plant floor operators could also be key users, as the DTM model could provide insights about potential forthcoming issues based on the predictive insight. Typically the operator would interact with the DTM only through the execution system (they generally see the alarm in whatever system is their core application) but would be a key – even if indirect – user of the actionable insights. Equipment up-time can be optimized in these scenarios.
7.2 Process Optimization

In such process industries as oil & gas and chemicals, simulation is used to model plants and their operations, even before they are built. These big capital investments must be able to manufacture the product specified, ideally at a price that is competitive on the world market. New major capital projects are modeled before concrete has been poured in order to ensure manufacturability and to optimize the plant for the intended manufacturing product set. Oil and gas companies are using virtual representations to optimize the design of refineries and off-shore platforms and to train operators on how to run those assets according to an optimized plan.
7.3 Users of DTM in Process Optimization

Chemical, industrial, and process engineers are the most active users of the DTM for process optimization. They use the DTM to execute "what-if" scenarios to evaluate potential plant/refinery/mine/asset layouts and material flows. Another important use case in process optimization is risk evaluation: what is the potential for safety or quality issues to develop from each potential configuration? A virtual representation of the plant and process can provide key insights into such issues. Training personnel can use the DTM to create advanced and accelerated training models for asset operators. With the goal of minimizing "door to floor" time, training professionals can leverage the 3D model of the asset to replace the need for physical walkthroughs of the plant. Again, plant floor operators are also key but indirect users of the DTM model, as the system is typically configured to provide insights about potential forthcoming quality, safety or efficiency issues based on the predictive insights.
7.4 Plant Optimization

Discrete manufacturers are increasingly doing plant optimization. Using advanced simulation capabilities, they are modeling how products can best (efficiently and safely) be built and how plants can best execute specific orders with existing and/or potential new manufacturing resources (equipment and/or labor). These plant-level optimization simulations are key to going beyond improving Overall Equipment Effectiveness; they provide more compelling answers for improving plant performance as well as for assessing responsiveness to disruptions and speed of adaptation. "What-if" simulations are critical to optimization at this "system" level versus the component level.
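A deliberately tiny illustration of a plant-level "what-if": the throughput of a serial line is limited by its slowest station, so duplicating the bottleneck station can be evaluated virtually before any capital is spent. The cycle times below are illustrative numbers, not data from any deployment.

```python
# Minimal sketch: compare a baseline line configuration with a "what-if" alternative.
def line_throughput_per_hour(station_cycle_times_s):
    bottleneck = max(station_cycle_times_s)   # slowest station governs a serial line
    return 3600.0 / bottleneck

baseline = [42.0, 55.0, 38.0]                 # seconds per unit at each station
what_if = [42.0, 55.0 / 2, 38.0]              # duplicate the bottleneck station

print("baseline:", round(line_throughput_per_hour(baseline), 1), "units/h")
print("what-if :", round(line_throughput_per_hour(what_if), 1), "units/h")
```

Real DTM simulations are of course far richer (variability, ergonomics, material flows), but the decision logic is the same: evaluate the change virtually, then commit.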
7.5 Users of DTM for Plant Optimization

Manufacturing and industrial engineers are the most common users of this form of DTM given their responsibility for plant layout and process efficiency. Actively working on the digital model becomes core to their jobs as they seek to maximize efficiency of potential plant configurations and process flows. Production planners may also interact with the system to do "what-if" analysis around potential workloads and capacity constraints.
7.6 Supply Chain Optimization

Manufacturers are increasingly deploying Digital Twins in the supply chain space. Process, discrete and batch manufacturers are all deploying Digital Twins and simulation capabilities to optimize their supply chains against any number of factors: cost, efficiency, and order fulfillment. These models are typically deployed at a higher level of abstraction than more production-oriented twins. Rather than trying to optimize performance in near real-time, their focus is typically to enable "what-if" analysis across long time horizons to find answers to supply chain questions such as:
• How would customer deliveries be impacted by continued backups in Los Angeles ports?
• What if I shift product A to a contract manufacturer and replace it with product B?
• Can I increase production of product A if I find more suppliers of component X?
• Can my current transportation suppliers handle the volume if we launch a new promotion?
• Does adding a new shift in plant Q enable me to meet customer demands in North America?
• What happens if I am not able to add labor as planned?
• Would overtime for all 3 shifts in plant R be enough to get us back on plan?
In these times of multiple and overlapping "black swan" disruptions, manufacturers have found this "what-if" analysis critical.
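The sketch below shows, at the simplest possible level, what such scenario analysis boils down to: each "what-if" becomes a candidate configuration scored against demand, lead time and cost. The scenario names, costs and lead times are hypothetical illustrations, not recommendations.

```python
# Minimal sketch (hypothetical data): score supply chain "what-if" scenarios
# against capacity, lead time and landed cost.
scenarios = {
    "status_quo":          {"unit_cost": 11.20, "lead_time_days": 35, "capacity_per_week": 4000},
    "add_contract_mfg":    {"unit_cost": 12.05, "lead_time_days": 21, "capacity_per_week": 6500},
    "second_shift_plant_q": {"unit_cost": 11.60, "lead_time_days": 28, "capacity_per_week": 5200},
}
weekly_demand = 5000
required_lead_time = 30

def feasible(s):
    return s["capacity_per_week"] >= weekly_demand and s["lead_time_days"] <= required_lead_time

candidates = {name: s for name, s in scenarios.items() if feasible(s)}
best = min(candidates, key=lambda name: candidates[name]["unit_cost"])
print("feasible:", list(candidates), "-> lowest cost:", best)
```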
7.7 Users of DTM in Supply Chain Optimization

Supply chain planners, production planners and schedulers use the DTM to evaluate potential suppliers, product flows and promotions. Further, these users can use the DTM to identify hidden constraints that may impact production. Procurement uses the system to evaluate the risk associated with the supply chain and its potential dependence on specific suppliers.
Logistics and transportation professionals deploy the DTM to make informed decisions about the sourcing of logistics and transportation services and to identify hidden constraints.
7.8 Capturing Intellectual Property

While we have detailed four major categories, a fifth blurs the distinction between use case and benefit. This benefit doesn't necessarily present itself, nor is it explicitly revealed, in many of these scenarios. Yet, it is extremely valuable to any manufacturer. As a matter of fact, it may be more important than any use case; at a minimum, it extends the value of any use case. What is this 'thing' then? It can be described as: the capture and institutionalization of know-how, expertise, insight, tribal knowledge, company DNA and learnings that result from operational personnel's years of experience working in the different roles and capacities in the factory, distribution center, planning or other operational functions. Simply put, the Digital Twin of Manufacturing models the business environment and includes these attributes. The result is the institutionalization of these quantitative and qualitative characteristics for the entire enterprise to benefit from. The profound impact of generational labor shortages has driven manufacturers to deploy the DTM against the use case of knowledge capture and accelerated training. In many cases, this "Operational Intellectual Property" is much more important than the produced product itself. A Manufacturing Digital Twin with this operational IP enables the scaling of practices across factories and ensures consistency that ultimately improves efficiency and quality. So, where does this operational IP come from? Below are examples of how the manufacturing workforce improves and utilizes a DTM:
• An operator recognizes that a machine transmits a unique sound during a fabrication process before the equipment trends out of tolerance. The DTM can recreate this scenario by capturing operational data and simulating the fabrication process to highlight when the audible noise occurs. More importantly, it can show supervisors and operators the variable combinations that result in the condition, thereby allowing: (1) preventative measures; (2) training other operators on what to look/listen for; and (3) correlating quality issues to the event to correct via "what-if" scenarios.
• Supply chain and production schedulers know that certain days of the year will result in poor worker turnout. The schedulers can incorporate timelines and calendars into their planning horizon within the DTM to assess the impact automatically, based on historical records. Thus, the Manufacturing Digital Twin can provide production schedulers with recommendations for order and work prioritization.
• Procurement professionals develop a history with vendors. As business ebbs and flows, the procurement team knows which vendors are dependable and which have issues adapting to demand changes. A Manufacturing Digital Twin with these characteristics improves the optimization, simulation and gaming scenarios when business planners and factory schedulers are looking to execute orders in a given time period. The result is better execution based on this knowledge. • A manufacturing engineer and worker team lead have knowledge of how a work cell or production line best operates given the available worker expertise, and they know through experience where challenges exist due to high worker turnover. Within the Manufacturing Digital Twin, the production line, worker movements, processes and operational data come together to visualize and simulate these scenarios. Through "what-if" analysis, these workers can determine what line configuration change may be needed, how to train line workers differently, and even where automation may aid the line workers while reducing worker fears of job loss. • An executive would like to assess how best to serve a new market and source product from different factories with the goal of identifying the lowest landed costs and best on-time delivery options. The executive knows how demand trends will occur based on past experience in selling another product to the same market. The Manufacturing Digital Twin can represent the demand trend characteristics and changes over time. The executive can 'game' different scenarios to see how best to capture market share by balancing total landed costs with customer order fulfillment and delivery. By looking at the analysis, the executive can make the best-informed decision. These are just a few examples of how operational IP can be institutionalized and scaled to support the manufacturing network and how different users may benefit. These examples matter to any manufacturer and provide a perspective on which users would be engaged with each category of use cases. The scope of the use cases and variety of users engaged with DTM hint at the significant value of DTM. But we should go beyond hints to detail that value – read on for this subject in the next section.
8 Business Value of Digital Twins in Manufacturing Business value is realized in a number of ways. 1. Operational Innovation Simulation and optimization scenario analysis or ‘gaming’ provides accelerated new product introduction by synchronizing impending new products with manufacturing capabilities. Just as importantly, Manufacturing Digital Twins accelerate time-to-volume production by using virtual commissioning and “what-if” planning to stress test plant configurations early and by largely eliminating test
production runs. Can they be manufactured with existing equipment and line configurations? Ergonomic simulation can tell whether operators can actually reach components and safely manufacture products based on new line layouts. Virtual builds can be done to assess the impact and simulate conditions – all prior to the product's initial build. Finding errors and manufacturing challenges before production starts will reduce both cost and time. Moreover, this accelerates product manufacturing and new product introduction. In today's business climate, more and more manufacturers are looking at nearshoring or reshoring manufacturing. Whether it's outsourced parts or shifting production capacity closer to demand, Manufacturing Digital Twins enable companies to: • Assess if these strategies can be executed and the impact these strategies will have on existing factory footprints. • Determine the best approach to execute a new manufacturing strategy and avoid large capital expenditures in building a new factory vs. reconfiguring existing plants. • Evaluate possible gains of new technology or new processes that can be used in a factory or how new products can be scaled up to high production volumes. 2. Improved Agility and Resiliency (especially with Supply Chain) Most manufacturers have settled their (written or unwritten) standard operating procedures with change tolerated/encouraged only at the edges of the business. Trade wars, wide demand fluctuations, and supply chain disruptions have all demonstrated that these standard operating procedures (SOPs) were brittle and poorly able to handle disruptions. The cost of the reliance on these poorly tested and poorly analyzed SOPs has been market share loss, employee retention challenges and profitability declines. Modern simulation technologies, when applied to manufacturing equipment, lines, plants and supply chains, enable manufacturers to challenge their SOPs with alternatives and "what-if" analysis. What if we also made product X in plant 3 instead of only in plant 2 – could the supply chain support it? What if we had automated batch release for product A? What if we bought from supplier B instead of supplier A? The more sophisticated the model, the more realistic the simulation and the more likely new insights will drive positive change. 3. Manufacturing Efficiency A primary business value associated with Manufacturing Digital Twins is improved manufacturing efficiency through the optimization of the manufacturing process. This can be achieved in any number of ways. Basic improvement starts with machine uptime and overall equipment effectiveness (OEE). These metrics improve through predictive and prescriptive diagnostics of machine conditions. Prognostic predictions enable improved optimization over time. Machine and labor idle time can be reduced by providing insights needed to optimize material flows. "What-if" analysis allows engineers to evaluate line and supply chain changes and their potential impact on efficiency. Line configuration
analysis can be done to see the impact of leveraging new automation, the addition of a 3D printing machine, and new cobots. What space is needed? How can factory flow or throughput be optimized? All of these types of questions can be answered to improve manufacturing processing and efficiency. 4. Improved Quality A comprehensive view of products and associated manufacturing processes enables manufacturers to understand what factors are impacting quality – in production, in finished goods and at the end user. Real customer data drives improvement in design and manufacturing like few other things. Sophisticated DTM models running against streaming, near real-time data can catch adverse trends before they impact quality or, at least, minimize the amount of scrap and rework. 5. Improved Compliance Sophisticated models lead to sophisticated predictions of outcomes. Those predictions can be about the regulatory burdens in an organization. For example, the FDA requires pharmaceutical manufacturers to execute according to current good manufacturing practices (cGMP) and to be able to demonstrate that each batch of pharmaceuticals has been manufactured according to the process validated as conforming to cGMP. The process to document each batch record and to sign off on the associated documentation can be laborious, time-consuming and quite expensive. The industry is aspiring to move to automated release/review only upon exception. Manufacturing Digital Twins can play a key role in automated release because they can encapsulate all the key criteria for approval and can check each release against that "golden batch" modeled in the Manufacturing Digital Twin – flagging exceptions and automating approval for conforming batches.
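As an illustration of this "review by exception" idea, the sketch below checks a batch record against tolerance bands derived from a golden batch. It is a simplified, hypothetical example – the parameters, tolerances, and values are invented, and a real release workflow would also cover electronic signatures, audit trails, and the full validated recipe.

```python
# Sketch of "review by exception" against a golden batch. The parameters,
# tolerance bands, and batch data below are hypothetical placeholders.

GOLDEN_BATCH = {
    # parameter: (target, allowed deviation)
    "reactor_temp_C": (72.0, 1.5),
    "mix_time_min":   (45.0, 3.0),
    "ph":             (6.8,  0.2),
}

def review_batch(batch_record):
    """Return a list of out-of-tolerance parameters; an empty list means auto-release."""
    exceptions = []
    for param, (target, tol) in GOLDEN_BATCH.items():
        value = batch_record[param]
        if abs(value - target) > tol:
            exceptions.append((param, value, target, tol))
    return exceptions

batch = {"reactor_temp_C": 72.4, "mix_time_min": 49.5, "ph": 6.75}
issues = review_batch(batch)
if issues:
    print("Flag for manual review:", issues)   # mix_time_min deviates by 4.5 > 3.0
else:
    print("Conforms to golden batch - release automatically.")
```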
Sidebar The Value of Manufacturing Digital Twins in the COVID-19 Pandemic DTMs have been effective in helping manufacturers increase their agility and resiliency in the face of the COVID-19 pandemic. • The primary focus has been on employee and customer safety. Modeling physical distancing requirements/capabilities across a manufacturing line and executing "what-if" analysis around potential line changes is one of the most common use cases related to COVID-19. • Executing "what-if" scenarios across the supply chain is becoming equally important in the face of all the supply chain disruptions. • Enabling autonomous equipment maintenance and remote expert analysis. • Remote patient monitoring.
LNS Research has found that manufacturers are able to significantly improve manufacturing, quality, and new product introduction (NPI) benefits through Digital Twins in industrial operations specifically. More importantly, LNS Research has shown that manufacturers are able to improve the financial performance of the company in the form of increased revenues and operating margins and/or decreased COGS as a result of their programs. The benefits are widely realized by leaders (over 80% of leaders increased revenue and/or reduced COGS by 3%). Leaders are realizing significantly more benefit than followers. For example, leaders are 57% more likely to have reduced COGS by at least 10% compared to followers [10]. Manufacturing Digital Twins are especially powerful when empowered with simulation and optimization capabilities. Sophisticated "first principles" models provide the core. Infusing near real-time data further builds the sophistication and the value of the virtual representation by capturing performance. Manufacturing or industrial engineers can do highly informed "what-if" analysis at the machine, line, plant, and supply chain level to assess options, trade-offs, and risks, and then deploy those strategies to improve performance. Let's take stock. We have defined and pluralized DTM and then discussed the key elements, variations across industries, use cases, and users. We clearly saw there are significant manufacturing and financial benefits in deploying DTMs. But the phrasing about benefit realization referenced "leaders" and "followers", demonstrating that realizing the value is not automatic: there is a right and a wrong way to do Digital Twins in Manufacturing. There are best practices associated with DTM. So, what are they? That is the subject of the next section. Key Point A primary business value associated with Manufacturing Digital Twins is improved manufacturing efficiency through the optimization of the manufacturing process.
9 How to Realize the Benefits of Digital Twins in Your Organization: Best Practices in the Deployment of Digital Twins in Manufacturing To realize this business value, manufacturers must figure out how to get started and how to execute a Digital Twin program effectively. We believe these are the keys to success in deploying Digital Twins in Manufacturing (see also our chapter on implementation).
9.1 Start Lao Tzu once said: "A journey of a thousand miles starts with a single step." Your organization may eventually have hundreds or even thousands of Digital Twins across the enterprise. The scale of what may eventually be should not deter you from starting your journey.
9.2 Pursue a Step-Wise Strategy: Systematically Start with Your Largest Pain and Devise a Plan for Quick Wins, Then Pursue Bigger Challenges Start by prioritizing your efforts in terms of greatest business return for the effort required. Manufacturing Digital Twin technology can be deployed against virtually every piece of equipment and process in manufacturing. The question is not whether you can, but whether you should – and the answer is no. Gartner has summarized it succinctly: if you have many assets, pieces of equipment, or process activities, it is better to initially focus only on the most essential needs for your 'problem children' – those assets or processes in your operations that:
• Are unpredictable or inefficient because of situational awareness blind spots. • Have a high commercial impact when behaving inefficiently or when they fail. • Can be cost-effectively improved using Digital Twin technology [8]. That said, a focus exclusively on quick wins and "low-hanging fruit" can be counterproductive, and a balanced use case model is needed over time. Grabbing quick wins to build momentum, coupled with strategic investment for big innovation, is a pairing for success. Key to understanding what is strategic is aligning your Manufacturing Digital Twins effort to the overall corporate strategy.
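One way to operationalize these criteria is a simple weighted scoring of candidate assets, as in the sketch below. The assets, scores, and weights are entirely hypothetical and would in practice come from maintenance history, financial data, and feasibility assessments.

```python
# Hypothetical weighted scoring of candidate assets against the three criteria
# above: visibility gaps, commercial impact of failure, and feasibility of a
# cost-effective Digital Twin. Scores run 1 (low) to 5 (high); weights are illustrative.

WEIGHTS = {"blind_spots": 0.35, "commercial_impact": 0.45, "feasibility": 0.20}

candidates = {
    "filling line 3":       {"blind_spots": 4, "commercial_impact": 5, "feasibility": 3},
    "packaging robot cell": {"blind_spots": 2, "commercial_impact": 3, "feasibility": 5},
    "boiler house":         {"blind_spots": 5, "commercial_impact": 4, "feasibility": 2},
}

def priority(scores):
    """Weighted sum of the three criteria for one asset."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

for asset, scores in sorted(candidates.items(), key=lambda kv: priority(kv[1]), reverse=True):
    print(f"{asset:22s} priority = {priority(scores):.2f}")
```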
9.3 Deploy a Business-Led Technology Strategy Digital Twins are about the application of digital technologies to achieve step-change improvement. The program must be business-led to succeed. In fact, Gartner views executing a "technology centric" strategy as one of the most avoidable mistakes: "Organizations should watch out for buying into the hype of the 'next big thing'… it is never just doing the next big thing" [11]. Put simply, DTMs "…should be guided by the broader business strategy" [12]. If your corporate strategy is heavily weighted on growth from new products, your Digital Twin strategy should be tied closely to the "connected product" use cases described throughout this book and to an aligned strategy in Manufacturing Digital
Twins to increase manufacturing capacity, new product introduction capabilities, and agility. If the corporate strategy is aligned to cost savings, Manufacturing Digital Twins focused on efficiency are the key starting point. Both require modern approaches to Manufacturing Digital Twins, utilizing simulation and optimization to test and future-proof new strategies or scenarios.
9.4 Develop a Governance and Lifecycle Approach for Manufacturing Digital Twins As noted above, each Digital Twin model will have a full lifecycle as new insights are found and new methods of collecting and analyzing data are applied. The Digital Twin of Manufacturing, in many ways, is about collecting and managing data in the context of the manufacturing model and processes. This ever-widening array of data sources, coupled with the representative 3D model, is key to finding actionable insights in the plant and across the supply chain. IT has been handling data for a long time and is getting increasingly sophisticated in data operations, also called DataOps. LNS Research has summarized the challenge: Define and implement a data governance strategy – ensure those that need access to data have it. Those that own data are given the tools to manage it… Creating data governance transforms your organization... [12].
Manufacturing Digital Twin teams need to learn from their IT organization, or outside resources, the value of DataOps in industrial applications. The fundamentals for industrial applications come down to: • Ownership and management of the data and the data lake strategy for the company. • The governance model for the data and the data lake(s) associated with operations. • Definition and management of what is likely an expanding data model. • Ownership of data (typically, owners are those closest to the data so it can be easily understood). • Ownership of decisions regarding data and model lifecycle (what are the triggers for needing new models, new data collection methods/technologies, etc.).
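A minimal sketch of how such governance attributes might be recorded as structured metadata alongside each operational data set is shown below; the field names, owner, and trigger events are hypothetical and serve only to make the bullet points above concrete.

```python
# Sketch: capturing the governance fundamentals above as structured metadata
# attached to each operational data set. Field names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DatasetGovernance:
    name: str
    owner: str                      # closest to the data, per the guidance above
    storage: str                    # which data lake / zone holds it
    schema_version: str
    retention_days: int
    lifecycle_triggers: list = field(default_factory=list)  # events that force a data/model refresh

vibration_data = DatasetGovernance(
    name="press_line_vibration",
    owner="plant-2 reliability engineering",
    storage="operations-lake/raw/plant-2",
    schema_version="1.3",
    retention_days=730,
    lifecycle_triggers=["new sensor type installed", "press retrofit", "model drift alarm"],
)

print(vibration_data)
```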
9.5 Expand Your IT Competencies Increasingly, new job titles are showing up in HR departments to deal with the change inherent in Manufacturing Digital Twin uses. We have all heard about data scientists, but "data wranglers" or "data engineers" – the people in the plants who are collecting, aggregating, contextualizing, and otherwise making OT
data meaningful – are just as important. The job titles vary widely, but the reality is the same: there is no way around "garbage in, garbage out" except to make the "data in" meaningful. LNS asserts that "…you need to put in place an organization that can support the creation and growth of the enterprise data model" [12]. Companies must focus beyond the fancy new job titles. Competency must be built around the data flows:
• Connectivity and device management
• Data contextualization
• Integration
• Application enablement
• Analytics
• Security (both traditional IT-oriented data security and OT-oriented asset security)
One special skill that IT organizations need: “Expand your existing IT competencies (e.g., information management) to address new Digital Twin needs (e.g., Internet of Things [IoT] time-series data)” [13].
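To illustrate what making IoT time-series data meaningful can involve, here is a deliberately small sketch of contextualizing a raw controller tag with asset and line information before it lands in the data lake. The tag names, mapping, and values are hypothetical.

```python
# Sketch of the "data wrangling" step: turning a raw OT time-series reading into
# a contextualized record the rest of the enterprise can use. Names are hypothetical.
from datetime import datetime, timezone

ASSET_CONTEXT = {
    "PLC-07/AI3": {"plant": "Q", "line": "3", "asset": "extruder-2", "signal": "barrel_temp_C"},
}

def contextualize(tag, raw_value, ts=None):
    """Attach plant/line/asset context to a raw reading, or fail loudly for unmapped tags."""
    ctx = ASSET_CONTEXT.get(tag)
    if ctx is None:
        raise ValueError(f"unknown tag {tag}: fix the mapping before data reaches the lake")
    return {
        "timestamp": (ts or datetime.now(timezone.utc)).isoformat(),
        "value": raw_value,
        **ctx,
    }

print(contextualize("PLC-07/AI3", 214.6))
```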
9.6 Plan for End-to-End System Integration Data is the lifeblood of Manufacturing Digital Twins. As noted above, core to this is pulling together data from business systems, operational systems, and external sources such as weather, vendors, customers, and suppliers. The challenge is to pull all those sources together meaningfully and to leverage them within the context of the business process – and, where possible, to automate basic decision support. "To fully capitalize on your Digital Twin investments, integrate them with business applications to enhance business process automation" [13]. Key Point A "technology centric" strategy is one of the most avoidable mistakes. Focus on business, value, and the team for success.
10 The Time Is Now The business value of the Digital Twin for Manufacturing has been demonstrated by leaders in the deployment of these technologies. Manufacturing Digital Twins are proven and have improved manufacturing, quality and new product introduction metrics and, more importantly, the overall financial performance of the company. We saw how mining, aerospace and automotive companies are using Digital Twins to improve manufacturing performance and increase resiliency to disruptions that have been echoing around the world. Manufacturers in virtually every industry have
benefitted from the deployment of Digital Twins and have improved the agility of their organizations. Computing power and largely off-the-shelf applications are now readily available for manufacturers to create Digital Twins for key classes and individual pieces of equipment. Machine behavior can be comprehensively modeled and simulated in a Manufacturing Digital Twin to reduce downtime and improve OEE. In addition, human ergonomics, production lines, material handling processes and other activities can also be cost-effectively modeled, enabling Digital Twins to go beyond assets to processes, powering a complete Manufacturing Digital Twin of lines, plants and even supply chains. And with the advancement of simulation, optimization, and 3D technologies, a 360-degree view of the factory, production line, or supply chain is possible and actionable. Digital Twins of Manufacturing are no longer a luxury; they are core to business. Competitive success will increasingly depend on an organization's ability to exploit this now-core technology. Companies cannot afford to linger and wait for the next disruption to create a reason to act. It all starts with four words. Take the first step! Key Point Digital Twins of Manufacturing are no longer a luxury; they are core to business.
References
1. Grieves, M., & Vickers, J. (2016). Origins of the digital twin concept.
2. Perino, J. (2019). Diverse players in the process digital twin market. LNS Research. https://blog.lnsresearch.com/digital-twin-diverse-players
3. Dassault Systèmes and Industry Week. (2021). Virtual twins in manufacturing 2021 progress report (p. 10).
4. Dassault Systèmes and Industry Week. (2021). Virtual twins in manufacturing 2021 progress report (p. 17).
5. Perino, J. (2019). The process manufacturing digital twin (p. 14). LNS Research.
6. Comstock, T. (2021). The 'Holy Grail' and the 'Puzzle' in discrete and batch manufacturing applications. LNS Research. https://blog.lnsresearch.com/the-holy-grail-and-the-puzzle-in-discrete-and-batch-manufacturing-applications
7. Lheureux, B., & Velosa, A. (2021). What should I do to ensure digital twin success (p. 2). Gartner.
8. Lheureux, B., & Velosa, A. (2021). What should I do to ensure digital twin success (p. 3). Gartner.
9. Lheureux, B., Velosa, A., & Halpern, M. (2020). Companies heavily use digital twins to optimize operations (p. 6). Gartner Survey Analysis.
10. Murugesan, V., Littlefield, M., & Comstock, T. (2021). Maximizing payback with best practices from next generation leaders (p. 14). LNS Research.
11. Pettey, C. (2020). Avoid these 9 corporate digital business transformation mistakes. Gartner. https://www.gartner.com/smarterwithgartner/avoid-these-9-corporate-digital-business-transformation-mistakes/
12. Hughes, A. (2020). Analytics that matter in 2020. LNS Research.
13. Lheureux, B., & Velosa, A. (2021). What should I do to ensure digital twin success (p. 1). Gartner.
Eric Green brings 28 years of manufacturing, supply chain, and software experience to Dassault Systèmes in his global role of leading DELMIA marketing, industry, business development and customer advocacy. Prior to his current role, Eric held marketing, sales and consulting leadership positions at Apriso Corporation, a leader in manufacturing software, Blue Yonder (i2 Technologies), a leading supply chain software provider, and PepsiCo, a recognized leader in the snack, food and beverage industry.
Leading the Transformation in the Automotive Industry Through the Digital Twin Nand Kochhar
Abstract A mobility transformation is taking shape across multiple industries. The transformation is a combination of technological, regulatory, and societal changes all pushing for greater safety, sustainability, and equity in human mobility. In the automotive industry, this transformation has manifested in the immediate push for vehicle electrification, the continuous increase in automotive electronic and software content, and the continuing development of connected, automated and autonomous vehicle (AV) technology. This transformation is pushing the complexity of vehicle development to new levels in multiple aspects. Electrification, autonomy, and connectivity are driving requirements for more computing power, more electronic control units, and clever packaging of these many devices into the vehicle. Meanwhile, more and more features are being implemented through software in all types of vehicles, increasing software complexity and drawing additional focus during vehicle development. As each of these subsystems becomes more sophisticated, the task of integrating them into a robust, safe, and high-quality vehicle becomes even more difficult. The question for today, then, is how to continue to advance automotive design, features, and technologies to realize the potential of the future of mobility, despite the challenge of growing complexity. We believe a new approach to vehicle development is necessary. Automotive manufacturers must embrace digitalization and break down the boundaries that often exist between engineering domains and the stages of product development and manufacturing. Key to this approach is a comprehensive Digital Twin that captures every aspect of the vehicle design and production. Using such a Digital Twin, automotive manufacturers can connect engineering teams from across the electrical, electronic, software and mechanical domains. This means automotive manufacturers will be able to design, verify and validate entire vehicle platforms, ensuring the highest standards of safety, reliability, and passenger comfort. N. Kochhar (*) Automotive and Transportation Industry at Siemens Digital Industries Software, Detroit, MI, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_27
In this chapter, we will discuss the major trends of electrification and autonomy and examine how digitalization and the Digital Twin will be crucial to the creation of the advanced vehicles of the future. Though we focus on the consumer automotive market in this chapter, the benefits of digitalization and the Digital Twin are equally applicable to other vehicle manufacturing industries, such as heavy-equipment and off-highway vehicles. Keywords Automotive · Automotive and transportation · Automotive digital thread · Automotive OEM · Automotive Startup · Digital transformation · Digital twin · Electric vehicles · Future of mobility · Mobility · Vehicle electrification
1 Electrification Aims for the Mainstream First and most immediate, it is evident that the transition away from combustion-powered vehicles, at least as the dominant form of transportation, has begun. Electric vehicle technology has undergone a rapid maturation, even when just considering progress over the last decade. When introduced in 2012, the Tesla Model S offered up to about 260 miles of driving range for a full charge [1]. The Model S is now rated for driving ranges between 348 and 520 miles on a single charge [5]. Across the EV industry, continual improvements in battery technology, battery and vehicle energy management systems, electric motors and vehicle design are making EVs more and more attractive to the consumer (Fig. 1). These improvements have helped
Fig. 1 Due to a rapid pace of innovation, electric vehicles are now able to compete with traditional combustion-engine vehicles. Today, native EV platforms offer good driving range and compelling driving dynamics, among other benefits
reduce range anxiety among consumers while also making EVs more attractive and feature-rich. The environmental benefits of mass electrification further improve the desirability of EVs for many consumers. EVs produce no tailpipe emissions and use energy very efficiently, making them less impactful on the environment when compared to traditional internal combustion engine (ICE) vehicles. These environmental benefits may prove to be a boon for cities around the world. City populations are growing around the world as more people choose to live in urban environments. Growing human populations usually translate to greater amounts of vehicular traffic, increasing congestion and degrading air quality from vehicle emissions. Developing urban areas around the world can look to EVs as one option for alleviating the pollution caused by a large influx of automobiles. As a result, sales of all-electric vehicles have been growing around the world and are expected to continue to increase in the future. According to the International Energy Agency's Global EV Outlook 2021, global EV stock increased 43% from 2019 to 2020 despite the logistic and economic hurdles of the COVID-19 pandemic. Europe led the way, with new EV registrations more than doubling in 2020, while the American and Chinese EV markets both saw increases in the sales share of EVs relative to overall vehicle sales [3]. As we overcome the logistic challenges of the pandemic, overall vehicle sales are likely to recover. And with an armada of new EV models to choose from, it's likely that more and more consumers will be choosing to plug in their next vehicle. Automotive OEMs and startups do still encounter several challenges in the production and sale of EVs at mass scale. First, EV production requires some unique safety considerations stemming from the high voltages present in the electric powertrain. As a result, legacy production methodologies must be adapted or replaced to account for the potential safety risk posed by high-voltage components. EV manufacturers also need to determine the best strategies for the recycling, reuse, or disposal of EV batteries. New materials have helped to improve battery performance and reduce the cost of battery production, but they can also be dangerous or toxic if improperly handled. The industry must continue to develop appropriate methods for processing these materials at scale. Furthermore, the EV charging infrastructure is much less developed than the vast network of gas stations on which drivers of ICE vehicles can rely. Even with the impressive range of today's EVs, the lack of charging stations, and particularly fast charging stations, in most regions presents a serious challenge for EV manufacturers and the EV industry. It's likely that any major developments of charging infrastructure will require the cooperation of EV manufacturers, various levels of government, utility companies and others. Ultimately, widespread EV adoption will hinge on the ability of automakers to offer consumers a compelling alternative to the gasoline-powered vehicles with which they are familiar. Today, we are seeing consumer opinion begin to shift. Contemporary EVs are coming to market with advanced features, plush cabins, and exciting styling to rival the best ICE vehicles available, as seen in Fig. 2. As drive
Fig. 2 Modern EVs are some of the most exciting and attractive vehicles on the market today, offering advanced features, performance, and style. (Image attribution: “Porsche Taycan” by Johannes Maximillian is licensed under CC BY-SA 4.0)
range and opportunities to recharge improve, the scales should gradually but irrevocably tip toward the EV. Automakers that can capitalize on this shift early stand to reap the greatest benefits. Throughout the industry, we have seen that digital transformation of vehicle design, testing and manufacturing contributes to faster time to market, higher product quality and greater innovation. For companies competing for leadership in the EV market, these capabilities will prove critical. Key Points • Electric vehicle technology has matured quickly in the last ten years. Today's EVs offer compelling styling, performance, and ever-increasing drive range and are attracting a growing number of consumers to purchase their first electric car. • EVs continue to face some challenges. These include special safety considerations during manufacturing and service and a charging infrastructure that lags the density of traditional fuel stations. • Innovators in the EV space will benefit from being the first to take advantage of this burgeoning market.
2 Automotive’s Digital Revolution Next, modern vehicles are commonly described as “computers-on-wheels” due to the recent explosion of computing power and electronic features that they contain. The world’s first automobiles, however, were relatively simple, and entirely mechanically operated. The first automotive electronic components were not even available until the 1930s, when manufacturers began offering vacuum tube radios. Over time, vehicles have become dramatically more complex due to technological advances and consumer trends. At the same time, the composition of vehicle systems has changed. Mechanical systems accounted for most of this complexity for much of the car’s history, but the electrical and electronic systems, software, and electronic components have steadily increased in sophistication. Today, most vehicle features are aided or enabled by electronic components, the embedded software on these components, and the underlying electrical and electronic (E/E) architecture. Engine management, braking, steering, infotainment, and other comfort and convenience features rely on the electrical and electronic systems. Software has also come to play a dominant role in vehicle functionality. Modern cars contain millions of lines of code that make up applications for everything from the most advanced infotainment and passive safety features to the automatic door locks. As vehicle features continue to evolve and grow in sophistication, previously unrelated subsystems will come into contact. Systems that evolved independently will begin to integrate and depend on each other to realize new functionalities. The introduction of cruise control in the late 1950s was the first integration of electrical and mechanical systems in a vehicle. Since then, cruise control has continued to evolve and interact with other subsystems to enable greater functionality. Adaptive cruise control systems allow modern cars to sense the relative position of vehicles in front of them and then use this information to slow down or speed up as needed to maintain a driver-determined following distance. Likewise, automated emergency braking systems can sense stopped vehicles and apply the brakes, bringing the vehicle to a complete stop even if the driver is not paying attention. The result of this innovation and integration is a tremendously complex system of electronic control units (ECUs) running advanced software, sensors, actuators, and wiring to connect it all together. The size and complexity of these architectures create new challenges for automotive OEMs and their suppliers. These challenges will only become more intense as companies continue to advance vehicle technologies, particularly in the automated driving space. As software and electronics become more important to vehicle functionality, OEMs have started making large investments to bring these new key areas of development, such as software, in-house. Large OEMs around the world are recruiting software engineers to develop basic software functions across their brands. With their own software teams, OEMs can improve the ownership experience with routine software updates to improve system performance and fix latent issues. The OEM may also offer entirely new functionalities that customers can purchase, thus
extending the life or increasing the performance and value of their vehicle, while deriving incremental revenue after the vehicle sale. This reaction by the automotive manufacturers is as clear an indication as any that the future of the automotive industry lies in the onboard software and electronics and the features they can enable. Already, fundamental (i.e., safety-critical) vehicle functions are being implemented through software hosted on increasingly powerful integrated circuit (IC) and system-on-chip (SOC) devices. Increasing levels of vehicle automation will only emphasize this trend. Likewise, consumer expectations for connected, digital experiences are growing, further reinforcing the growing importance of ECUs, computer chips, and software. Key Points • Vehicle complexity is exploding, primarily due to increasing demand for electrical, electronic, and software components. • Vehicle systems are becoming more integrated, with previously unrelated systems coming into contact to deliver new features and functionality. • Current trends in the automotive industry will only heighten the importance of software and electronics in the future.
3 The Road to Full Autonomy While EVs are gaining ground in mainstream consumer markets, autonomous vehicle technology remains under development. The idea of a car that can drive completely on its own, with no human input, is certainly compelling. Yet, these systems are a long way from the fully autonomous chauffeur we expect when we think of a self-driving car. Current self-driving systems are limited to specific driving scenarios (such as highway driving) and require optimal weather conditions to perform effectively. For now, many companies are content to take a pragmatic approach to AV development by methodically improving their autonomous capabilities one level at a time. As the technology continues to improve, both in terms of the vehicle technology and the digitalization of development processes, it stands to revolutionize how we interact with vehicles and move about our world. From the standpoint of a business transformation, autonomy promises to be a foundational change in the automotive and transportation industry. First, AVs will further accelerate the change in the nature of automotive products, from essentially mechanical ones to integrated multi-domain technology products. This is because AVs will rely on a mixture of more traditional mechanical systems (such as braking and steering) in concert with advanced software and electronics to enable sense-think-act functions. For AV manufacturers, this translates to a shift in value, and thus their development priorities, towards the onboard electronics and software. As the levels of autonomy approach SAE level 5, the disruption will only grow. Many experts expect that connected, self-driving vehicle technologies will also
change the way that consumers use or access private mobility. The budding synergy of connected, autonomous, and electric vehicle technologies has led many experts to identify a fourth key trend: transportation-as-a-service (TaaS). The fundamental idea behind TaaS is that most consumers will cease to own personal vehicles and will instead hail rides when needed, from a fleet of vehicles owned and operated by a transportation company. Contemporary ride-sharing services offer a preview of the potential benefits of a mature TaaS model. These services offer customers a largely reliable and convenient method of transportation within urban areas. However, because these services still depend on human drivers, they do not offer the improvements in road safety or congestion that automated TaaS models are expected to deliver. Ultimately, the realization of a true TaaS system will be a large societal shift, as well as a dramatic departure from traditional automotive business models. Key benefits include safer roads, less congestion, reduced cost from vehicle ownership and the convenience of automated chauffeurs that can bring us from destination to destination. And it is this last point that is perhaps most exciting of all. AVs and TaaS models should, if done correctly, offer broader access to high-quality mobility, giving more people the freedom of personal transportation. Before getting ahead of ourselves, it's important to note that there is not complete consensus on the feasibility of a true, level 5 autonomous vehicle. Some experts believe that automated vehicles will always have some constraints on their operation, such as geofencing within specific regions, limitations on the types of roads an AV will use for travel, or the need for a human driver to be ready to intervene in certain situations. To understand where this hesitance comes from, we should first discuss what distinguishes some level of automated driving from a 'full' or 'true' self-driving vehicle. The Society of Automotive Engineers (SAE) has defined six levels of vehicle autonomy ranging from 0 (no automation) to 5 (full automation), shown in Fig. 3. These levels have become the industry standard for categorizing the capabilities of an automated vehicle. Level 1 and 2 cars are increasingly common due to the proliferation of advanced driver assistance systems (ADAS) such as automated emergency braking, radar-controlled cruise control and lane-keep assistance. The most advanced systems today are level 3, in which the automated driving system can manage all aspects of the driving task on the condition that a human driver is ready and able to intervene at a moment's notice. Levels 4 and 5 continue the progressive transition of control of the vehicle away from the human and to the automated driving system. At level 5, the automated driving system must be able to manage all driving tasks under all possible conditions and scenarios, a tall task indeed. Technologically, the escalation to higher levels of autonomy requires immense onboard computing power, advanced software, sensor hardware, and networks and electrical wiring connecting all these various components together. AV manufacturers then must integrate all the electrical and electronic hardware needed for vehicle perception, decision-making and action functions within the mechanical structures and systems. The result is a significant increase in the complexity of the processes used to design, engineer and manufacture vehicles.
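For quick reference, the SAE J3016 level names summarized in Fig. 3 can be captured as a simple lookup; the short descriptions here paraphrase the figure rather than quoting the standard verbatim.

```python
# SAE J3016 driving automation levels, paraphrased for quick reference
# (see Fig. 3 for the full definitions).
SAE_LEVELS = {
    0: "No automation - the human driver performs the entire driving task",
    1: "Driver assistance - steering or speed support, one at a time",
    2: "Partial automation - combined steering and speed support; driver supervises",
    3: "Conditional automation - system drives; driver must take over when requested",
    4: "High automation - system drives itself within a defined operational domain",
    5: "Full automation - system drives everywhere, under all conditions",
}

for level, description in SAE_LEVELS.items():
    print(f"Level {level}: {description}")
```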
Fig. 3 The SAE has defined six levels of vehicle automation, from low-level automated warnings or assistance features to the fully autonomous vehicle that can drive everywhere in all conditions. (Image attribution: SAE International)
The most daunting task, however, is the verification and validation of these self-driving systems to ensure their safety and reliability under all conditions. A level 5 AV will need to operate safely in rain, snow, hail and more, at all times of day and in all traffic conditions. It is therefore no surprise that verification and validation of a level 5 AV is expected to require the equivalent of several billion miles of roadway testing. This is due, in large part, to the nearly infinite number of potential corner cases that an AV may encounter in the real world. The automated driving system must be prepared to navigate such cases while protecting the safety of the human passengers inside. It is the challenge of verifying such a system that causes such skepticism around the possibility of a true, level 5 AV. And yet, research and development of AVs and associated technologies (such as artificial intelligence) is ongoing, and not just in the background. AV development continues to generate excitement across both the automotive and technology worlds. We have seen traditional automotive companies, startup carmakers, and some of the largest tech companies in the world all investing in the promise of AV technology, hoping to establish leadership positions in this new space and thus a hand in what is expected to be a multi-billion-dollar pie.
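The scale of that testing burden can be illustrated with a standard reliability-demonstration calculation (an illustrative aside, not a figure from this chapter): assuming failure-free driving, demonstrating with confidence C that the critical-event rate is no worse than p events per mile requires roughly n ≥ −ln(1 − C)/p miles.

```python
# Back-of-the-envelope illustration (a standard reliability-demonstration argument,
# not a result from this chapter): miles of failure-free driving needed to claim,
# with confidence C, that the failure rate is no worse than p events per mile.
import math

def miles_required(p_per_mile, confidence):
    # P(0 events in n miles) ~= exp(-n * p)  =>  n >= -ln(1 - C) / p
    return -math.log(1.0 - confidence) / p_per_mile

# e.g. demonstrating at most 1 critical event per 100 million miles at 95% confidence
print(f"{miles_required(1e-8, 0.95):,.0f} miles")   # roughly 300 million failure-free miles
```

Even this simple bound yields hundreds of millions of failure-free miles for a single modest claim; comparing statistically against human driver performance, or re-validating after every software change, is what pushes the total toward billions of miles and makes simulation-based validation indispensable.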
Key Points • Autonomous vehicle technology is still under development. Even so, many experts expect that connected, self-driving vehicle technologies will eventually change the way that consumers use or access private mobility. • Yet, other experts believe that automated vehicles will always have some constraints on their operation, such as geofencing or limitations around the types of roads that may be used. • The most daunting task on the path to full autonomy is the verification and validation of these self-driving systems to ensure their safety and reliability under all conditions. This will require billions of miles of testing.
4 Recap – Industry Trends Drive Large-Scale Disruption Before moving on, let's quickly summarize the dynamics that have been taking shape in the automotive industry. A convergence of large-scale trends is driving an increased reliance on software and electronic components to enable vehicle features and functionality. As software and electronics are integrated into new vehicles, complexity in all areas of the vehicle lifecycle increases. This is due partially to greater sophistication of discrete vehicle systems, and partially to the greater degree of cross-domain integration required to create such advanced systems. Meanwhile, development cycles are accelerating to support demand for new driver assistance or other convenience features. All in all, automotive OEMs are faced with the task of creating ever more advanced vehicles on shorter timelines, while maintaining an exacting degree of quality, reliability, and safety for customers. In other words, automotive OEMs are experiencing pressure on multiple fronts, all at the same time. And in the face of these new pressures, traditional approaches to vehicle definition, design, testing, production, and maintenance will simply not suffice for the demands of tomorrow's automotive market. In these approaches, engineering domains and functional groups are often separated and work in silos, each isolated from the other domains over most of the vehicle development process. This results in a restricted, sometimes ad hoc flow of information between the domains, limiting collaboration efforts and preventing any sort of cross-domain product engineering. Even worse, it prevents coherent management of engineering and product data, such as CAD files, test results, product requirements, or configurations, ensuring that tracing such data over the vehicle lifecycle is onerous, or even impossible. Instead, automotive companies need to rethink how they develop vehicles for a new generation. They need to remove the silos that separate engineering domains and functional groups (such as design and manufacturing). They need to ensure data is managed and flows easily throughout the organization, the product lifecycle, and even the supply chain. And they must adopt software development and electronics design as core competencies, and integrate them within the overall vehicle
Fig. 4 The comprehensive Digital Twin includes and supports the numerous lifecycle phases and respective models of actual product and process behavior
development lifecycle, ensuring they are considered from initial definitions through the manufacturing and in-field service of vehicles. Success in this demanding new environment requires breaking down the barriers between various engineering and process domains to design and manufacture the product. This shift is accelerating the transformation of companies into digital enterprises that share data and collaborate in the design, manufacture, and deployment of products and processes. It is critical for automakers to embrace this change and digitalize the entire vehicle design, development, validation, manufacturing, and utilization lifecycle. Digitalization can help to address the immediate challenges automotive companies face today. But it also provides the foundation for growth and success tomorrow. The automotive companies that embrace the advances in digital engineering, simulation, and lifecycle management solutions will be better prepared to overcome the challenges of advanced vehicle development, to adopt new models of vehicle sales and support, and to take a leadership position in the automotive market. At the heart of this transformation is the concept of a comprehensive Digital Twin of the vehicle, covering every aspect of the vehicle and its associated production processes, over its entire lifecycle, summarized in Fig. 4. The Digital Twin is a way to address the challenges of complex vehicle system development by building a set of highly accurate models that help predict product behavior during all lifecycle phases of the vehicle. The comprehensive Digital Twin is an evolution of this concept, supporting model-based design of the product and production process, integrated manufacturing operations management, and a cloud-based data analytics feedback loop from the product-in-use back into the Digital Twin. Such a Digital Twin becomes the backbone of product development − capable of delivering greater insight, reducing development cycle time, improving efficiency, and increasing market agility. Applied to the automotive industry, the comprehensive Digital Twin enables companies to take a truly holistic approach to the design, engineering, testing, production, sales and aftersales service and support of their vehicles. This includes a
data-driven, closed-loop development, production, and deployment process encompassing everything from the embedded electronics devices within the vehicle to the infrastructure and systems within which the vehicles operate. Such powerful capabilities will only become more important as the industry moves towards increasing levels of vehicle automation and system integration. Furthermore, a comprehensive digital approach can bring people, technology, and processes together, enabling more flexible and agile processes. To illustrate this approach, the following sections will offer examples of the transformative power of the Digital Twin in some of the major domains and processes of vehicle design, manufacturing, and service. Key Points • Automotive OEMs must create advanced vehicles on shortening timelines, while maintaining an exacting degree of quality, reliability, and safety for customers. • Automotive companies need to rethink how they develop vehicles for a new generation. • A comprehensive Digital Twin enables companies to take a truly holistic approach to the design, engineering, testing, production, sales and aftersales service and support of their vehicles.
5 Vehicle Definition and Design In vehicle design and development, there is a growing emphasis on the software and electronic content of the vehicle, such that they are now at least as important a consideration as the powertrain, chassis, and other mechanical components. Moving forward, the trends of vehicle electrification and automation will drive further growth in the importance of electrical, electronic and software components to vehicle features and functionality, especially as automakers seek to capitalize on the opportunities of over-the-air feature updates and service-based business models. Of course, more traditional mechanical systems like the chassis, suspension, and vehicle body are no less important than they were fifty years ago. Cars still need to be reliable, comfortable on the road, attractive to the eye, incredibly safe for the occupants and, at least for now, engaging to drive. Therefore, the challenge of modern automotive design is not just about the creation of the most advanced sensor system or largest-capacity battery, but it is also about the seamless integration of the many disparate domains, engineering disciplines, and technologies required to create a competitive, compelling package for the consumer. The Digital Twin will act as the foundation for the collaborative, integrated approach demanded by modern vehicles, bringing the mechanical, electrical, electronic, and software domains together to design a complete system. In the mechanical realm, today's CAD solutions offer robust modeling environments that support designers from initial 2D layouts through 3D models, refinement, and detailed
renderings of complete designs. Modern mechanical CAD solutions are also tightly integrated with adjacent engineering domains, such as electrical systems engineering, computational fluid dynamics, and more. As a result, designers can quickly create and evaluate mechanical component or system designs within a broader, vehicle-level context. For example, a mechanical engineer can leverage electrical system data to ensure proper space reservations for electrical wiring within the vehicle. Looking further into the future, we expect AVs will depend on an array of thirty or more sensors within the vehicle body to perceive the environment and thus navigate it safely. Design goals for autonomous vehicle sensors are centered on reducing size and cost without sacrificing high resolution, range, or reliability in all weather and vehicle conditions. Engineers also must ensure the signal and power integrity of all on-board processors and electronic control units (ECUs) while balancing device cooling with power consumption. An integrated electronics, thermal, electromagnetic, and mechanical simulation workflow offers a system-level perspective of the sensor and its physical housing, making it much easier for engineers to meet their design goals. Such a workflow can ensure that the physical packaging and placement of sensors offers sufficient cooling for the electronics components inside. In addition, designers can analyze the interaction of rain or dirt with the aerodynamics of the vehicle and placement of sensors, ensuring these systems remain functional in adverse conditions. Generative design capabilities enable mechanical designers to conduct highly efficient design explorations (in which multiple alternatives are generated for analysis and selection), or component optimization to, for example, maximize strength while minimizing weight. Recent advances have even allowed these generative design tools to conduct multi-disciplinary optimization, ensuring the relative advantages and disadvantages of each design alternative are understood at a high level. Generative design capabilities should prove exceptionally powerful as automakers experiment with new weight reduction approaches, materials, and manufacturing methods, such as additive manufacturing. Generative design tools will be able to create component geometries optimized for these new methods, ensuring the greatest performance while minimizing weight, cost, or other key measures. The trends of electrification, autonomy, and connectivity are having an even more dramatic impact on the electrical and electronic (E/E) architecture and electrical systems design process. The definition and design of the E/E systems in modern vehicles has rapidly grown to become one of the central aspects of vehicle development, especially as OEMs come under pressure to deliver innovative automotive E/E technologies to market at breakneck pace. The Digital Twin connects E/E systems design, validation, verification, compliance, manufacturing and utilization into the overall closed-loop vehicle development and optimization flow. Collaborative and integrated electrical engineering software underpins the electrical Digital Twin, allowing it to extend throughout the lifecycle of E/E development and connect with the other domains, such as mechanical and software engineering. This includes initial definition through realization and the actual use of the product in the field. With
these capabilities, companies can manage complexity and accelerate their development cycles to stay ahead of the pace of E/E innovation in the automotive industry. E/E systems development solutions have grown in power and sophistication along with the vehicles they are used to create. These solutions have been developed with a set of core principles to support an integrated and end-to-end flow for electrical systems and wire harness engineering as part of a digitalized vehicle development lifecycle. These core principles are: • Data coherency – data should be created, managed, and integrated throughout both vehicle development and the rest of the enterprise. • Advanced automation – generative design and built-in metrics help engineers to quickly create, evaluate and optimize system designs. Change management, design rule checks and documentation are also automated to ensure the creation of accurate and optimized electrical systems. • Built to integrate – advanced electrical systems engineering solutions are built to integrate with a variety of applications including mechanical CAD systems, PLM, ALM, ERP and more. These integrations ensure a direct flow of data between engineering disciplines and business units. Underpinning each of these principles is a powerful and capable data management layer that supports all facets of the design lifecycle. This ensures end-to-end traceability throughout all disciplines of design. The result is correct-by-construction outputs at each stage of E/E system development that automatically feed the next level of design. Not only does this vastly accelerate time to market, but it also ensures a robust digital thread is maintained throughout the entire engineering process. For example, the vehicle E/E architecture can be generated from a set of functional models and used to feed downstream design phases. Logical systems are generated and then input to drive the design of electrical systems and a communication architecture that feeds the network and software designs. This design data can even be used to model plants and factories to aid in the generation of work instructions and process aids. Finally, one of the biggest changes in the automotive industry is the heightened significance of semiconductor devices (i.e., microprocessors and other computer chips). Semiconductors have been present in automotive bills-of-materials since at least the early 1970s, when computerized engine controls became an industry standard to meet new emissions regulations. Traditionally, these devices were tasked with performing a small number of rudimentary functions, and were generally a commodity sourced from outside suppliers, rather than being developed in-house. Today, with key features and functionality increasingly enabled via software, onboard computing devices are in greater demand than ever. And as infotainment, connectivity, ADAS and other software-enabled features increasingly drive buying decisions, many experts expect these chips will become the critical brand differentiators in the automotive industry. Not to be left behind, legacy automotive manufacturers have begun to invest in the creation of custom semiconductor devices (even of
the highly complex system-on-chip, or SoC, variety) to power their future vehicle platforms. The automotive Digital Twin must therefore extend to the realm of semiconductor design and verification. This entails the incorporation of electronic design automation (EDA) software into the fold, including tools for integrated circuit (IC) and SoC design, functional verification, physical verification, and test solutions for both analog and digital chips. Such a portfolio will enable automotive companies to create powerful SoCs that are energy efficient and highly reliable over extended service lives in harsh conditions. Of course, these chips must also be developed with respect to the overall vehicle system and the specific functions they will perform. A chip intended to run infotainment software will have dramatically different requirements to one that will host ADAS or automated driving functions, for instance. With modern solutions, custom chip designs can be emulated and tested with real software code before any chips are fabricated. Furthermore, the emulated chips can be fed synthetic inputs from simulated sensors, mechanical and mechatronic systems, traffic scenarios and more, enabling early verification and validation of chip designs in realistic driving conditions. Key Points • The challenge of modern automotive design is in the seamless integration of many disparate domains, engineering disciplines, and technologies required to create a competitive, compelling package for the consumer. • The Digital Twin will act as the foundation for the collaborative, integrated approach demanded by modern vehicles. • The automotive Digital Twin must extend to the realm of semiconductor design and verification as silicon devices become more crucial to advanced vehicle capabilities.
6 The Digital Twin Fast Tracks Automotive Manufacturing Digitalized vehicle definition and design processes result in the creation of a product Digital Twin; the product being the vehicle in this case. Next in line is manufacturing and production planning, which can also realize immense benefits through digitalization. Combining Digital Twins of the product and production processes can bridge the gap between design and manufacturing, merging the physical and digital worlds. These Digital Twins capture physical asset performance data from products and factories in operation that can be aggregated, analyzed, and integrated into product design as actionable information, creating a completely closed-loop decision environment for continuous optimization. Such a comprehensive Digital Twin enables manufacturers to plan and implement manufacturing processes for new lightweight designs and modular vehicle platforms while reducing the costs of production and coordinating across deep
supplier ecosystems. This approach will not be optional but required for automotive companies as they transition into the dynamic and fast-paced future of their industry. Let’s examine how the Digital Twin helps solve each challenge.
6.1 Lightweight Designs The integration of new materials into vehicle architectures is key to many manufacturers’ strategies for reducing the weight of vehicles while maintaining vehicle safety. These new materials, however, introduce new manufacturing constraints. For example, including aluminum and carbon fiber in the construction of vehicle bodies requires the adoption of new joining technologies. Moreover, most vehicles will contain a mixture of traditional and new materials, with lightweight materials being employed in strategic locations. This means that new materials will join with conventional product components, requiring an additional level of analysis and verification to ensure structural integrity. A Digital Twin of the production process enables engineers to evaluate multiple methods of joining vehicle components, including joining technology and tool orientation, to identify the most accurate and efficient process. For instance, laser welding requires high accuracy especially when dealing with complex component geometry. A key challenge is achieving a smooth and continuous welding seam, without splitting the seam into multiple segments. Using digital manufacturing tools, engineers can build a simulation of the product components and the robotic welding tool, as in Fig. 5. Then, a programmer can quickly define a welding seam on the product geometry that accounts for robot collision constraints and configuration to produce a single welding seam that will maximize the strength of the joint. In addition to new materials, manufacturers are achieving lighter component weights using exciting new manufacturing technologies. Advanced technologies
Fig. 5 The Digital Twin enables manufacturing engineers to design, simulate, verify, and validate various production methods, such as welding processes, in the virtual world
like additive manufacturing (AM) can contribute to the reduction of vehicle weight by enabling the production of more complicated component geometries. Essentially, AM simplifies the process of building complex parts, thus making more complex geometry feasible. AM allows engineers to reimagine component design to expand capabilities and improve performance while reducing material usage and weight. AM also empowers companies to re-invent manufacturing by eliminating tooling, castings, and molds, and by reducing the number of manufactured components to simplify processes.
Yet AM is still in the pilot phase due to some persistent challenges. AM is not just a faster or more flexible version of conventional manufacturing; it is an altogether new approach to production. AM requires specialized equipment and unique processes that may not be immediately compatible with conventional production lines. Manufacturers must determine how to integrate these specialized processes with conventional manufacturing methods and tools. Current AM technology will also struggle to meet the volumes typical of automotive manufacturing. Thus, improving the scale of AM processes will be another challenge for automotive manufacturers moving forward.
A comprehensive Digital Twin facilitates the industrialization of additive manufacturing by unifying product design, manufacturing design, and actual production. With advanced product design and simulation tools, engineers can evaluate component designs and optimize them for AM from the beginning. Generative design and topology optimization tools will also prove critical for creating new component geometries that achieve an optimal balance of weight, material usage, strength, and other performance indicators. The component can then be validated using advanced materials simulations and prepared for the printing process. This preparation includes printing orientation and support structures, as well as slicing, hatching, and printing simulations. These solutions can even perform post-processing and inspection on the virtual component to verify the component design and manufacturing process.
Such an end-to-end system can produce astounding results. AM has become a major piece of Ford's manufacturing ecosystem. One of their AM applications, according to Ford, has the potential to save the company more than $2 million [2]. Other major automotive manufacturers are investing in AM as well. BMW recently announced a project to further integrate AM into its vehicle production. The company expects its new AM lines to reduce manual processes from 35% to 5%, cutting the cost of metal components in half [4].
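To make the slicing step mentioned above more concrete, the short sketch below shows the core geometric operation behind most build-preparation tools: intersecting the triangles of a component mesh with horizontal build planes. The mesh data and layer thickness are assumptions chosen purely for illustration; production AM software layers hatching patterns, support generation, and process simulation on top of this basic step.

# Toy slicing sketch: intersect a triangle mesh with horizontal planes (assumed data).
# Each triangle is a tuple of three (x, y, z) vertices; slice planes are spaced
# by the layer thickness. Real slicers add hatching, supports, and process checks.

def slice_triangle(tri, z):
    """Return the 2D segment where a triangle crosses the plane at height z, or None."""
    points = []
    for (x1, y1, z1), (x2, y2, z2) in zip(tri, tri[1:] + tri[:1]):
        if (z1 - z) * (z2 - z) < 0:  # this edge crosses the slice plane
            t = (z - z1) / (z2 - z1)
            points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(points) if len(points) == 2 else None

def slice_mesh(triangles, layer_thickness):
    """Collect contour segments for every layer mid-plane through the mesh."""
    z_min = min(v[2] for tri in triangles for v in tri)
    z_max = max(v[2] for tri in triangles for v in tri)
    layers, z = {}, z_min + layer_thickness / 2
    while z < z_max:
        layers[round(z, 4)] = [s for tri in triangles if (s := slice_triangle(tri, z))]
        z += layer_thickness
    return layers

if __name__ == "__main__":
    # A single tetrahedron standing in for a bracket geometry (hypothetical).
    tet = [((0, 0, 0), (10, 0, 0), (5, 8, 0)),
           ((0, 0, 0), (10, 0, 0), (5, 4, 12)),
           ((10, 0, 0), (5, 8, 0), (5, 4, 12)),
           ((5, 8, 0), (0, 0, 0), (5, 4, 12))]
    for z, segments in slice_mesh(tet, layer_thickness=2.0).items():
        print(f"z = {z}: {len(segments)} contour segments")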
6.2 Modular Platforms Demand Flexible Production In concert with electrification initiatives, many OEMs are adopting modular vehicle platforms that will undergird their future vehicle lineups. Vehicle assembly processes will also need to shift towards a more modular build environment as a result. Assembly methodologies, processes and tooling will evolve to support these modular build scenarios that can quickly adapt to market conditions. Manufacturing
planning and operations management must be digitalized to become more agile and integrated in support of these more flexible manufacturing approaches. Leveraging a Digital Twin of the product, engineers can evaluate manufacturing methods virtually, analyzing multiple tools, assembly sequences, and production line configurations while identifying and resolving issues.
Vehicles contain hundreds of parts that need to be assembled. A planning team defines assembly processes that identify the tools and equipment needed to assemble each product, and the sequence in which this assembly should occur. Advanced process planning solutions help planners allocate vehicle parts to new assembly processes and can identify parts that have yet to be processed. Furthermore, these solutions can access libraries of processes to reuse proven process knowledge such as assembly standards, time estimates, quality checks, and more. This reduces the time required to create high-quality assembly process plans, enabling rapid response to product or production changes.
Next, each planned assembly process can be distributed within the context of the manufacturing facility constraints, enabling planners to define and validate the assembly sequence. Planners can select tooling based on the product requirements from tool libraries, driving standardization. They can also assign checks for tool reach and access or ergonomic feasibility to align the process with manufacturing standards. An integrated process simulation environment enables a manufacturing engineer to load the Digital Twin of an assembly sequence to perform static and dynamic checks for tool collision and other manufacturability constraints. The results of the simulation can be captured and attached to a manufacturability check, which can then be uploaded to a product lifecycle management (PLM) solution. The PLM solution can then use the manufacturability checks to publish a dashboard showing the adherence of processes to manufacturing standards at the station, line, or factory level to provide an estimation of process maturity. As these processes are developed, it will become crucial for manufacturers to capture as much data and knowledge as possible from each implementation to assist production ramp-up in other facilities.
The benefits of digitalization also extend to the factory level. With digital production planning and simulation tools, planners can evaluate production line configurations and entire factory layouts to optimize manufacturing operations (see Fig. 6). Industrial engineers create a virtual model of the manufacturing facility to define and optimize factory-specific operations. To begin, the engineers can access defined assembly processes directly from the PLM solution to insert into the factory floor plan. These assembly processes form the "building blocks" with which the engineers will define a production line or factory layout. Next, the engineers plan and evaluate each production line. Each of these lines may produce several vehicle models and variants. The engineers can examine and validate the variation in work content in any given workstation for each vehicle model or variant. Furthermore, the engineers can perform line-balancing analysis to ensure that workstations and operators are neither under- nor over-worked. The engineer can then quickly reconfigure the allocation of operations to workstations to resolve issues or improve performance.
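For a sense of what a basic line-balancing calculation involves, the sketch below assigns operations to workstations with a simple greedy, largest-candidate heuristic so that no station exceeds the takt time. The operation names and times are hypothetical, and the heuristic is a minimal illustration of the analysis described above, not the algorithm used by any particular planning or simulation product.

# Minimal line-balancing sketch (assumed data, greedy largest-candidate heuristic).
# Each operation has a work content in seconds; takt_time is the maximum work
# content allowed per workstation for the target production rate.

def balance_line(operations, takt_time):
    """Assign operations to workstations so no station's load exceeds takt_time."""
    stations = []  # each station is a list of (operation, seconds) tuples
    # Place the largest operations first (largest-candidate rule).
    for name, seconds in sorted(operations.items(), key=lambda kv: -kv[1]):
        if seconds > takt_time:
            raise ValueError(f"Operation '{name}' ({seconds}s) exceeds the takt time")
        for station in stations:  # first station with enough remaining capacity
            if sum(t for _, t in station) + seconds <= takt_time:
                station.append((name, seconds))
                break
        else:
            stations.append([(name, seconds)])  # open a new workstation
    return stations

if __name__ == "__main__":
    # Hypothetical work content for one vehicle variant (seconds per operation).
    ops = {"fit door seals": 55, "mount seats": 80, "route harness": 70,
           "install cockpit": 90, "glazing": 60, "torque wheels": 40}
    for i, station in enumerate(balance_line(ops, takt_time=120), start=1):
        load = sum(t for _, t in station)
        print(f"Station {i}: {load:>3}s  {[name for name, _ in station]}")

Comparing each station's load against the takt gives the same under- and over-loading picture described above, and re-running the heuristic after a product change illustrates how quickly allocations can be revisited.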
Fig. 6 Modern manufacturing engineering and simulation tools can model entire production facilities to design, test, and optimize production lines, the layout of the entire facility, and logistics and material delivery
Finally, engineers can leverage the virtual factory and production line models to plan and optimize factory logistics and material delivery. With consideration of the rate of production, planners can identify material delivery locations and review delivery routes and corridors. Automated guided vehicles (AGVs) can be simulated in the context of the factory layout to ensure proper functionality. Engineers can even virtually commission control logic for automated systems based on the simulations. This enables the engineers to ensure that material is delivered where it is needed, and when it is needed to prevent production delays. Key Points • Combining Digital Twins of the product and production processes can bridge the gap between design and manufacturing, merging the physical and digital worlds. • A comprehensive Digital Twin enables manufacturers to plan and implement manufacturing processes for new lightweight designs and modular vehicle platforms while reducing the costs of production and coordinating across deep supplier ecosystems.
7 Digital Twin Enables Early Verification With complexity on the rise, the verification and validation of vehicles and individual sub-systems is becoming one of the foremost challenges in the development lifecycle. Growing electrical, electronic, and software content are the primary
drivers of complexity growth, but vehicle verification and validation must also account for the cross-domain nature of new vehicle features and systems. Verifying an automated emergency braking system, for instance, requires testing electronic hardware, software, and electromechanical components to ensure the entire system functions reliably and meets design intent. What is troubling for automakers is that the complexity of vehicles is growing so much that physical testing and verification programs are becoming exceedingly costly and lengthy.
Instead, companies should incorporate simulation into their verification and validation programs. Modern simulation solutions support full vehicle verification and validation, from various environmental and traffic conditions right down to the individual sensors, electronic control units, software, and computational units. These solutions allow companies to test various systems and even entire vehicles in virtual environments before committing to the expense of prototyping and physical certification. Such virtual testing is much faster and less expensive than physical testing, making it ideal for early system verification and validation. Simulation can also be used to identify and test vehicle performance under extreme conditions. When paired with digital design and engineering environments, learnings from these vehicle simulations can be quickly integrated back into the system designs.
These simulation environments will become even more important with rising levels of vehicle automation. The number and variety of sensors used in AV systems continue to increase while the electrical and electronic (E/E) architectures are also centralizing, allowing for a high degree of flexibility in design choices. Added flexibility, however, also means greater complexity during the design of AV hardware and software. Here, high-fidelity simulation of the vehicle Digital Twin enables multi-attribute optimization (e.g., performance vs. energy usage and cost) in various vehicle domains, which helps engineers to make optimal design choices. For example, the combination of a virtual driving environment, complete with realistic traffic scenarios and physics models, and synthetic sensor data can simulate the performance of the sensor system and vehicle dynamics in a virtual world. These simulations provide crucial information for the development of the sense-think-act algorithms that enable AVs to navigate independently and can also provide multiple levels of fidelity for the sensor simulations to match the needs of the design activity being undertaken. Physics-based raw sensor simulations can inform sensor and perception algorithm design, such as using probabilistic and ground-truth models to design the sensor-fusion and path-planning algorithms. Eventually, high-fidelity vehicle dynamics and various other specialized simulation models (such as for tire compounds) can be used to develop the high- and low-level controllers that direct the acceleration, deceleration, steering, and other functions of the vehicle.
Verification complexity also extends down to the component level, as engineers must evaluate and optimize component performance based on thermal, electromagnetic, or other constraints. For instance, ADAS/AV sensors are often in challenging locations on the vehicle and potentially vulnerable to external conditions. Cameras are located behind the windscreen, where proper heat management is critical to guaranteeing optimal lifetime performance.
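Before turning to further component-level examples, here is a minimal sketch of the kind of probabilistic sensor model mentioned above: ground-truth object positions from a traffic scenario are perturbed with range-dependent Gaussian noise and random dropouts to produce synthetic detections for early sensor-fusion and path-planning work. The noise parameters and object list are illustrative assumptions, not values taken from any specific simulation tool.

# Minimal probabilistic sensor model (assumed parameters): perturb ground-truth
# object positions with range-dependent Gaussian noise and random dropouts.
import math
import random

def simulate_detections(ground_truth, noise_per_meter=0.01, dropout_prob=0.05,
                        max_range=150.0, seed=42):
    """Turn ground-truth (x, y) object positions into noisy synthetic detections."""
    rng = random.Random(seed)
    detections = []
    for x, y in ground_truth:
        distance = math.hypot(x, y)
        if distance > max_range or rng.random() < dropout_prob:
            continue  # object missed: outside sensor range or random dropout
        sigma = noise_per_meter * distance  # measurement noise grows with distance
        detections.append((x + rng.gauss(0, sigma), y + rng.gauss(0, sigma)))
    return detections

if __name__ == "__main__":
    # Hypothetical ground truth from a simulated traffic scenario (meters).
    scene = [(12.0, 1.5), (45.0, -3.2), (88.0, 0.4), (160.0, 2.0)]
    print(simulate_detections(scene))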
Another example is the location of radar and vehicle communication antennas. Highly detailed electromagnetics
simulations can help optimize the location of these components for maximum antenna gain patterns, which will only become more important as vehicles become connected. External influences like rain and dirt soiling can also degrade the accuracy of sensors.
Modern simulation solutions are built to address these component-level challenges. 3D and system-level CFD simulation packages enable the design and evaluation of heat management systems for a variety of components, including sensors and ECUs. These simulation tools also offer additional capabilities that enable the optimal design of sensor packaging and placement on the vehicle to guarantee performance over the lifetime of the vehicle. These capabilities include the ability to simulate how water (such as road spray or rain) will interact with the exterior of the vehicle, and thus how it may affect sensor performance, as shown in Fig. 7. Through the combination of electronic, mechanical, and functional simulation at the chip, component, and vehicle levels, automotive companies can construct a complete virtual version of vehicles and subsystems, allowing for early design verification and validation.
Of course, some real-world testing and data capture will always be necessary, particularly in the realm of ADAS and automated driving systems. Real-world tests generate terabytes of valuable data, offering a direct view into vehicle performance under various test conditions. It is critical that engineering teams can make the most of this data in the shortest time possible to ensure vehicle programs proceed on pace. Commercially available combined hardware and software solutions can collect and manage massive amounts of ADAS sensor data from real-world prototype vehicles. Typically, the amount of collected data is so large that it can only be retrieved from the test vehicles on hard disks. These hard disks are then taken to processing stations, where the data is uploaded to the cloud for further analysis by teams around the world. To streamline this flow, automakers can employ software that runs both on the edge devices in the vehicles and in the processing stations to
Fig. 7 Advanced simulation software enables designers to investigate how water and other environmental factors interact with the exterior of the vehicle
perform an initial analysis on the data. This first-pass analysis distinguishes between data that is of interest or relevant to the current objective ('hot' data) and data that is less interesting ('cold' data). This allows the team to make informed decisions on what data should be stored, where it should be stored, and when it should be uploaded. This first step can save companies significant data transmission and storage costs while maintaining the full value derived from the test drive data.
Data is automatically labeled during initial processing to enable easy search and retrieval. The labeling of the data can be done by proprietary algorithms, third-party algorithms, or both to provide full flexibility and extensive search capabilities. Robust searching capabilities, including natural language processing, can help users quickly locate the data they need. For instance, using natural language processing, the user can request only data from scenarios in the rain where the car must abort a left turn due to an approaching bicycle, causing an uncomfortable maneuver. These search functions allow engineers to parse immense amounts of data and leverage it to improve ADAS and AV systems. Scenarios without any issues (so-called 'happy flow scenarios') can be used to train the neural networks, which act as the 'brain' of the AV, to navigate the driving environment. Meanwhile, critical scenarios (with undesirable behavior) can be used to find the boundaries of the designed system, for verification and validation purposes.
The captured data becomes even more powerful when converted to the virtual domain, as it can enrich simulations and lead to a much higher-fidelity Digital Twin. Scenarios captured during real-world testing can be imported into the simulated vehicle environment. Once in the simulation environment, the engineering team can change all parameters of the scenario as needed, providing unlimited capabilities for design optimization and testing. For instance, a real-world test can be virtually repeated in different weather and daylight conditions to quickly test the vehicle system performance.
Key Points
• Rising complexity is making verification and validation of vehicles and individual sub-systems one of the foremost challenges in the development lifecycle.
• Modern simulation solutions support full vehicle verification and validation, from various environmental and traffic conditions right down to the individual sensors, electronic control units, software, and computational units.
• Some real-world testing and data capture will always be necessary, particularly in the realm of ADAS and automated driving systems. Combined hardware and software solutions can collect and manage massive amounts of ADAS sensor data from real-world prototype vehicles to enable efficient analysis of testing data.
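As a minimal, hedged sketch of the hot/cold data triage described in this section, the snippet below sorts labeled recording segments by whether they carry tags of interest. The tag names and the simple rule are assumptions for illustration only; real edge software applies far richer, often learned, criteria before deciding what to upload.

# Toy first-pass triage of recorded drive segments into 'hot' and 'cold' data.
# Segment tags would normally come from automatic labeling; here they are assumed.

HOT_TAGS = {"hard_braking", "aborted_turn", "near_collision", "sensor_fault"}

def triage(segments):
    """Split labeled recording segments into hot (upload now) and cold (archive)."""
    hot, cold = [], []
    for segment in segments:
        (hot if HOT_TAGS & set(segment["tags"]) else cold).append(segment)
    return hot, cold

if __name__ == "__main__":
    recordings = [
        {"id": "seg-001", "tags": ["rain", "aborted_turn", "cyclist"]},
        {"id": "seg-002", "tags": ["highway", "cruise"]},
        {"id": "seg-003", "tags": ["hard_braking", "night"]},
    ]
    hot, cold = triage(recordings)
    print("upload now:", [s["id"] for s in hot])
    print("archive locally:", [s["id"] for s in cold])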
8 Extending the Digital Twin Through Cloud-Based Analytics
Integrated design and engineering solutions are only one piece of a comprehensive digitalization strategy. Such solutions help companies by enabling engineering domains to connect and contribute to a single digital thread that keeps teams synchronized throughout the product development lifecycle. This digital thread, however, does not extend through to the vehicle or other assets in the field. As vehicle complexity and market competition continue to grow, closing the loop between design, engineering, manufacturing, and real-world performance will be crucial to success. This can be accomplished with cloud-based internet of things (IoT) data analytics, particularly as more and more assets are networked.
Cloud-based data analytics enables automotive companies to gain intelligence from connected vehicles, manufacturing assets, and infrastructure, often in near real-time. Companies can then use this near real-time intelligence to evaluate and improve product design and manufacturing methods, leading to higher production efficiency, optimized product design, and more. A cloud-based IoT ecosystem facilitates this closed loop by providing a connected analytics environment for the digital enterprise (Fig. 8). This ecosystem enables responsive, even proactive engineering, real-time product and manufacturing optimization, asset monitoring, and more. Cloud-based data analytics blends the physical and virtual worlds to bolster the comprehensive Digital Twin with real product and manufacturing data.
Fig. 8 IoT technology allows companies to monitor and manage various assets remotely, such as manufacturing machines and robots
Rather than building a proprietary system, automotive companies have the option of working with technology partners that supply operating systems specially designed for IoT data capture and analysis. These operating systems not only help with data processing but can also make it easier to connect various assets to the cloud, despite a lack of standardized protocols or methods, through open and extensible architectures that support various communication standards. Furthermore, the availability of a wide range of programmable logic controllers and other edge devices ensures that even older assets can be connected to the cloud. Such a cloud-based, open IoT OS enables automotive manufacturers to link their machines and physical infrastructure to the digital world easily and economically. Automakers can then harness data from virtually any number of connected intelligent devices, enterprise systems, and federated sources to allow for the analysis of complex, real-time operational data. This analysis then leads to optimized processes, resource and productivity gains, the development of new business models, and the reduction of operations and maintenance costs. As discussed in the previous section, this data can also be incorporated back into the vehicle design, testing, and validation process to identify and remedy potential problems early or to find opportunities for performance improvement and optimization.
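As one hedged illustration of what connecting an asset to such an analytics backend can look like at the smallest scale, the sketch below packages a machine telemetry record and posts it to an ingestion endpoint over HTTPS. The endpoint URL, authorization token, and payload fields are assumptions invented for the example and do not describe any particular vendor's IoT operating system.

# Hedged sketch of pushing machine telemetry to a cloud analytics endpoint.
# The endpoint URL, token, and payload fields are illustrative assumptions.
import json
import time
import urllib.request

INGEST_URL = "https://iot.example.com/api/telemetry"  # hypothetical endpoint

def build_payload(asset_id, readings):
    """Package one set of sensor readings from an asset with a timestamp."""
    return {"asset": asset_id, "timestamp": time.time(), "readings": readings}

def send(payload, token="REPLACE_ME"):
    """POST a single telemetry record; a real edge agent would batch, retry, and buffer."""
    request = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return response.status

if __name__ == "__main__":
    record = build_payload("weld-robot-07",
                           {"spindle_temp_c": 61.4, "cycle_time_s": 48.2, "fault_code": 0})
    print(json.dumps(record, indent=2))  # call send(record) once a real endpoint exists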
8.1 Collaboration and Supply Chain Management IoT ecosystems can also enhance collaboration and data sharing throughout the supply chain and enterprise. For example, advanced data lake and data interconnect technologies allow for the contextualization of groups of data from various suppliers, ERP systems, manufacturing sites, and more. Cross-tenancy functionality also facilitates the sharing of data between ecosystem participants in a secure and controllable fashion. For example, an automotive OEM can share asset configuration data with a vendor to ensure production efficiency. Furthermore, a cloud-based IoT OS can support integrated pricing, forecasting, and inventory management across plants. With such a comprehensive view of the supply chain, manufacturers can minimize risk and optimize for cost. Monitoring production status data in conjunction with inventory enables automated order placement and management. These capabilities can ensure that even the most complex supply chains operate efficiently and without interruption. As automotive OEMs ramp production across multiple production lines and facilities, they can also leverage the cloud to share knowledge and best practices between sites. This ensures that issues experienced at one facility are prevented as production comes online in additional facilities. Likewise, process improvements implemented in one facility are easily shared throughout the supply chain, facilitating optimized processes across the automotive enterprise.
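The automated order placement mentioned above can be reduced, in its simplest form, to a reorder-point check. The sketch below uses assumed demand, lead-time, and lot-size figures purely for illustration; it is not a description of any specific supply chain or ERP module.

# Minimal reorder-point sketch (assumed figures): order when projected inventory
# over the supplier lead time would fall below the safety stock.

def reorder_quantity(on_hand, on_order, daily_demand, lead_time_days,
                     safety_stock, lot_size):
    """Return how many units to order now (0 if current coverage is sufficient)."""
    reorder_point = daily_demand * lead_time_days + safety_stock
    projected = on_hand + on_order
    if projected > reorder_point:
        return 0
    shortfall = reorder_point - projected
    lots = -(-shortfall // lot_size)  # round the shortfall up to whole lots
    return int(lots * lot_size)

if __name__ == "__main__":
    qty = reorder_quantity(on_hand=1800, on_order=0, daily_demand=420,
                           lead_time_days=6, safety_stock=500, lot_size=250)
    print(f"Place order for {qty} units" if qty else "No order needed")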
Key Points • As vehicle complexity and market competition continue to grow, closing the loop between design, engineering, manufacturing, and real-world performance will be crucial to success. • Cloud-based data analytics enables automotive companies to gain intelligence from connected vehicles, manufacturing assets, and infrastructure, often in near real-time. • IoT ecosystems can also enhance collaboration and data sharing throughout the supply chain and enterprise.
9 Tomorrow’s Vehicle Designs Enabled Through Digitalization The capabilities offered via digitalization and the Digital Twin will empower automotive companies, including OEMs, suppliers, and startup manufacturers, as they design the next generation of passenger and commercial vehicles. These cars will be increasingly electrified and offer growing levels of automation, connectivity, and convenience for consumers. These enhanced vehicle features and systems will rely on an ever more complex underlying architecture of powerful computer chips, sensors, electromechanical subsystems, and electrical wiring. This complexity will be felt across the vehicle development lifecycle, from design to verification, testing, and production. In this chapter, I have attempted to offer a summary of the challenges being experienced throughout the automotive industry, as well as a look at the capabilities offered by widespread digitalization and the adoption of a Digital Twin. Digitalization is a necessary evolution for automotive companies. The industry is tipping towards a new era of vehicle design, with the most sophisticated and exciting capabilities just on the horizon. In this new era, new approaches to vehicle development will become necessary. Key to these new approaches will be the encouragement of cross-domain collaboration, data coherency throughout the product lifecycle, and the ability to capture and analyze data from testing and the field, creating a closed-loop. The specifics on how this may be achieved will vary from company to company, depending on the unique pain points and needs of each. My colleagues Dale Tutt and Tim Kinman conduct a deep exploration of how companies can embark on digital transformation and the creation of a Digital Twin in chapter 13 of this text. The key point I’d like to communicate to the reader is that strong partnerships with experts in the field of digital transformation will ensure success. Companies like Siemens have been working with customers from all industries to develop digitalization plans and to build powerful Digital Twins of products, production processes, and more. We have seen the benefits made possible by digital transformation and are ready to bring these to the automotive industry.
References
1. Csere, C. (2012, December 21). Tested: 2012 Model S takes EVs to a higher level. Retrieved from https://www.caranddriver.com/reviews/a15117388/2013-tesla-model-s-test-review/
2. Goehrke, S. (2018, December 5). Additive manufacturing is driving the future of the automotive industry. Forbes. Retrieved from https://www.forbes.com/sites/sarahgoehrke/2018/12/05/additive-manufacturing-is-driving-the-future-of-the-automotive-industry/#48a416db75cc
3. IEA. (2021). Global EV Outlook 2021. IEA. https://www.iea.org/reports/global-ev-outlook-2021
4. Jackson, B. (2019, April 17). BMW group kicks off project for serial automotive additive manufacturing. 3D Printing Industry. Retrieved from https://3dprintingindustry.com/news/bmw-group-kicks-off-project-for-serial-automotive-additive-manufacturing-153665/
5. Tesla Model S Review, Pricing, and Specs. (2021, January 11). Retrieved from https://www.caranddriver.com/tesla/model-s
Nand Kochhar is Vice President, Automotive and Transportation Industry at Siemens Digital Industries Software. Prior to joining Siemens, Nand spent over 28 years at Ford Motor Company in a variety of senior engineering executive roles, with extensive leadership experience in all areas of product development, manufacturing, digitalization, and simulation technology development and implementation. Nand has driven various business transformations across the Americas, Europe, and Asia Pacific, delivering significant cost savings and quality and timing improvements. This has given him the cultural awareness to lead diverse global teams to pioneer and deliver large-scale projects. At a time when organizations must become more agile, Nand thrives as a leader and change agent, ensuring that innovation, technology, digitalization, analytics, and systems thinking are at the forefront of business objectives.
Digital Twins in Shipbuilding and Ship Operation Russ Hoffman, Paul Friedman, and Dave Wetherbee
Abstract Discussions of Digital Twins in ship design, construction, and operation pervade current industry literature. Although computer modeling has been used for many years to develop and analyze discrete ship products and processes, twenty-first century data management tool capability and capacity offer increased opportunity to create and examine complex shipboard and ship enterprise digital systems. Digital capability is now available for widely dispersed teams to collaborate on and demonstrate not only ship system and subsystem performance, but also the efficacy of construction plans and sequences, the success of operational scenarios, and the prediction of requirements for maintenance activities. The 3D CAD models of the 1990s and early 2000s can now form the basis of comprehensive Digital Twins of entire shipbuilding fabrication and assembly facilities and complete ship designs, which can be used well beyond the design and construction phases into testing, training, operation, maintenance, and upgrade activities. This chapter presents a discussion of the current use and envisioned applications of Digital Twins in the various stages of ship design, planning, construction, operation, and eventual retirement. The entire ship life cycle is labeled the "Enterprise" and is subdivided into "Domains", being the phases from earliest concept development through end-of-life retirement, including Concept Formulation, Design, Manufacturing, Operation, Maintenance, and Disposal. Specific tasks, labeled "Use Cases", comprise each Domain and include "Development", "Verification", "Work
Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/978-3-031-21343-4_28. R. Hoffman Manager, Ship Concept Design, Bath Iron Works (Retired), Boothbay Harbor, ME, USA P. Friedman Manager, Advanced Technology Development, Bath Iron Works (Retired), Cumberland, ME, USA D. Wetherbee (*) Hull Outfit Naval Architect, Bath Iron Works (Retired), Bath, ME, USA © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_28
Instructions", and "Collaboration", as examples. The Domains and Use Cases are described along with the Digital Twins deployed and their enabling technologies. The chapter acknowledges and addresses the several barriers to Digital Twin implementation that have been encountered. Recommendations for prioritized deployment of digital tools and twins throughout the shipbuilding and ship-owning enterprise are offered. The types of twins and associated technologies enabling success are discussed and, most importantly, an implementation strategy and recommendations for industry leaders are presented.
Keywords Barriers · CAD · Digital Twin · Domain · Enterprise · Life cycle · Logistics · Naval · Ship · Simulation · Training · Use case
1 Introduction
In the latter half of the twentieth century, shipbuilding experienced a dramatic period of modernization that greatly improved production efficiencies and ship capabilities. Continued improvement depends upon more fully integrating the various shipbuilding functions to overcome inherent limitations of the industry's traditional siloed business model. The emergence of Digital Twin concepts and supporting technologies provides the compelling value proposition needed to transform the industry into a better-integrated process that is oriented around the ship life cycle. The well-documented introduction of technologies such as Computer Aided Design, Computer Aided Manufacturing, and Enterprise Resource Planning, and development strategies such as concurrent engineering, modular construction, and Just-In-Time supply provided immediate benefits to the implementing functions, but those incremental gains must be leveraged and integrated across enterprise silos. In short, the technological advances incrementally improved performance within silos but did not provide the transformative value that was envisioned. The build-out of Digital Twins and their enabling technologies is the breakthrough needed to fulfill the transformative promise.
The significant new capability that Digital Twins deliver is a compression of shipbuilding's traditionally long and cumbersome lifecycle. Prior initiatives that digitized the shipbuilding process provided a repository of reference data that was useful to the industry's highly iterative processes. The fidelity and breadth of Digital Twin models, however, extend those improvements to the entire life cycle in several critical ways. The Digital Twin concept is broad, unifying multiple enabling technologies and addressing numerous challenges. Chief among these Digital Twin innovations for shipbuilding are:
• Compression of the long shipbuilding/ship operation lifecycle for more informed decision making
• More effective risk management of product and project complexity
• Leveraging the convergence of design Information Technology and facility and ship Operational Technology
• Overall improvement of the product by enabling collaboration among stakeholders as requirements are developed and as new requirements emerge
Lifecycle Compression
The shipbuilding/ship operation life cycle, especially in naval shipbuilding, is long and ungainly due to the number of stakeholders and organizations involved in a shipbuilding project, the inefficiencies associated with managing vast amounts of information and data, and the incorporation of change during the project. The predictive capabilities of Digital Twin models can compress the shipbuilding timeline by providing insight into the future effect of current decision making. This compression allows better choices in real time to improve the performance of downstream builders, and ultimately users, and will transform the industry through greater efficiencies, better quality, more comprehensive functionality, and longer ship life.
Managing Complexity
Complexity is a dominant, defining feature in shipbuilding. Both the shipbuilding project and the ship itself exhibit tightly coupled interactions between constituent parts, nonlinear dynamics that are difficult to analyze, and emergent system behaviors that are difficult to predict. With Digital Twins, project and product complexity are controlled by more effective modular interfaces. The unpredictable and potentially disruptive nature of complexity can be constrained by clear, well-maintained interfaces between modules acting as "fire breaks", both as functional modules for the ship and task modules for the project. Digital Twins provide a much clearer and more accessible basis for understanding interfaces across silos, and for identifying and defining the most effective interfaces early in the project.
Leveraging and Integrating Technologies
Though not necessarily planned, shipbuilding is experiencing the effects of an emerging Industry 4.0 paradigm. Of value is the maturation of mobile computing and smart machines connected by the Internet of Things and Extended Reality applications. The combination of these technologies allows formerly separate information sources to merge into a complete picture. Models of product design and construction will be used in ship operations and maintenance. Simultaneously, knowledge of ship performance will flow back to designers and builders to make more informed decisions for future ships and ship classes.
Collaboration in Developing Requirements
Though the "Digital Twin" is a new concept to shipbuilders, elements of Digital Twins are deployed largely as discrete models of various types in every aspect of shipbuilding, from the earliest design stages through planning, manufacturing, and operation, to eventual disposal. In the early stages of design, the predominant activities are requirements development and ship system and subsystem engineering. Developing requirements and synthesizing a specification document, currently an activity completed by the navy customer, can be extended to the supply chain, including collaboration among designers, planners, equipment vendors, and builders, infusing their various expertise and progressive learning through the deployment of a Twin. As the Twin is developed, requirements can be honed, resulting in a more producible and better-performing ship product.
Digital Twins used to develop subsystems are largely analytical, mathematical models to predict the behavior of the ship as a total system. Three-dimensional Computer Aided Design (CAD) models are used to ‘build the ship in the computer’ developing the ship product model to provide a basis for production data and generating manufacturing instructions. Digital Twin derivatives are used to plan construction and model material flow through the shipyard, and other tools are applied to the models from earlier design phases to test and train ship operators and plan the eventual disposal of the ship asset. This chapter describes the added value of Digital Twins including their potential to transform the shipbuilding industry and offers proposals to accelerate their adoption. It begins with a description of the shipbuilding process and some of the unique characteristics that make it an unusually rich target of opportunity. The chapter proceeds through discussions of Digital Twin types and their applications, offers several recommendations and considerations for implementing Digital Twins, and concludes with several Use Cases to illustrate implementation benefits and challenges. The discussion defines the entire ship life cycle as the “Enterprise” and subdivides the Enterprise into “Domains”, being the phases of a ship’s life from earliest requirements development through end-of-life retirement. The distinction between the full Enterprise and individual Domains is important to the Digital Twin value proposition. While significant performance gains within each domain are identified, the fullest and most profound value of the Digital Twin concept presents itself at the Enterprise level. Domains covered in this chapter include Concept Formulation, Design, Shipbuilding, Operation, Maintenance and Disposal. Characteristic tasks for each Domain are labelled “Use Cases”, and include “Development”, “Verification”, “Work Instructions”, and “Collaboration”, as examples. The Domains and Use Cases are described along with the Digital Twins deployed, their enabling technologies, and barriers to usage in each. The goal of this chapter, as with this entire work, is to provide government and industry decision makers with a comprehensive approach to developing business cases and requirements for tools and processes to fully implement Digital Twin usage.
2 The Shipbuilding Process: Concept through Disposal The goal of this section is to summarize the shipbuilding business and the process of ship design and construction as a foundation for later discussion of the development and utilization of Digital Twin technologies. Discussions of the methods and processes are widely available, and the Society of Naval Architects and Marine Engineers (SNAME) and the American Society of Naval Engineers (ASNE) are excellent sources for in-depth literature on the subject. SNAME is traditionally focused on merchant or commercial ship design, construction, and operation, while ASNE generally covers the same for military, navy, and coast guard ships. Both maintain archives of technical publications, including papers and texts, many about Digital Twin applications, and will be referenced throughout this discussion.
The Shipbuilding Process
Ships are complex 'systems of systems' that require integration of many, often competing considerations. "For any particular set of requirements, there is an infinite number of combinations which yield the transport capability desired (i.e., capacity, deadweight, speed, and endurance)."¹ A ship design involves balancing those variables, usually through an iterative approach of development and checking, until requirements are satisfied. And since ship construction is almost always subject to competitive bidding among shipyards, the goal is to provide a solution that minimizes production labor and cost by involving shipyard mechanics in the design process.
Ship Design Challenges
Balancing the various considerations and requirements concerning mission effectiveness, capacity, arrangements, speed, and safety, to name but a few, to achieve a competitive result is the challenge of the ship design team. A successful design ultimately fits into a small box within the multivariable, highly interconnected solution space. For example, ship stability in a seaway is a driver of physical configuration in many design decisions. Where high speed would require a slender hull yielding a high length-to-beam ratio, stability requirements and regulations drive increases in beam, pushing that ratio in the other direction. The very earliest decisions have the greatest impact on the ultimate cost of the ship, so the selection of deck heights, as another example, is an important producibility consideration but must be balanced against stability. Driving the center of gravity upward by running distributed systems, piping, electrical, and ventilation, beneath deck beams, requiring increases to deck height, will facilitate production but will demand that the design team find a way to provide adequate stability. Balancing radar equipment height, always more effective high in the ship, against stability is a typical trade in warship design.
Design Phases
A ship design proceeds through several phases from early research and development of requirements, through initial concept definition, eventually to completion of all the details necessary to define the materials to be used, the quantities of each, their location in the ship, and the data needed to instruct mechanics and drive construction machinery. "The data for early stages come from past experience (sic), and the degree of detail increases as the design progresses."² Thus, generally, each phase of design builds on the previous, more finely defining the ship with each iteration. Today, various elements of Digital Twins are employed in each design phase by the several engineering and planning disciplines involved, as discussed later in this chapter, some augmented as the design progresses until a product model is
¹ D'Archangelo, Amelio M. Ship Design and Construction. Chapter 1, "Basic Design", E. Scott Dillon, Section 3.
² D'Archangelo, Amelio M., op. cit., Chap. 2, "General Arrangement", Robert J. Tapscott, Section 1.
Fig. 1 A spiral is traditionally used to describe the iterative process of ship design
complete. Traditionally, a design spiral,³ as illustrated by Fig. 1, was used to describe the iterative nature of ship design. While the spiral of Fig. 1 is useful to describe the naval architectural activities of each phase, it ignores the attention that must be paid to other shipbuilding functions. These include the producibility considerations acknowledged above, plus requirements for collaboration among potentially several organizations, equipment selection, production planning, and life cycle considerations such as integration of the human into the design, trade study analyses of reliability, maintainability, and availability, training requirements, and eventual disposal of the ship. If these considerations are left too late, the design will likely suffer rework and increased cost. Figure 2 includes those activities, with the products of each phase of design.
³ http://www.marinewiki.org/images/c/ce/Ship_design_spiral.jpg, and D'Archangelo and Dillon, op. cit.
Digital Twins in Design
Elements of Digital Twins are used in many phases of design today and must be leveraged to a greater extent in coming years to further enable collaboration, enhance design and manufacturing cost and schedule performance, and improve ship performance. The complexity of modern warships, for example, demands the
collaborating expertise of many organizations, often geographically dispersed, as well as the oversight and approval of the customer (and potentially a classification society), which can be facilitated through use of the Digital Twin. Section 7 provides detail on Digital Twin use and advantages.
The Basic Design phases described in Fig. 2 make use of analytical and arrangement model twins developed in parallel to establish the functional and logical parameters of the design. Transition Design, described in the Figure, defines the design and manufacturing products as a subset of the entire ship system, enabling several zones to progress simultaneously, eventually to be built as blocks, modules, or units, depending upon the nomenclature of the builder. An example design zone/erection unit subdivision is shown in Fig. 3 for the FFG-7, the lead ship of the OLIVER HAZARD PERRY class, built by Bath Iron Works. While several decades old, the concept of subdividing a ship in this way is common practice for international commercial and warship designs.
The Functional Design phase noted in Fig. 2 includes formal presentation of the design to the customer or classification society for approval. Today, depending on the ship design program, submittals may be accomplished as facsimiles of paper products, or digital models, or a combination of the two. Soon, submission of the Digital Twins of subsystem designs and arrangements will enable the customer to review and verify that requirements are met and eliminate the costly submittal process and production of unnecessary products.
The Detail Design phase produces a complete three-dimensional CAD model of each design zone, which forms the basis for the product life cycle model Digital Twin. Digital Twins support separating the ship design into detailed design zones, while simultaneously maintaining whole ship system integrity, and enable further subdivision of zones to produce assembly drawings and fabrication data. As Digital Twin usage is increasingly adopted in the industry, building models will be expanded and converted into maintenance and ship operations models to support ship life cycle Digital Twins.
Ship Construction
As in the design phases, subdivision of the ship into units is especially conducive to Digital Twin application since a twin of each unit enables several units to progress in parallel in construction. Starting with the smallest elements of the design, pieces are fabricated, joined, and incorporated into units as in Figs. 4 and 10. The process may be described by a Manufacturing Assembly Plan (MAP), which shows the integration of the various materials such as steel plates and shapes, pipe and piping system components, mission system equipment items such as radars and weapons, and machinery items.
Testing and Trials
The construction period includes rigorous testing activities, especially in the case of military ships, culminating in sea trials and eventual acceptance of the ship by the customer. The concept of "build a little, test a little" employs a methodology of discrete testing of equipment items and fabricated pieces to ensure that performance of the complete system will meet requirements. Integration of the entire ship,
Design Phase / Sub-Phases / Activity Description / Major Products
Requirements Definition. Activity: research and studies, economic or military effectiveness. Major products: top level requirements documents.
Basic Design, Concept Design (Concept Formulation or Conceptual Design). Activity: development of specifications; determination of main dimensions; engineering analyses to confirm that system performance meets requirements; collaboration among companies of differing expertise, especially for warships.
Basic Design, Preliminary Design. Activity: confirmation of the concept; model testing to confirm the powering requirement and hydrodynamic performance; analyses to confirm mission and support subsystem performance; producibility studies and considerations for the design.
Major products of the Basic Design sub-phases depend on customer requirements, but with each sub-phase come iterations of: specification; hull form and capacity plans; performance analyses (speed and powering, stability, seakeeping, maneuvering); arrangement drawings or CAD models of selected key compartments; structural analyses and drawings; piping and electrical system analyses and diagrams.
Basic Design, Contract Design. Activity: adequate design and material definition, including interaction with the supply chain, to permit a designer and builder to confidently price the remaining design effort and construction of the ship. Major products: automation system reports and diagrams; Purchase Technical Specifications; quotes from suppliers; Human Systems Integration (HSI) reports; Reliability, Maintainability, Availability (RMA) analyses; and, in the case of military ships, signature and vulnerability studies, weapons and sensors arrangement and effectiveness studies, an Erection Unit Break Plan, and Manufacturing Assembly Plans.
Functional Design. Activity: focus on subsystem and entire ship system development; engineering analysis to confirm performance and material definition to support purchase orders; data for purchased equipment obtained and incorporated into documentation; calculations and modeling to further develop plans for building the ship; presentation of the design for approvals. Major products: drawings for customer and classification society review and approval; training documentation.
Transition Design. Activity: some shipyards insert a phase to transition from integrated ship system to shipyard discrete product design, wherein the ship system is subdivided into design zones and blocks, units, or modules, which are the actual products of the design and manufacturing departments. Major products: priority routing space reservations and diagrams for distributive systems; design zone definitions.
Detail or Physical Design. Activity: focus on zones and erection unit development; arrangement of equipment, distributive systems, and furniture in each zone and compartment of the ship. Major products: 3D CAD model of the entire ship, zone by zone; installation and assembly drawings extracted from models; Bills of Material (engineering and production); detailed plans.
Production Design. Activity: development of data, documentation, and work instructions for the various shipyard shops and mechanics to build the ship. Major products: fabrication and assembly data, sketches and/or digital.
Fig. 2 Several iterations are required to complete a ship design
Fig. 3 Ship design products: zones for design and erection units in construction. (Stark, Capt. Robert E., and Capt. David M. Stembel, Detail Design – FFG-7 Class, Naval Engineers Journal, April 1981. Reproduced with permission from the American Society of Naval Engineers)
Fig. 4 Ships are constructed in units or modules. (US Maritime Administration, Avondale Shipyards, and Naval Surface Warfare Center CD, Code 2230 – Design Integration Tools, Building 192 Room 128, 9500 MacArthur Blvd., Bethesda, MD 20817–5700. "Manufacturing Technology for Shipbuilding", pg. 141, 1983; and construction units for USS DANIEL INOUYE courtesy General Dynamics Bath Iron Works)
usually on a building way in the shipyard and later in the water, enables focus to shift back again to entire subsystems and eventually the completed ship itself.
3 Commercial and Naval Shipbuilding
The purpose of this section is to provide a brief comparison of commercial and naval or military shipbuilding; the comparison is included as Fig. 5 below.
Life Cycle Phase / Commercial Industry / Military Industry
Program Management. Commercial: by the shipowner. Military: by a program office within the navy or coast guard; in the case of the US Navy, a Program Management Supervisor (PMS) is established as part of the Naval Sea Systems Command (NAVSEA).
Requirements Definition. Commercial: by the shipowner, often with the assistance of a naval architecture firm. Military: top level requirements by the navy, with build specification development often left to the shipbuilder.
Concept Formulation. Commercial: by a naval architecture firm contracted by the owner, or by the shipyard working directly with the owner. Military: typically by the Navy, with or without the assistance of a design agent naval architecture firm or shipyard; in the case of the US Navy DDG 51 Class, the assistance of several potential builders was engaged.
Preliminary Design, Contract Design, Bid and Proposal: by several competing shipyards; proposals usually funded by the prospective builder.
Functional Design. Commercial: by a private shipyard or design firm. Military: often in two parts, preliminary functional design as part of the bid and proposal, finalized as part of the ship design contract.
Approvals. Commercial: classification society such as ABS. Military: by the Navy Technical Authority or Supervisor of Shipbuilding.
Detail Design: by a private shipyard.
Production Design: by a private shipyard.
Construction. Commercial: by a private shipyard. Military: internationally, builders of government ships range from private companies through varying degrees of government ownership; for the US Navy, many ships were built by public shipyards until the 1960s, and today all US Navy ships are built by private yards.
Trials: by the building yard.
Maintenance. Commercial: contracted by the owner to private shipyards. Military: in the US, by the Navy, but through a different office and using different funding than that for construction; maintenance activities are often conducted by US Navy shipyards (see discussion in Section 11).
Upgrades. Commercial: contracted by the owner to private shipyards. Military: contracted by the USN to "Planning Yards", which engineer, plan, and often package materials for the upgrade; physical work is accomplished by a shipyard or industrial facility selected through competitive bidding.
Sale. Commercial: by owner. Military: by Navy, but a different office than the operators.
Disposal. Commercial: by owner. Military: by Navy.
Fig. 5 Commercial and military shipbuilding and operation exhibit some significant differences
4 The Naval Ship Enterprise
The Naval Ship Enterprise comprises all the processes described in Sect. 2, design through disposal, as well as the organizations and companies that execute them. Within the Enterprise are shipowners, shipbuilders and designers, and thousands of suppliers tasked with manufacturing everything from the largest equipment items, such as propulsion engines, to the smallest components of electronic items. The phases of a ship life
Domain / Activity Description and Purpose / Stakeholders
Concept Formulation. Purpose: determine the operational needs and the most effective solutions to those needs. Concept Formulation is the initial activity in the Ship Enterprise lifecycle. The operational need is defined in sufficient detail to permit the analysis of solution concepts and development of solution designs sufficient for contractual agreements for ship design and construction. Stakeholders: shipowner, or Navy in the case of military ships.
Shipbuilding. Purpose: turn the solution concept into instructions for shipyard mechanics or workers and production machinery that, when applied to structural steel, pipe, sheet metal, and purchased equipment, render them into an assembled ship. It is noted that ship design is merged with the Shipbuilding domain in this Digital Twin application analysis, and in the ensuing value plots. Although ship design is a separate activity conducted independently from construction, the industry has seen a convergence of the two to ensure a more seamless transfer of knowledge from the office to the deck plates. The Shipbuilding domain includes a broad set of activities, including purchasing components and equipment, and planning and executing fabrication, assembly, and testing of the ship. Overlaid on design and build activities is creation and operation of the ship supply chain and operation of the shipbuilding facility. Stakeholders: primary responsibility for this domain lies with the shipbuilder, but an entire supply chain of many organizations are secondary stakeholders. A design firm or "agent" and shipyard design department typically complete the design. Emerging supply chain members are important secondary stakeholders.
Operations. Purpose: complete assigned missions or carry cargo. There is an application of Development here, but it is relatively light and sporadic. Experience has shown that the "as-used" configuration of the ship can be substantially different from the "as-built", indicating a fair amount of design and modification activity by the ship's crew, and demanding that the Development model be maintained to reflect modifications to continue usefulness. Stakeholders: the shipowner has sole responsibility for this domain, which is the longest-lived phase of the ship's life cycle.
Maintenance. Purpose: conduct periodic upgrades and modifications to the ship to ensure she remains a viable asset throughout the period of performance. The Maintenance domain occurs in parallel with the Ship Operations domain, but significant Design use case activity is limited to periodic "maintenance availabilities". In preparation for significant ship alterations and modifications, private or Navy engineering agents (Planning Yards) prepare designs; removal, alteration, and installation plans; and supply chain deliveries for execution by maintenance facilities. Stakeholders: primary responsibility for this domain is shared between the Planning Yard Engineering Agent (public or private) and maintaining shipyards (public or private).
Disposal. Purpose: retire the ship asset by the methods and for the purposes described in Section 7. Stakeholders: responsibility lies with the owner, with participation from shipbreakers and others.
Fig. 6 The Naval Ship Enterprise domains and stakeholders
The phases of a ship life cycle are the Enterprise Domains that define the context and environment of the work to be performed. The primary stakeholders for that work are described in Fig. 6. Improving coordination and collaboration among all these players is of utmost importance to enhancing the efficiency of the Enterprise. Digital Twin deployment in several industries is widely described in the literature as contributing to that improvement, and it is the subject of the remainder of this chapter for the Naval Ship Enterprise. As described in Sect. 2, Digital Twins are deployed in many domains today, but often in a discrete, siloed fashion. The Naval Ship Enterprise itself is not a coherent unit. Rather, it is composed of a broad, diverse series of cooperative, shifting, often competitive commercial and governmental stakeholders loosely bound through overlapping contracts and inter-dependent information.
5 Naval Shipbuilding – Between a Production Line and a Custom Fabrication and Assembly Shop
Of the several chapters in this work, those dealing with Digital Twins in Manufacturing and Construction are closest to Shipbuilding. For this section, Manufacturing implies a production line of actions or tasks repeated in quick succession, typical of the automobile or aircraft industries, and Construction a relatively long evolution to produce an individual building or civil project. Since naval shipbuilding comprises both of those industry types, one of the challenges of deploying a Digital Twin to enhance the performance of the shipbuilding industry is to correctly characterize the industry on the spectrum between those two. In the commercial shipbuilding industry, it is not uncommon for a shipowner to order a single ship, whereas in the naval market it is more typical for a government entity, a navy or coast guard, to contract for a series of sister ships, a class. And where a production line operation would create several identical models, in the world of naval shipbuilding the demand to deliver each ship with the most current technology causes the navies of the world to upgrade equipment or subsystems during the design and construction cycle, with the result that each ship delivered is unique. The very long production timeline required to build a ship, 4 to 5 years for a destroyer and 10 to 12 for an aircraft carrier, only compounds the challenge of keeping the ship design up to date with current technology. With the very rapid pace of technological advances in electronic systems, warships can easily become obsolete between the start of fabrication and the delivery date. The US Navy's DDG 51 Program has introduced a Post-Delivery Availability (PDA) period immediately after the ship is delivered specifically to update various systems to contemporary standards. Upgradeability as an early design consideration is a current priority for the US Navy, as witnessed in several current programs. The complication of dealing with design change is therefore a common consideration for naval shipbuilders, and the challenge of configuration management throughout the design and construction process can be addressed by deploying a Digital Twin as discussed in the following sections. A Digital Twin can also facilitate planning and executing an availability of a naval ship by providing a baseline model in which to test equipment and subsystem removals and replacements.
6 Types of Digital Twins
As noted elsewhere, there is no single Digital Twin for a ship or class of ships. The nature and composition of a Digital Twin changes through the Ship Enterprise depending on when it is used (Enterprise Domain) and for what purpose (Use Case).
Enterprise Domains are phases of the ship life cycle that define the context and environment of the work to be performed, and the primary stakeholders, as listed in Sect. 4. Use Cases are the categories or types of work to be performed. Use Cases are organized into common classes to manage the large and rapidly expanding variety of individual cases. For example, the individual Use Cases "engineering", "design", and "planning" all fit into the "Development" Use Case class. The composition of a specific Digital Twin is determined by the model types employed for a Use Case in a specific Domain.
Use Case Classes
Individual Use Cases for Digital Twins are numerous and constantly growing. For the sake of organizing the discussion, individual Use Cases are grouped into ten general Use Case classes with common intent and data requirements:
1. Development Class: Analysis of trade space alternatives and creation of conceptual, functional, design and operational solutions. Engineering, design, and planning are the most common activities. In this Use Case, analytical and 3D models are most useful.
2. Verification Class: Examination and confirmation that the product or components thereof conform to defined requirements.
3. Work Instruction Class: Direction and guidance for task execution. Product fabrication, assembly and maintenance are the most common activities.
4. Training Class: Workforce instruction to accelerate learning, improve task execution, improve operations, and ensure safe work practices. In this, and in item 7, simulation models are most useful.
5. Supervision Class: The planning and assurance of execution of workflows. Supervision of manufacturing and maintenance workflows are most common.
6. Operations Control Class: Management of environmental and system state data, typically for industrial shop floor and on-board operations. Typically used in shipbuilding and maintenance.
7. Operations Analysis Class: Prediction, visualization and analysis of potential ship states, performance, and operating environments. Typically used in simulating ship operations for training ship drivers and operators and forecasting ship performance under changing conditions.
8. Safety Assurance Class: Enhancement of situational awareness of the workforce in industrial and operating environments.
9. Supply and Logistics Class: Identification, sourcing and tracking of manufacturing and maintenance material and resources.
10. Collaboration Class: Enabling dispersed teams to collaborate interactively. This class includes the rapidly expanding Remote Expert subclass that enables ad hoc teamwork to provide remote expert assistance to front line workers, for example, support provided to shipyard mechanics by engineering and design staff.
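To make the grouping concrete, the short sketch below shows one way a use case catalog could tag individual use cases with their class. The class names follow the list above; the catalog entries and helper function are illustrative assumptions, not part of any shipyard system.

```python
from enum import Enum

class UseCaseClass(Enum):
    DEVELOPMENT = 1
    VERIFICATION = 2
    WORK_INSTRUCTION = 3
    TRAINING = 4
    SUPERVISION = 5
    OPERATIONS_CONTROL = 6
    OPERATIONS_ANALYSIS = 7
    SAFETY_ASSURANCE = 8
    SUPPLY_AND_LOGISTICS = 9
    COLLABORATION = 10

# Hypothetical catalog: individual use cases tagged with their class,
# mirroring the example in the text ("engineering", "design", and
# "planning" all fall under Development).
USE_CASE_CATALOG = {
    "engineering": UseCaseClass.DEVELOPMENT,
    "design": UseCaseClass.DEVELOPMENT,
    "planning": UseCaseClass.DEVELOPMENT,
    "weld inspection": UseCaseClass.VERIFICATION,
    "crew familiarization": UseCaseClass.TRAINING,
}

def use_cases_in(cls: UseCaseClass) -> list[str]:
    """Return the individual use cases catalogued under a given class."""
    return [name for name, c in USE_CASE_CATALOG.items() if c is cls]

print(use_cases_in(UseCaseClass.DEVELOPMENT))  # ['engineering', 'design', 'planning']
```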
Fig. 7 Example of Requirements Management Model (NI). (National Instruments Corporation. "NI Requirements Gateway for Test, Measurement, and Control Applications." Accessed April 7, 2021. https://www.ni.com/en-us/innovations/white-papers/06/ni-requirements-gateway-for-test%2D%2Dmeasurement%2D%2Dand-control-appli.html)
Model Types
The specific composition of Digital Twins is unique to the intersection of Enterprise Domain and Use Case class. A supporting Digital Twin generally consists of a combination of eight types of data sets, or models.
Textual Models are documents and requirements tracing databases. The documents typically consist of unstructured text that describes attributes of the product, most commonly requirements, as shown in Fig. 7. Textual models are distinct from structured text such as attributes in other models.
Analytical Models are primarily mathematical representations used for functional analyses and specification development. They are often physics-based models with graphical displays, as shown in Fig. 8 for a Finite Element Analysis of a structural member.
Spatial Models are 2D or 3D Computer Aided Design (CAD) models used for development and depiction of physical properties of the ship, Fig. 9.
Visualization Models are like spatial models, with a specific focus on realistic depiction of the ship and ship components (renderings, for example). Often found in CAD models, these representations feature textured surfaces and realistic lighting capabilities. Visualization models are most often derived from point clouds when 3D scanning technologies are used to record as-built/as-used ship configurations, as shown in Fig. 10.
Fig. 8 Example of Finite Element Analytical Model showing displacement stresses
Fig. 9 Example of ship model in ShipConstructor CAD (SSI). (SSI. "Release Information – ShipConstructor 2017." Accessed September 22, 2021. https://www.ssi-corporate.com/engage/release-information-shipconstructor-2017/)
Dynamic Models are simulations used for analysis of system forces, capacities, and states over time. Dynamic models can be spatial, analytical, or both, as shown in Fig. 11.
Kinematic Models are a subset of Dynamic Models and are 4D spatial simulations used for analysis of system physical motions in time, as shown in the Fig. 12 screenshot.
Fig. 10 Example of Point Cloud (top) to Visualization Mesh Model (above) Conversion (Kohera3D). (Kohera3D. "Point Cloud Meshing." Accessed September 22, 2021. https://www.kohera3d.com/point-cloud-meshing/)
Process Analysis Models are 2D or 3D spatial, mathematical, and analytical models used for development of product assembly logic, facility flow, and maintenance sequencing, Fig. 13.
Operational Analysis Models are 2D or 3D spatial, mathematical, and analytical models used to forecast and analyze ship operational capabilities in different configurations and environments. An example of an operational analysis model graphical output is shown in Fig. 14.
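Purely as an illustration of the composition idea above, a twin instance can be recorded as the set of model types selected for one Use Case class in one Enterprise Domain. The model type names follow the chapter; the data structure and the particular mix of models are assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class ModelType(Enum):
    TEXTUAL = auto()
    ANALYTICAL = auto()
    SPATIAL = auto()
    VISUALIZATION = auto()
    DYNAMIC = auto()
    KINEMATIC = auto()
    PROCESS_ANALYSIS = auto()
    OPERATIONAL_ANALYSIS = auto()

@dataclass
class DigitalTwinComposition:
    """The model types that make up a twin for one Domain / Use Case pairing."""
    domain: str                    # Enterprise Domain, e.g. "Concept Formulation"
    use_case_class: str            # e.g. "Development"
    models: set[ModelType] = field(default_factory=set)

# Hypothetical example: a Development twin in the Concept Formulation domain
# built mainly from analytical and spatial (3D) models, as the text suggests.
cf_development_twin = DigitalTwinComposition(
    domain="Concept Formulation",
    use_case_class="Development",
    models={ModelType.ANALYTICAL, ModelType.SPATIAL},
)
print(sorted(m.name for m in cf_development_twin.models))
```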
Fig. 11 Example of Dynamic Model of a three cylinder reciprocating engine (COMSOL). (Comsol. “Analyze Rigid- and Flexible-Body Assemblies with the Multibody Dynamics Module.” https://www.comsol.com/multibody-dynamics-module)
Fig. 12 Example of Kinematic Model (created with MegaCAD by Damitz Modellbau). (MegaCAD. "MegaCAD Kinematik." Accessed April 7, 2021. https://www.megacad.de/add-ons/megacad-kinematik/)
Fig. 13 Example of Product Hierarchy Process Analysis Model (SSI). (SSI. "Benefits of a Digitally Captured Build Sequence." Accessed September 22, 2021. https://www.ssi-corporate.com/blog-waveform/benefits-of-a-digitally-captured-build-sequence/)
Fig. 14 Example of Supply Chain Operational Analysis Model (SCM Globe). (SCM Globe. "Lessons Learned From Wargaming." Accessed April 7, 2021. https://www.scmglobe.com/lessons-learned-from-wargaming/)
Digital Twin Application Analysis
This section offers a tool for corporate decision makers to assess the value of implementing Digital Twin technology for a particular ship design, building, and/or operation application. Consideration of how Digital Twins are used to support each use case in specific domains gives insight into the composition of the Digital Twin and the value that it brings to the domain and the entire enterprise.
Fig. 15 Digital Twin use case application table
Figure 15, the Digital Twin Use Case Application Table, displays applicable Digital Twin use cases for each domain. Each green-shaded cell represents a Use Case Class applicable in the corresponding Enterprise Domain. The number in each green-shaded cell references detailed descriptions in Attachment (A). For example, Attachment (A) shows the following for 1.1, the Development Use Case in the Concept Formulation Domain. Descriptions of the:
• Concept Formulation (CF) Domain
• Commonly used Digital Twin Model Types in the CF Domain
• Value of Digital Twins to the CF Domain
• Value of Digital Twins to the Ship Enterprise
Thus, referencing Attachment (A) will clarify the considerations necessary for assessing Digital Twin utility in each use case and domain. It is noted that the Ship Design domain has been merged with the Shipbuilding domain in this Digital Twin application analysis and in the ensuing value plots. Although ship design is sometimes a separate activity conducted independently from construction, the industry has seen a convergence of the two to ensure a more seamless transfer of knowledge from the office to the deck plates. Attachment (A) reflects the value judgments of the authors and is offered as an example tool that decision makers and advisers can use to develop their own assessments. Based on Attachment (A), Figs. 16 and 17 below and the Figures in Sect. 12 were developed to graphically show the value assigned to each use case application. Value ranking was assigned by the authors using a scale of 0 to 4 representing increasing value.
Fig. 16 Digital Twin use case application value plot within domain
Fig. 17 Digital Twin use case application value plot across enterprise
The values 0 to 4 correspond to assessments of Not Applicable, Low Value, Medium Value, High Value, and Critical. Figure 16, Digital Twin Use Case Application Value Plot Within Domain, displays the value of the Digital Twin to the use case for a specific domain. Figure 17, Digital Twin Use Case Application Value Plot Across Enterprise, displays the same information but for value to the entire enterprise. In both plots, use case applications determined to be of highest value, or "critical", are highlighted in green. The assessments differ because a Digital Twin can be, and often is, of more value to a specific domain and its stakeholders than to all stakeholders across the enterprise. In a minority of cases the opposite is true. For example, in the Concept Formulation Domain, Digital Twin models focused on Development are more valuable (Fig. 16, green stack in lower left) than those focused on Collaboration (Fig. 16, blue stack in lower right), whereas when considering the entire Enterprise the opposite is true (Fig. 17): Collaboration and Operations Analysis are more valuable to the entire Enterprise than Development. The composition and value of Digital Twins vary greatly depending on how they are used (Use Case) and when they are used (Enterprise Domain). An in-depth understanding of each twin needed throughout the ship lifecycle is essential to a successful Digital Twin development and implementation strategy. The composition of the twin indicates the difficulty of, and investment required for, creating the Digital Twin, while the value is an indicator of the benefit to the specific domain and to the enterprise in total.
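To show how such a value map might be handled in practice, the sketch below keeps the rankings in a simple matrix keyed on domain and use case class and reports its "critical" cells. The domain and class names follow the chapter, but the numeric scores are placeholders, not the authors' Attachment (A) rankings.

```python
# Illustrative value map on the chapter's 0-4 scale
# (0 = Not Applicable ... 4 = Critical). Scores are placeholders.
domain_value = {
    ("Concept Formulation", "Development"): 4,
    ("Concept Formulation", "Collaboration"): 2,
    ("Shipbuilding", "Work Instruction"): 4,
    ("Operations", "Training"): 3,
    ("Maintenance", "Supply and Logistics"): 4,
}

def critical_cells(values: dict[tuple[str, str], int]) -> list[tuple[str, str]]:
    """Return the (domain, use case class) pairs ranked Critical (4)."""
    return [cell for cell, score in values.items() if score == 4]

print(critical_cells(domain_value))
```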
7 Uses, Advantages and Goals of Using Digital Twins
There are significant advantages to using a Digital Twin throughout the various phases of the Ship Enterprise. Some are used today while others are developing or are visions of what could be implemented in the coming years. The previous section described the various use cases required to design, build, operate, and maintain a ship. This section and Sect. 12 present examples of Digital Ship Twin use in many of those Use Cases, how they currently help, and how they could be put to advantage in the future.
Digital Twins in the Development, Verification, and Collaboration Use Case Classes
These Use Case Classes, described in Sect. 6, comprise most of the engineering analyses and simulations involved in ship design. The following are examples of how Digital Twins are and could be used. The Digital Twin allows the shipyard team to collaborate across the "silos" with models that are kept current and integrated, so that the collaboration is done in real time. In the past, collaboration occurred in a serial, iterative process. The table below offers a concise summary comparison of a ship/shipyard development
process without and with a Digital Twin. This table is loosely based on an article written by Conrad Leiva in 2016.4
Process without a digital twin
1. Shipyard Engineering creates drawings and a 3D model and reviews this model and drawings with manufacturing for how the ship will be fabricated and assembled. The drawings and model are then updated after this review.
2. Shipyard Engineering creates 2D drawings and parts lists from 3D models and provides them to the Quality and Procurement departments.
3. Using the 2D drawings and requirements, testing and validation procedures are created, which are provided to Production for use.
4. Shipyard Production turns over the completed product unit to the customer without updating digital data on the specific unit to document the as-built configuration. Sustainment data is developed after delivery by a separate entity: the designated Planning Yard.
5. Shipyard Production updates the as-built records, such as drawings, for internal auditing purposes or to meet requirements of the contract.
Proposed process with a digital twin
1. The shipyard design department collaborates with the manufacturing department to create a 3D DT model linked to visuals for production process instructions that are integrated to meet fabrication and assembly needs.
2. Product design characteristics are linked to 3D DT models and extracted directly into conformance requirements for Quality and for Purchasing.
3. Testing and validation requirements are integrated within the manufacturing process and inspections. These instructions are developed concurrently and collaboratively by Engineering/Design, Manufacturing, and Quality.
4. An updated Digital Ship Twin (DST) is delivered along with the physical ship to the customer. This DST allows the customer to continue evolving and updating the ship data during the life of the ship to reflect operation and maintenance changes and modifications.
5. Product design changes follow the same data flow and automatically update downstream models, references, and instructions.
Hull Form Optimization One of the earliest tasks in the design of a ship is the development of the hull form. Often a trade study is employed to evaluate the benefits of configurations and variations in form. Changes to a specific area of the hull and/or determinations of hull arrangements such as mono or multi-hulls, or the arrangement of appendages, have been analyzed using Digital Twins and employing neural networks and genetic algorithms in which the "Twin" determines hull shapes offering optimal performance such as minimal resistance and powering requirements.
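As a rough illustration of the optimization loop described above, the hypothetical sketch below runs a genetic algorithm over a toy hull parameterization. The resistance function is a stand-in for the physics-based or learned model a real twin would evaluate, and the parameter ranges are invented.

```python
import random

def resistance(lb_ratio: float, cb: float) -> float:
    """Toy stand-in for a resistance/powering prediction model."""
    return (lb_ratio - 8.5) ** 2 + 40 * (cb - 0.52) ** 2

def random_hull():
    # Hypothetical parameters: length/beam ratio and block coefficient.
    return (random.uniform(6.0, 11.0), random.uniform(0.45, 0.65))

def mutate(hull, scale=0.1):
    lb, cb = hull
    return (lb + random.gauss(0, scale * 5), cb + random.gauss(0, scale * 0.05))

population = [random_hull() for _ in range(40)]
for _ in range(50):
    population.sort(key=lambda h: resistance(*h))   # rank by predicted resistance
    parents = population[:10]                       # keep the fittest hulls
    children = [mutate(random.choice(parents)) for _ in range(30)]
    population = parents + children

best = min(population, key=lambda h: resistance(*h))
print(f"best L/B={best[0]:.2f}, Cb={best[1]:.3f}, resistance={resistance(*best):.3f}")
```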
4. Leiva, Conrad. "Demystifying the Digital Thread and Digital Twin Concepts". Industry Week, August 2016.
Resistance and Powering – The 'Numerical Towing Tank' The traditional methods for determining the propulsion power and propeller characteristics are to build physical models for testing in a model basin. Analysis capability has progressed significantly in recent years, to the point that this expensive process can nearly be replaced by analyzing a "Twin".
Seakeeping and Maneuvering The behavior of the ship in a seaway or during steering maneuvers can be evaluated during the design phase, with sea conditions loaded as inputs. The design can be modified as needed to meet requirements. The performance of ship equipment can be analyzed to show that the equipment will work properly during high sea states.
Structural Loading Loading effects on ship structure can be checked both globally and locally. Small structural details can have significant impact, and minor changes to the details can improve the tolerance of the structure to repeated and frequent loads at sea.
Piping and Ventilation Subsystem Behavior/Performance Mechanical and piping system flows can be validated using pipe system models. Noise predictions can be made, and the arrangement of equipment modified to meet the requirements. Heating and ventilation system flows can be modeled and analyzed to improve performance and reduce noise during operation.
Interactions between vessels in a seaway present a significant challenge to several operations, which can be analyzed using "Twins". Replenishment or cargo transfer at sea evolutions, either by high line or ramp between ships, can be reviewed both analytically and visually. Retrieval of a small boat with a ship's crane or davit can be analyzed and verified against requirements. In the offshore oil industry, the motions of an offshore supply boat crane relative to a rig are typically analyzed using a Twin.
Arrangement Design Confirming that adequate volume is provided in a ship is critical to the design. Prior to the introduction of 3D computer-aided design, equipment and systems were arranged and reviewed using light tables and overlaid transparent manual drawings. Full and smaller scale mock-ups were used in critical areas to prove that arrangements met requirements with no interferences, but these could be inconclusive. Though a 1/10 scale model was made of the DDG 51 helicopter hangar door, when built full scale there were still binding and interference issues. 3D CAD modeling provides a means to arrange equipment and systems within the confines of the hull and deckhouses to confirm that all will fit, can be built, and will function in service to meet requirements. Digital Twins of discrete portions of the ship design are used to analyze and confirm subsystem performance. Examples are the construction of a model of a dropping anchor to confirm clearance
of the sonar dome in various conditions of trim and heel on a warship, or the model of a pilothouse to confirm proper human engineering considerations. Where these were physical models in the mid-1980s, Digital Twins now provide flexible, shareable alternatives. Today, three-dimensional models are reviewed to avoid system interferences, and arrangements are visualized and reviewed with the customer to verify space functionality.
Ship Drawing Development CAD tools have evolved such that the goals of interference checking, drawing development, and work instruction creation are integrated within the software.
Identification of Material Requirements Initially the model was used to generate consistent, accurate drawings, but over time models came to include attributes, such that the model became a true product-model Digital Twin with intelligence, including material definitions that allow material ordering.
Digital Twins as an Educational Tool across Generations The opportunity to be involved in the design of a new class of naval ships comes along only very infrequently in a career as a naval architect or marine engineer working at a shipyard. It is not unusual for one to spend an entire career on one or two classes of naval ship designs. Thus, a single career is often not long enough to apply the knowledge gained from one generation to the next. Digital Twins offer a method to accomplish or facilitate that end.
Digital Twins in the Work Instruction and Collaboration Use Case Classes
The goal of the Work Instruction and Collaboration Use Cases is to produce a data set usable by shipyard mechanics to build the ship. To that end, design engineers collaborate with manufacturing engineers to create a 3D model linked to visuals for production process instructions.
Digital Fabrication Digital information in the 3D CAD model is provided to fabrication shops to improve the accuracy of fabrication using the actual model.
Material Definition Material requirements are developed as a Bill of Material extracted from the model. The Engineering/Design Bill of Material transitions to a PBOM (Production Bill of Material) by the addition of schedule data and is updated in real time as the Design/Engineering evolves and as the result of reviews with the planning and manufacturing departments. The Engineering Bill of Material is used to purchase advance major material and is then provided for planning to use for the Production Bill of Material. The Production Bill of Material is used to purchase the material and to plan how the ship will be put together. All planning can be done with the DST, and the plan can be developed concurrently and documented within the DST as the ship is built.
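A minimal sketch of the EBOM-to-PBOM transition described above follows; the field names and schedule attributes are illustrative assumptions, not a shipyard schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EBOMItem:
    part_no: str
    description: str
    quantity: int

@dataclass
class PBOMItem(EBOMItem):
    need_date: date        # schedule data added when the EBOM line becomes PBOM
    build_block: str       # erection block that consumes the part

def to_pbom(item: EBOMItem, need_date: date, build_block: str) -> PBOMItem:
    """Augment an engineering BOM line with planning/schedule attributes."""
    return PBOMItem(item.part_no, item.description, item.quantity,
                    need_date=need_date, build_block=build_block)

# Hypothetical usage: an engineering line item scheduled into block B-12.
valve = EBOMItem("V-1042", "Globe valve, 2 in., bronze", 4)
print(to_pbom(valve, need_date=date(2026, 3, 15), build_block="B-12"))
```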
Modeling Workflow Through the Shipyard Using the piece parts and block subdivision for the ship described in Sect. 2 and defined in the 3D CAD model, the Digital Ship Twin can be integrated with a Digital Twin of the shipbuilding facility to model the flow of construction through the yard. The size and complexity of modern shipbuilding projects demand that the work be planned and scheduled in detail prior to starting construction. Such plans may be modeled using various computer tools, another Digital Twin application, to ensure efficient utilization of shipyard workforce and facilities and eliminate costly rework. Digital Twins allow planning to be completed concurrently with design development. The multitude of parts and the magnitude of material required to build a ship, illustrated by Fig. 18, must be precisely scheduled and moved through the supply chain and the shipyard. Digital Twins of the process, using software to simulate material flow and sequence piece-part installation, are increasingly critical to efficient construction operations (a simplified flow simulation is sketched after Fig. 18).
Digital Twins in the Shipyard Supervision and Operations Control Use Case Classes
The Supervision, Operations Control, and Supply and Logistics Use Cases include the planning and assurance of execution of workflows, and management of system state data for shop floor operations.
Numerical Control The 3D CAD model provides data for part fabrication and numerical control of burning machines, shape nesters, and pipe cut and bending machines, as well as photogrammetry and LIDAR data for landing deckhouses and joining units.
Fig. 18 Digital Twins enable managing the millions of parts that comprise a ship. (Debbink, Mark and Carla Coleman. Strategy for an Intelligent Digital Twin, 2019. Used by permission from Newport News Shipbuilding and Drydock Company, 2021)
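The workflow-modeling idea referenced above can be sketched as a small discrete-event simulation; this example assumes the open-source SimPy library, and the station counts and durations are made-up placeholders rather than real shipyard data.

```python
import simpy  # pip install simpy

def block(env, name, panel_line, erection_site):
    with panel_line.request() as req:        # fabricate/assemble the block
        yield req
        yield env.timeout(10)
    with erection_site.request() as req:     # erect the block on the ship
        yield req
        yield env.timeout(4)
    print(f"{env.now:5.1f}  {name} erected")

env = simpy.Environment()
panel_line = simpy.Resource(env, capacity=2)      # two parallel panel lines
erection_site = simpy.Resource(env, capacity=1)   # single erection position
for i in range(6):
    env.process(block(env, f"Block-{i + 1}", panel_line, erection_site))
env.run()
```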
Shipyard Testing and Activation The DST can be used prior to activation to identify potential problems and then, after activation, to troubleshoot problems. Conformance requirements are linked to manufacturing process and inspection instructions. Testing results and processes are documented in the DST as they are completed, so that they can be viewed collaboratively throughout the shipyard. Whole-ship and whole-system function models form the basis of Digital Twins of entire subsystems to enable monitoring of performance.
Digital Twins in the Training, and Supply and Logistics Use Case Classes
The goal here is to deliver the Twin to the customer along with the completed ship, to be available for sustainment services and to continue evolving the unit's data during operation and maintenance services.
Training Digital Twins have been used to train operators for many years. Maritime academies throughout the world employ simulators to train prospective crews on how their ships will behave in operation. An example is shown in Fig. 19. Prior to activation and ship operation, the ship operators can use the DST to train crews and prepare for sea trials and testing of the ship. This is particularly useful for first-in-class trials and demonstrations of new vessels.
Maintenance In addition, Twins are increasingly employed as maintenance predictors. As described in a recent paper, the US Navy is researching use of Digital Twins and scanning technologies to predict requirements for maintenance of ship systems. The goal for the Digital Ship Twin during the ship life cycle is for product design changes to follow the same data flow and automatically update downstream models, references, and instructions. The US Navy is using Digital Ship Twins to prepare for repairs and maintenance when the ship returns from sea. While at sea, the crew can scan with drones or use onboard photogrammetry to create models of an area to be repaired. This allows the port engineers to prepare for the arrival of the ship, allows the shipowner to manage the maintenance of the Digital Ship Twin, and allows failures of individual components tracked in the DST to be predicted, reducing downtime of the entire ship due to sudden failure of a critical component. It also creates a "single source of truth" on what needs to be repaired.5
Fig. 19 Simulators have been used for training since the 1970s. (http://today.tamu.edu/2016/04/07/realistic-ship-simulator-trains-sea-cadets/). (Photo used by permission of Texas A&M Maritime Academy)
Digital Twins are enabling the world-renowned commercial ship operator Wilhelmsen to move from Planned Maintenance Schemes (PMS), which require the expense of stocks of spare parts, to a Condition Monitoring System (CMS) that monitors vibration of the drivetrain aboard its Wallenius Wilhelmsen car carriers.6
Disposal Disposal of an asset is the final consideration in a system engineering cycle. Ship disposal methods are well-documented and include breaking to recycle materials, target practice to test and improve weapons performance, or scuttling to form artificial reefs to preserve wildlife or control erosion. Use of Digital Twins to analyze the behavior of a sinking ship is a relatively new application7 and will be increasingly used for forensic analysis8 and archaeological understanding.
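As a minimal illustration of the condition-monitoring shift described above, drivetrain vibration windows could be screened against a healthy baseline as sketched here; the threshold rule and readings are assumptions for the sketch, not Wilhelmsen's actual system.

```python
def vibration_alert(samples_mm_s, baseline_mm_s, factor=2.0):
    """Flag a window when its RMS vibration level exceeds a multiple of the
    healthy baseline (a simple stand-in for CMS alarm logic)."""
    rms = (sum(v * v for v in samples_mm_s) / len(samples_mm_s)) ** 0.5
    return rms > factor * baseline_mm_s

# Hypothetical one-second window of vibration velocity readings in mm/s.
window = [1.8, 2.1, 1.9, 6.5, 7.2, 6.9, 2.0, 1.7]
if vibration_alert(window, baseline_mm_s=2.0):
    print("CMS: drivetrain vibration above limit - schedule an inspection")
```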
8 Digital Twin Enabling Technologies
As real-time virtual representations of physical entities, Digital Twins rely on a fabric of interwoven enabling technologies and practices, each in an emerging state of development maturity. Elements of the Digital Twin concept have existed for decades, and as more enabling technologies and practices evolve, Digital Twins gain greater fidelity and relevance, and the concept becomes more prevalent. As they rely on underlying technologies and services, Digital Twins symbiotically promote the application of related, enabled, and derivative technologies as well. For example, unlike static digital representations, comprehensive and dynamic Digital Twins enable broader, more relevant applications of Artificial Intelligence and Machine Learning. Ultimately, the emergence of pervasive Digital Twins will form the basis of achieving the predicted benefits of Industry 4.09 including:
• Horizontal Process Consolidation through information-driven reduction in development, construction, and operations timelines across value chains (e.g., Design Spiral integration, Smart Supply Chain).
5. Kanowitz, Stephanie. "How Digital Twins Keep the Navy Ahead on Ship Maintenance". May 2020. https://gcn.com/Articles/2020/05/01/navy-digital-twin-ship-maintenance.aspx?p=1 {https://gcn.com/articles/2020/05/01/navy-digital-twin-ship-maintenance.aspx}.
6. Vessels of the future: How technology is transforming our fleet – Wallenius Wilhelmsen.
7. Kery, Sean. "On the Hydromechanics of Vessels and Debris Fields During Sinking Events", pg. 14, SNAME Annual Meeting, 2015.
8. Garzke, W.H. Jr., Brown, D.K., Sandiford, A.D., Woodward, J., Hsu, P.K., "The Titanic and Lusitania: A Final Forensic Analysis"; Marine Technology, October 1996, Vol. 33, No. 4, Pages 241–289.
9. Industry 4.0: the fourth industrial revolution – guide to Industrie 4.0, https://www.i-scoop.eu/industry-4-0/
• Vertical Process Consolidation through tighter integration of Enterprise Resource Planning (ERP), Product Lifecycle Management (PLM), and Manufacturing Execution System (MES) systems based on common, real-time data (e.g., Remote Expert Collaboration, Autonomous Operations).
• Infrastructure Integration through the merging of Information Technology (Cyber) and Operational Technology (Physical) infrastructures (e.g., Condition Based Monitoring, Smart Energy Grids).
• Real-Time Quality Assurance through continuous product monitoring and machine learning (e.g., 3D Laser Scanning, Remote Sensing).
Critical enabling technologies are structured into five broad categories of capability: Modeling and Simulation, Reality Capture, Extended Reality, Internet of Things, and Distributed Computing. Each is briefly described below with an indication of the specific technologies most relevant to the Digital Twin concept and an indication of developmental maturity.
Modeling and Simulation The M&S technology category includes the traditional tools for development and replication of physical entities and logical processes into digital representations. As the primary digital definition of the physical product, they are used to create the foundation of the Digital Twin representation. M&S must overcome two endemic barriers to improve value to the Enterprise. First, models and simulations are typically limited to static representations of the product and lose relevancy outside of their Enterprise domain. The value of these models is being extended across domain boundaries with more recent enabling technologies such as Reality Capture, Immersive Reality, Internet of Things, and Distributed Computing. Second, despite steady evolution of the individual tools, this category suffers from poor data integration across process silos. There are two strategies to resolve this issue: adoption of monolithic systems or adoption of data interoperability standards for open systems. Examples of both solutions can be found in industry and the consumer electronics market, but given the diversity of stakeholders in the Naval Shipbuilding Enterprise, use of open standards is the most practical solution.
Computer Aided Design: CAD tools have been in use for decades and continue to evolve to provide better design fidelity. Original CAD design processes were simply digitized adaptations of legacy hard copy design processes. The relatively recent emergence of Model Based Design methodologies has taken greater advantage of CAD capabilities to improve design quality and process efficiency. Adoption of data interoperability standards still lags and represents a technical barrier to broader Enterprise applications.
Computer Aided Engineering: CAE tools have also become commonplace in ship design. Although these tools continue to evolve, they are saddled with
persistent limitations. Primary among these is poor integration with CAD. Typically, CAE tools and processes cannot effectively use emerging designs to conduct engineering analyses without significant "grooming" of CAD models.
Computer Aided Manufacturing: CAM has become a standard tool for digital delivery of machine instructions to the shop floor. Though fully capable tools exist, this technology is also limited by a lack of data interoperability standards. A future challenge for CAM development is an ability to provide 3D Printing-friendly instructions.
Product Lifecycle Management: PLM has emerged in the past 10 years as the culmination of CAD, CAE, and CAM development. Through adoption and integration of capabilities from Enterprise Resource Planning (ERP) systems and Manufacturing Execution Systems (MES), PLM supports a digital record of the product throughout its lifecycle. For a PLM system to become the system of record for ships in the Naval Ship Enterprise, the technology must address portability challenges. Currently, PLM offerings are monolithic systems that require all Enterprise stakeholders to use a common system. A more effective approach is an open architecture PLM system that abides by data interoperability standards and can be adopted by a variety of stakeholders without creating digital silos.
Reality Capture With roots in the legacy photogrammetry field, Reality Capture is a rapidly growing capability that has significant implications for the growth of Digital Twins. For our purposes, Reality Capture is primarily the utilization of time-of-flight calculations for reflected light and imagery reconstruction to generate 3D surface models of construction facilities, ship components, systems, and compartments. Both Light Time of Flight and Imagery Reconstruction technologies are mature enablers for the implementation of Digital Twins. Artificial Intelligence is a highly capable but still maturing technology.
Light Time of Flight Scanning: Though different light sources are in use, the most common scanning systems utilize lasers and, to a lesser degree, structured light. Laser-based systems, particularly Light Detection and Ranging (LiDAR) systems, are used to record the surface geometry of objects by measuring the time of flight of laser pulse returns, as seen in the colorized point cloud depth map in Fig. 20. This information is supplemented with other location-gathering data from marker and beacon triangulation, or global positioning system (GPS) coordinates. Practical, accurate 3D scanning permits the reconstruction of Digital Twin visualization models where the digital thread of original models has been lost. Barriers to wide usage of laser scanning are quickly being lowered. Massive data sets of 3D coordinate point clouds can be rapidly gathered and efficiently rendered into textured mesh models that are compact and easily interpreted.
Fig. 20 Point Cloud with colors indicating time of flight from sensor (pmdtechnologies ag). (Pmdtechnologies ag. “World Class 3D Depth Sensing Development Kits.” Accessed April 7, 2021. https://pmdtec.com/picofamily/)
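As a simplified illustration of the time-of-flight principle behind LiDAR scanning, each pulse return can be converted to a range and then to a 3D point, as in the single-return toy model below; real scanners add calibration, beam steering, and scan registration, and the sample returns are invented.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_to_point(t_seconds, azimuth_deg, elevation_deg):
    """Convert a pulse round-trip time and beam angles to an (x, y, z) point
    relative to the scanner (single-return toy model)."""
    r = C * t_seconds / 2.0                          # one-way range in meters
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (r * math.cos(el) * math.cos(az),
            r * math.cos(el) * math.sin(az),
            r * math.sin(el))

# Hypothetical returns: (round-trip time, azimuth, elevation).
returns = [(66.7e-9, 0.0, 0.0), (80.1e-9, 15.0, 2.0), (120.4e-9, 30.0, -1.0)]
for ret in returns:
    print(tuple(round(c, 2) for c in tof_to_point(*ret)))
```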
Imagery Reconstruction: Where specialized laser scanners are not available, the use of mobile phone monocular cameras and 3D reconstruction software has proven to be an effective method of reality capture. Though the barriers to entry are much lower with this technology, accuracy is not as great as with LiDAR-based systems. Combinations of computer vision-based sensors with laser-based systems are starting to appear in autonomous vehicle development to improve reality capture fidelity.
Artificial Intelligence: The primary contribution of AI to the implementation of Digital Twins is in point cloud object recognition. AI provides automated training and classification of scans to isolate contiguous point cloud data and render them into specific, recognized objects.
Extended Reality XR is the technology category that encompasses new techniques for perceiving reality including Augmented Reality (AR), Virtual Reality
(VR), Mixed Reality (MR), and the supporting technologies that are critical to its delivery of information, as listed here. XR animates Digital Twins into dynamic tools, bringing models to the users regardless of location and movement. While modeling and simulation are primarily artifacts of the office workplace, XR extends these tools to the shop floor and theater of operations. Though XR is, in general, at an early stage of development, it is maturing rapidly. All the building blocks for widespread use are in place. A recent shift in development emphasis for XR from the Consumer Electronics market to industrial use will help address remaining barriers as described below.
Augmented Reality: The defining characteristic of AR is overlaying virtual objects (and text) onto "actual" reality. These virtual objects respond to real surroundings the way actual objects are expected to, with accurate and stable placement, realistic and responsive lighting, and 3D point-of-view occlusion with real objects. AR enables Digital Twins by delivering information from the digital thread to the users in their workplace. Further, AR facilitates the dissemination of information with an intuitive, 3D display controlled by user interactions in the workplace. Although barriers are rapidly falling, the most important remaining challenges are accurate and very stable placement of virtual objects, and effective hands-free delivery devices.
Virtual Reality: Counter to AR, the defining characteristic of VR is immersion of the user into a virtual environment. Although many of the VR technologies are common to AR, the emphasis is quite different, as are the Use Cases. With an emphasis on training Use Cases, the focus of VR development is on user interaction, particularly gesture-based commands, haptics-based information input, pose and eye movement tracking, and user comfort.
Mixed Reality: MR is a combination of AR and VR, and its primary contribution to the use of Digital Twins is enabling collaboration. MR uses multiple digital channels to immerse remote collaborators in a common synthetic, or synthetically enhanced, space. Most of the enabling technologies for MR are already available. The greatest barriers are availability of secure communications and sufficient wireless network capacity to deliver ultra-data-intensive communications.
Simultaneous Localization and Mapping: SLAM is a foundational set of technologies required for AR. Using optimization algorithms, SLAM combines mobile device (smart glasses, phones, and pads) sensor data to map its immediate area and determine its current location within that map. Primary sensors are currently cameras, Inertial Measurement Units (IMU), and embedded LiDAR sensors. As with the other XR enabling technologies, SLAM techniques and access to high
fidelity data are constantly improving and quickly approaching the spatial tolerances required in precision construction.
Mobile Computing: XR technologies are entirely synthetic and require ever increasing computing capacity in a form factor that supports shop floor and field use of Digital Twins. Increased computing density of mobile devices will drive the need for increased cooling efficiency.
Head Mounted Displays: Although much can be accomplished using XR-enabled Digital Twins on mobile devices, a breakthrough will be required in the availability of practical HMDs for hands-free operation. The challenge for effective HMDs comes in many forms. They must be economical for widespread use in the workforce, ergonomically correct for extended use, physically hardened for use in the shop and field, cyber-hardened for secure communications, and provide a broad field of view for safe operations. Though considerable, all these barriers are currently being addressed with significant commercial and government development investment.
Internet of Things The IoT is a global mesh of digital data, constantly expanding and updating. The pervasive information flow of the IoT animates the whole concept of Digital Twins and makes it a viable proposition for the entire Naval Ship Enterprise. The IoT transforms a Digital Twin from a static digital product representation to a living product genome. Of particular interest to Digital Twin proliferation is Remote Sensing technology.
Remote Sensing: The proliferation of connected (smart) devices, regardless of information channel, enables real-time updates to product/process status and configuration. Digital Twins reflect current conditions without costly human intervention, auditing, and update. Whether through various internet transfer protocols or Radio Frequency Identification (RFID) technologies, remote sensing is a Digital Twin enabling technology that is mature and rapidly expanding.
Distributed Computing A robust data infrastructure is essential for the Digital Twin paradigm to flourish. A Digital Twin foundation is an infrastructure that provides high-capacity, rapidly scalable computing combined with a low-latency, resilient set of networks.
Distributed Systems: A Digital Twin foundation requires a highly distributed systems and computing landscape in place of centralized, monolithic systems. Further, data sets will have to comply with open data transfer standards to securely navigate a decentralized infrastructure. Although these technologies are well established,
individual commercial vendors of Digital Twin applications will have to convert from centralized software models to more open, distributed architecture.
Wireless Networks: Most of the elements of a successful Digital Twin implementation already exist within the office environment. Proliferation of the Digital Twin paradigm depends upon high-bandwidth, secure, wireless access to digital data for the workforce in the field and on the move. Wireless capabilities are rapidly increasing, but more work is needed in maintaining strict security protocols.
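Before turning to barriers, here is a minimal, library-free sketch of the remote-sensing update flow described under Internet of Things above; the message format and field names are assumptions, not a Navy or vendor schema.

```python
import json
from datetime import datetime, timezone

# The twin's current view of selected shipboard systems (toy state store).
twin_state = {"main_engine": {}, "hvac_plant": {}}

def apply_sensor_message(raw: str) -> None:
    """Apply one remote-sensing update (JSON over any transport: HTTP, MQTT,
    an RFID gateway, etc.) to the twin's state, stamping the update time."""
    msg = json.loads(raw)
    twin_state.setdefault(msg["system"], {}).update(
        msg["readings"], last_update=datetime.now(timezone.utc).isoformat())

# Hypothetical telemetry from onboard sensors.
apply_sensor_message('{"system": "main_engine", "readings": {"rpm": 92, "lube_oil_temp_c": 64}}')
apply_sensor_message('{"system": "hvac_plant", "readings": {"supply_air_temp_c": 12.5}}')
print(json.dumps(twin_state, indent=2))
```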
9 Barriers to Broad Implementation of Digital Twins
There are several barriers to implementing Digital Twins in the Naval Ship Enterprise, many of which are discussed below in this section.
Product Model Portability Several US Navy programs of the last 40–50 years, FFG 7, CG 47, DDG 51, DDG 1000 combatants, and VIRGINIA class submarines, have operated with a lead yard/follow yard arrangement in which the lead yard typically develops the design and both yards construct ships to that design. Since production methods and work instructions vary from yard to yard, it is necessary for both yards to maintain their own models. The modeling effort has evolved such that in the DDG 51 program, for example, each shipyard currently uses a different software tool for the 3D CAD modeling and drawing development. That evolution is traced below:
• 1980s – In the DDG 51 destroyer program, led by Bath Iron Works, the initial design and drawings were developed manually. When 3D CAD modeling was introduced, the plan was to have the same software used throughout the shipbuilding domain for modeling, viewing, and drawing development.
• 1990s – Continuing in the DDG 51 program, the lead and follow shipyards and the Navy agreed to use the same software tool with a common model. This required "large data pipelines" to be established between shipyards and the Navy customer. Large amounts of data, and the limited capacity of the systems at that time, required co-location of shipyard and government teams to review and approve the design and engineering.
• 2000s – In the DDG 1000 program, the lead and follow shipyards used the same software and agreed to a common modeling standard when developing the design. The yards divided the ship, with each detailing a different zone. The common modeling eventually allowed Bath Iron Works to take over the design, fabrication, building, and assembly from the other shipyard by using the digital model to extract the required information.
• 2010s – When the Navy restarted DDG 51 production in 2012 after an intended cessation, both shipyards elected to use their own design software suite to develop engineering and design. The updated geometric models were reviewed
jointly with the Navy customer, and the 3D arrangements of the major and most minor equipment and systems were reviewed and approved. Differences between the two shipyard designs/models were noted and documented. Therefore, the need to transfer product model data across CAD suites was solved by passing model extracts between the shipyards.
Transfer of Technical Data As described above, the issue of data transfer initially was technical, as there was not enough capacity in computer hardware and network systems to store the data. As computing speed has increased and network capacity and memory cost have become more favorable, the issue is now more driven by requirements such as ITAR (International Traffic in Arms Regulations), and protection of proprietary data and intellectual property (IP), the engineering and other data used to develop the item.
Transfer of Technical Data Across International Borders For the US and international navies, there are issues with transferring data out of the home country. Special licenses are required which affect what data can be transferred and the details of such data. If international vendors are used, delivery of data concerning requirements to those vendors may be severely restricted.
Limited Wireless Network Coverage The most compelling Digital Twin applications in the naval ship enterprise are in manufacturing facilities or onboard ships. Both environments depend on broadband wireless connectivity that can be difficult to achieve.
Cyber Security of Classified Data and Knowledge Comprehensive and responsive models that comprise Digital Twins contain significant amounts of classified data. Cyber security of wireless networks and of data broadly disseminated to the enterprise lags the demand for collaboration.
Lack of Comprehensive Model Configuration Management Model proliferation within the enterprise is a problem that throws doubt on the pedigree of master data. Comprehensive and compulsory configuration management must be established throughout the enterprise.
Transfer and Re-Use of Models across Design and Operational Phases Initial design models developed for study, planning, or early development purposes are often discarded as the ship design moves forward into later phases. These models should be refined over time as the design progresses, and designers need to ensure that such models are used and viewed, and that creation of "local" interim models/products is avoided. In addition, naval ship funding, in the US especially, is subdivided into 'acquisition' and 'operation' categories, suboptimizing design decisions to benefit one or the other phase of a program. One result relative to Digital Twin implementation is that overall ship-wide 3D CAD models, which could be the basis for an operational Digital Twin, are not delivered to the ship operators.
Legacy Supply Chain Data Digital Twins did not exist in the early 1980s. Vendors typically provided paper drawing packages. Later, vendor data was provided as electronic files, often as PDFs, so that the vendor could protect their IP. With the broad use of 3D CAD, vendors model their equipment and provide "pictures", or hard copy drawings as required. Today, vendors can provide digital representations of what they are building. The presence of, and reliance on, data from those 'pre-twin' times must be overcome.
Proprietary Software/Data Format/Lack of Modeling Standards Digital Twins are compilations of multiple models and model types. For diverse models and data sets to work in concert, there must be modeling techniques that are common and enforced throughout the enterprise. CAD tools may be proprietary, but the objects within the CAD system must become agnostic among software suites, at least when used for visualization and reviews. There are lightweight viewing formats10 that allow non-CAD users to view the status of the Digital Twin. All users need to agree on a data format that can be used throughout the enterprise. Within the shipyard, all parties need to define consistent modeling standards that work throughout the shipbuilding domain, to avoid rework and the development of multiple models of the same "object" as the object/model is refined. Vendors may use proprietary software, but their models must be viewable in sufficient detail to work in a Digital Ship Twin. The challenge is to protect the proprietary software and engineering details that were used to develop the design.
Data Rights Requirements for design data are becoming more rigorous, as the US and international navies must maintain ship equipment and systems for many years, often for longer than vendors maintain a product line. Currently, vendors can provide 3D models of equipment items, which allow placement in a model and protection of the engineering behind the part, but do not provide sufficient information to maintain or rebuild the item. For shipyard-developed systems and vendor-provided equipment, the challenge is to provide adequate data to work in the Digital Ship Twin, satisfy customer maintenance requirements, and protect the engineering IP behind the scenes.
Shipyard Culture The shipyard culture needs to accept the future that will come with the Digital Ship Twin. Shipyard leadership will need to establish the benefits of the Digital Twin for the shipyard and create a culture that drives to success. The challenge is overcoming intellectual inertia to seriously consider disruptive technological investment in an historically conservative industry.
10. Holistic Ship Design – How to Utilise a Digital Twin in Concept Design Through Basic and Detailed Design; t-H Stachowski and H Kjeilen, Digitread AS, Norway; International Conference on Computer Applications in Shipbuilding 2017, 26–28 September 2017.
Incompatible Strategic Technology Development Naval shipbuilding is a unique market characterized by the heavy influence of a single government customer. The Navy mimics market pressures with purchasing policies designed to balance protecting the nation's design and manufacturing knowledge while encouraging competition among limited providers. Industry response to these policies encourages near-term cost cutting and discourages industry-developed innovation. Industry technology development is sporadic, poorly funded, and uncoordinated across the enterprise.
Lack of High Accuracy Digital Twin Delivery Devices Many Digital Twin applications require dimensional accuracy at the millimeter scale. Such tolerances are difficult to achieve with technologies currently available.
10 Business Model Implications of Using Digital Twins
The impact of Digital Twins on the way the shipbuilding business is conducted will increase and accelerate over the coming years. At this point we have described the applications of Digital Twins, technology enablers for their implementation, and barriers to their use. This section and the next describe modifications to current business processes that will likely be required to take full advantage of, and realize the benefit of, Digital Twins. Dealing with the barriers discussed in Sect. 9 has significant implications for shipbuilding business models. Additional implications are presented in this section.
Motivate Twin Usage from the Top Industry leaders at shipyards and in customer organizations alike must believe in Digital Twin technologies and drive business units to their use. Without leadership commitment, the inertia described in Sect. 9 will obstruct success.
Employ 3D CAD Modeling from the Beginning of Ship Design This is likely the easiest implementation strategy, since current ship design tools are already in place to allow early implementation of 3D modeling. Adopting a strategy of putting everything in the design into the model will enable the entire collaborating design team to visualize the state of the design as it develops and will enable confirmation that the design meets spatial and volumetric requirements.
Train all Users and Stakeholders To take full advantage of Twin use, the entire design team, including planning, manufacturing, and procurement members, must be trained in using visualization tools so all can see the state of the design and extract data relevant to their roles in it.
Support Delivery of Models to the Customer and Classification Societies Business systems should be established such that the product of design is the Twin. It is the collaboration tool for the design team and should be for approvals by the customer and regulatory bodies as well. As the design (Digital Ship Model) is developed, both organizations will be involved in its review and approval. The construction and testing of the ship will be managed similarly. Admittedly, with all the analytical files that support a ship design, this will be a massive amount of data, so achieving this goal will remain aspirational until data storage and transmission constraints are eased. At completion of the contract, the shipyard should deliver a Digital Twin of the ship along with the ship itself, to be used to support operation and maintenance activities.
Create an Environment for Collaboration The importance of collaboration and the abilities enabled using Twins has been established in earlier sections. The role of industry leaders in facilitating collaboration, however, cannot be overstated, since the tendency of technical people is more toward independence.
Establish Ownership of Data Part of the discipline to standards and processes is to avoid proliferation of questionable data, meaning that responsibility for items of data in the ship Twin is limited to stakeholders on the team.
Encourage the Navy to Manage the Entire Ship Life Cycle Cost In the Navy business, the purchase, building, and delivery of new warships is handled by one organization; another organization handles the maintenance of the delivered ship. It would be better if these processes were combined and integrated.
Develop Systems to Manage Data Releases The pitfalls of sharing proprietary or otherwise sensitive data through Twin releases have been explained in Sect. 9. This will continue to be an issue that must be addressed between suppliers and shipyards, and between shipyards and ship owners.
Build Trust in the Data and the Technology An incremental approach to implementing Digital Twins is a foundational premise for success. As stakeholders gain familiarity with Twin use, reliance on the data and the systems will take hold.
Embrace the Internet of Things The number of connected devices continues to grow at an exciting and somewhat alarming rate. Developing an understanding of how to take advantage of that connectivity to enhance shipbuilding production and product performance will be the key to overall success.
11 Recommendations for Future Development

The proliferation of the Digital Twin paradigm presents challenges that require careful planning and a deliberate strategy. The first challenge is overcoming intellectual inertia to seriously consider disruptive technological investment in a historically conservative industry. A Digital Twin-based approach to shipbuilding is a paradigm change whose benefits must be understood in terms that are clear and relevant to industry leaders. For example, in the Shipbuilding domain the proponents of a Digital Twin strategy must make clear how the technologies directly mitigate, reduce, and even eliminate chronic shipyard challenges: rework, delay, and disruption. Specific examples and demonstrations of the following can be made using existing, partial Digital Twins.

Rework is the inevitable result of poor design, inaccurate planning, or poor production quality. The ripple effect of rework is devastating to budget and schedule, as corrective measures are never made as efficiently as the original plan intended. Design and Planning Digital Twin instances directly improve first-time quality by using model-based design and work instructions to eliminate human error in dimensioning and measuring. Further, Digital Twins provide real-time quality control to ensure products are error free before they are released to the value stream.

Delay is the result of missing information, material, or services in the workplace, as well as a byproduct of rework. It is an obvious source of inefficiency and drift from the build plan. Planning, Supply Chain, and Facilities Digital Twin instances ensure the shipbuilding plan is no longer static, but responsive to changing conditions and dynamically connected to the supply chain.

Disruption is the umbrella term for the chaos that is created when the shipyard drifts from the build plan. Shipbuilding is an interdependent and tightly orchestrated undertaking, and it is enormously difficult to regain the plan once the shipyard has deviated from it. Interactive Digital Twins of the shipyard facility, the shipbuilding plan, and the ship design provide tools to take informed corrective actions and regain the plan.

Once industry leaders understand the Digital Twin paradigm change and appreciate the potential it represents, the second challenge is building a practical development plan. Though Digital Twins offer clear value throughout the ship lifecycle, the nature of the Naval Ship Enterprise is not conducive to strategic, cooperative technology development. The Enterprise is not a coherent unit. Rather, it is composed of a broad, diverse series of cooperative, shifting, often competitive commercial and governmental stakeholders loosely bound through overlapping contracts and interdependent information. As shown in Sect. 4, each domain of the Naval Ship Enterprise is dominated by different primary stakeholder(s) with incentives to meet their own contractual obligations rather than cooperate across domain boundaries. Within this environment, significant investment decisions are often sub-optimized within the constraints of
the domain responsibilities. Multi-domain technology investments based on the combined interests of the Enterprise are difficult, if not impossible. In this environment, individual domain stakeholders are the active investors, with the Enterprise leaders in a guidance role. A good example in the US Navy ship enterprise is the separation of acquisition and operation budgets, with different cost stakeholders for each. Such a structure drives cost consideration to one or the other silo when the benefit of the entire enterprise should be the goal.

A Digital Twin investment and development plan requires an incremental strategy guided by Enterprise-wide governance. Individual investments and implementations must provide compelling value to the investing stakeholders within their domain while complying with strategic governance rules that ensure overall cohesion across the Enterprise. Two tools are offered to build and manage an incremental strategy: Enterprise Governance and Technology Investment Maps.

Enterprise Governance Establish and maintain Naval Ship Digital Twin Development Governance consisting of a set of clear development and implementation standards for the Enterprise. Governance would be enabled by incentives applicable to all stakeholders to encourage compliant investments.

Technology Investment Maps Develop tools to help identify and guide specific stakeholder investments in Digital Twin implementations based on highest return. Navigating the rapidly evolving, interrelated technologies of Digital Twins can be difficult and risky. Two types of prototype investment maps to help this navigation are proposed above, in Sect. 6. The analyses and recommendations, as well as the spreadsheets used to produce the maps, are offered as tools for each stakeholder to make their own determinations on Digital Twin investment potential; a minimal sketch of how such value scores might be tabulated follows this list.

• Domain Investment Map: Shown in Fig. 16, Digital Twin Use Case Application Value Plot Within Domain, this map highlights the most promising use cases for domain stakeholders to investigate specific investment opportunities.

• Enterprise Investment Map: Shown in Fig. 17, Digital Twin Use Case Application Value Plot Across Enterprise, this map highlights the most promising use cases for the Enterprise to offer incentives for Digital Twin development.

In Sect. 12, Next Steps, we propose examples of how these tools are used to create and pursue a practical, low-risk Digital Twin development plan.
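The authors' actual value tables are in Attachment (A) and the maps themselves in Figs. 16 and 17; the following is only a minimal, hypothetical sketch of how per-use-case domain and enterprise value scores could be tabulated and ranked to produce investment maps of this kind. The use-case names and scores are illustrative placeholders (loosely echoing the examples discussed in Sect. 12), not values taken from the chapter's analysis.

```python
# Hypothetical sketch: ranking Digital Twin use cases by domain and
# enterprise value, mimicking the Domain/Enterprise Investment Maps.
# Use-case names and scores below are illustrative placeholders only.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    domain_value: int      # value to the owning domain, e.g. 1 (low) .. 5 (high)
    enterprise_value: int  # value to the wider Naval Ship Enterprise

use_cases = [
    UseCase("Work instruction (shipbuilding)",  domain_value=5, enterprise_value=2),
    UseCase("Collaboration (shipbuilding)",     domain_value=5, enterprise_value=5),
    UseCase("Operations analysis (operations)", domain_value=4, enterprise_value=3),
]

# Domain Investment Map: order use cases by value to the domain stakeholder.
domain_map = sorted(use_cases, key=lambda u: u.domain_value, reverse=True)

# Enterprise Investment Map: order by value to the Enterprise as a whole.
enterprise_map = sorted(use_cases, key=lambda u: u.enterprise_value, reverse=True)

for u in domain_map:
    print(f"{u.name}: domain={u.domain_value}, enterprise={u.enterprise_value}")
```

In practice such scores would come from the spreadsheet analyses described above; the point of the sketch is only that the same scored list, sorted two ways, yields the two complementary maps.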
12 Next Steps

As noted in previous sections, the shipbuilding industry has unique characteristics that shape a successful Digital Twin development strategy. The ship lifecycle is exceedingly long, as much as 40 years, but it is also fragmented. The shipbuilding enterprise is compartmentalized into separate domains with different stakeholders,
responsibilities, and goals. To realize the full potential of Digital Twins, development strategies must be implemented at both the enterprise and domain levels.

Enterprise Strategies establish standards that articulate a common vision for all stakeholders and a path of convergence for individual developments and implementations. The enterprise strategy will include governance standards that encourage individual stakeholders to move towards the common vision and dissuade them from point solutions that provide little recurring benefit to the enterprise. For example, in the Naval Shipbuilding industry, the government is the executive stakeholder of the entire enterprise and is responsible for encouraging behavior that is beneficial to the enterprise. The Navy must develop strategies to credit stakeholder-level investments that comply with enterprise requirements. Encouragement will take the form of subsidizing implementation costs or offering recognition in future competitive procurements for enterprise-compliant technology upgrades.

Domain Strategies Within each domain, individual stakeholders have responsibility for specific roles as well as for the welfare of their particular entity. Though the Enterprise Strategy will influence individual decisions, business units and government agencies will invest in the developments that have the greatest and most immediate operational and business benefit.

In Sect. 11, we described the broad range of opportunities to expand the influence of Digital Twins to improve the efficiency, quality, and safety of the Naval Ship Enterprise. We also noted that some of the opportunities encompass greater risk than others and proposed methods to analyze the individual business cases and find the most valuable use cases for potential Digital Twin investments. These tools can also be used to assess potential value to the entire Naval Ship Enterprise to attract collaborative investments. Finally, investment risk can be anticipated and managed by identifying critical enabling technologies for the targeted use cases and assessing their state of development maturity.

In this section, we explore three examples of likely Digital Twin expansion to illustrate how the tools can be used to identify high-value use cases and potential Digital Twin development and implementation plans.

• Example 1, Work Instruction Use Case in the Shipbuilding Domain discusses a potential Digital Twin implementation. According to the authors' value assessments, this is an example of an application that has very high potential for the primary domain stakeholder, the shipyard, but relatively low value for the rest of the Enterprise.

• Example 2, Collaboration Use Case in the Shipbuilding Domain explores an implementation that is assessed as having high value for both the domain stakeholder and the rest of the Enterprise.

• Example 3, Operations Analysis Use Case in the Ship Operations Domain describes the use of Digital Twins to improve operator response to emerging, volatile conditions. Example 3 lends itself to increasingly beneficial contributions as the Digital Twin's analysis improves through machine learning algorithms.
Fig. 21 AR Based Maintenance Work Instruction (Source: Softability. “Augmented reality in remote support combined with smart expert portal helps in service and maintenance work.” Accessed April 7, 2021. https://softability.fi/en/blog/ar-in-remote-support-helps-industrial-maintenance-workers/)
Example 1, Work Instruction Use Case in the Shipbuilding Domain This implementation extends existing design, engineering, and planning models to generate 3D digital work instructions. An example of a Digital Twin-enabled work instruction is the installation or maintenance of a fluid system component: Fig. 21 illustrates a Digital Twin being used via Augmented Reality-powered smart glasses to aid a mechanic in the bolting sequence for a filter unit change. Installation instructions such as bolting sequences, fastener torque requirements, and filter orientation are presented in 3D and in place, and are much more informative than legacy, hard-copy methods. As illustrated above, Digital Twin-based work instructions can significantly increase efficiency and quality when extended throughout the build and maintenance domains. Digital Twins eliminate reliance on archaic, error-prone practices of design dimensioning and deck plate measuring for component installation. The benefit is particularly high in naval combatants, with their extensive system interconnectivity and extreme outfitting density.

• Value Assessment: In Attachment (A), “Digital Twin Application Analysis Table”, work instruction (use case 3.1) is described as a dominant activity in the shipbuilding domain. The authors designate it as a critically high-value target implementation. The purpose of work instruction is central to the domain: turning design information into actionable, efficient, and safe direction for supply, distribution, construction, maintenance, and test activities.
• Technological Risk/Cost Assessment: Having determined that a Digital Twin implementation in this case will provide superior benefit, the corresponding risk and collateral cost must be assessed. As noted in Attachment (A), a work instruction Digital Twin instance depends primarily upon four types of models: Spatial, Visualization, Kinematic, and Process Analysis models. Apart from Visualization, these model types are already in use and considered mature technologies. Therefore, risk is largely focused on implementation and utilization of Visualization models. Visualization models are dependent upon several enabling technologies, as described in Sect. 8, Digital Twin Enabling Technologies, specifically Modeling and Simulation, Reality Capture, Extended Reality, and Distributed Computing. Modeling and Simulation and Distributed Computing are mature technologies that constitute relatively low risk. Reality Capture and Extended Reality are the focus of technological risk assessment in this scenario. Reality Capture technologies are essential to bridge the considerable gap between the “as designed” representation of the product and the actual, emerging “as built” configuration. Extended Reality is required by the Digital Twin to digitally deliver work instructions to frontline workers and so realize the anticipated efficiency and workplace safety gains. Though both are still developmental, these technologies are already productive solutions and are evolving rapidly. Nevertheless, a careful examination of realistic capabilities in the implementation time frame must be conducted, along with sensitivity analysis on the cost/benefit calculations. For example, it must be determined whether the cost/benefit analysis remains positive if head-mounted displays are not available for another 3 years and workers are constrained to handheld tablets for their Digital Twin-derived work instructions (a minimal sketch of such a sensitivity calculation follows this list).

• Implementation Risk/Cost Assessment: As noted above, two of the four enabling technology categories, Modeling and Simulation and Distributed Computing, are considered mature. Though they do not represent technological risk, they could be sources of implementation risk that must also be considered. The implementing stakeholder must evaluate the compatibility of its existing systems, infrastructure, and user base to develop, maintain, and utilize a Digital Twin instance. Implementation risks are non-trivial challenges. Many shipyard infrastructures are patchwork systems built over time with little thought given to data sharing. They can be the product of sub-optimized technology investment decisions producing data silos that are not easily overcome. Further, fundamental changes such as Digital Twin adoption carry implementation risks of their own, such as the leadership challenge of converting a workforce to a new set of tools, processes, and skills.

• Enterprise Investment Opportunities: As noted in Attachment (A), the work instruction use case in the Shipbuilding domain offers lower value to the Enterprise than to the domain, scoring only a “2” and shown as a red column in Fig. 22. Though this Digital Twin is a data-rich solution for the domain, only documentation of the as-built configuration of the ship is of value to the enterprise. Most shipbuilding activities are unique to the domain and have little relevance beyond it.
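To make the sensitivity question above concrete, the following is a minimal, hypothetical cost/benefit sketch. All figures (annual savings, device and integration costs, planning horizon) are invented for illustration and are not taken from the chapter or Attachment (A); the point is only the structure of the comparison between head-mounted displays arriving late and tablet-only delivery in the interim.

```python
# Hypothetical sensitivity sketch for Digital Twin work-instruction delivery:
# compare net benefit if head-mounted displays (HMDs) slip 3 years and workers
# use handheld tablets in the interim. All numbers are illustrative only.

def net_benefit(annual_saving, device_cost, integration_cost, years):
    """Undiscounted net benefit of a delivery option over the given period."""
    return annual_saving * years - device_cost - integration_cost

HORIZON = 5  # years of the investment case

# Scenario A: HMDs available immediately.
hmd_only = net_benefit(annual_saving=1.20e6, device_cost=0.40e6,
                       integration_cost=0.50e6, years=HORIZON)

# Scenario B: tablets for 3 years, then HMDs for the remaining 2 years
# (tablets assumed to deliver a smaller share of the efficiency gain).
tablet_phase = net_benefit(annual_saving=0.70e6, device_cost=0.15e6,
                           integration_cost=0.30e6, years=3)
hmd_phase = net_benefit(annual_saving=1.20e6, device_cost=0.40e6,
                        integration_cost=0.10e6, years=HORIZON - 3)
delayed_hmd = tablet_phase + hmd_phase

print(f"HMD from day one : ${hmd_only/1e6:.2f}M net benefit")
print(f"HMD delayed 3 yrs: ${delayed_hmd/1e6:.2f}M net benefit")
# With these illustrative inputs the case stays positive in both scenarios;
# the gap between the two totals quantifies what the HMD delay is worth.
```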
Fig. 22 Value to Enterprise of Work Instruction use case in the Shipbuilding domain
Fig. 23 AR based collaboration enables assistance from remote technical staff
Example 2, Collaboration Use Case in the Shipbuilding Domain The purpose of the collaboration use case is to release the content of a Digital Twin from its specific authoring application for effective communication across the entire Enterprise. Fig. 23 illustrates a mechanic troubleshooting a hydraulic installation while being guided by a remote member of the technical staff.
The specific use of the collaboration Digital Twin shown here is rapid resolution of a problematic shipboard installation. Quick and correct in-field resolution is required to avoid production delay and disruption. A 3D view of the applicable design models is overlaid on and aligned with live video of the incident. These views, supplemented with visual and audio commentary, are broadcast to remote engineering experts for real-time collaboration.

• Value Assessment: In Attachment (A), “Digital Twin Application Analysis Table”, the authors assert that the potential value of Use Case 10.2 is immense. Activities in the Shipbuilding domain create a vast amount of data to define the ship and the network of actions required to produce the ship. Collaboration among the large team of domain stakeholders is a key to efficient shipbuilding, but it has historically been difficult due to separation in time and location. A Digital Twin compresses gaps in time and space by providing instant access to essential technical and work details.

• Technological Risk/Cost Assessment: As indicated in Attachment (A), a collaboration Digital Twin has a broad set of demands and is built upon most of the available models: Textual, Analytical, Spatial, Visualization, Dynamic, Kinematic, and Process Analysis. Except for Visualization, all of these model types are already in wide use. However, they are not ready for the collaboration use case without fundamental technological upgrades and the ensuing risks. Depending on the state of the shipyard’s infrastructure and product definition practices, the technology challenges may be difficult and pervasive. In addition to enabling Visualization models (described in Example 1, above), a collaboration Digital Twin depends upon models that are compatible with each other and prepared for real-time use by mobile devices on wireless networks. The models must be upgraded to mobility qualities: compact, application agnostic, responsive to real-time data updates, and compliant with data integration standards (a minimal sketch of such a model package description follows this list).

• Implementation Risk/Cost Assessment: As noted in Example 1, technological risk is not the only challenge to consider for a Digital Twin investment. The Digital Twin models must operate within a computing environment that supports connected models. High-bandwidth, secure wireless networks must be available throughout the shipyard and within the evolving ship. The workforce must have access to, and be trained to properly use, hardened mobile devices. Finally, a robust data governance scheme must be enforced to ensure modeling standards support interoperability.

• Enterprise Investment Opportunities: Attachment (A) indicates that collaboration (Use Case 10.2) offers high value to the enterprise as well as to the domain. This is illustrated by the column highlighted in red in Fig. 24. It is reasonable to expect that the Navy, on behalf of the enterprise, would encourage investment in the implementation of such a Digital Twin.
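As a purely illustrative sketch of the “mobility qualities” listed above, the structure below shows one hypothetical way a model package might declare the attributes a mobile collaboration client could check before loading it. The field names, limits, formats, and standard cited are invented examples, not taken from any shipyard system or from the chapter.

```python
# Hypothetical descriptor for a "mobility-ready" model package used in
# AR-based collaboration. Fields and limits are illustrative placeholders only.
from dataclasses import dataclass

@dataclass
class MobileModelPackage:
    model_id: str
    format: str                # application-agnostic exchange format, e.g. "glTF"
    size_mb: float             # must stay compact enough for wireless delivery
    last_synced_utc: str       # freshness of the data behind the model
    integration_standard: str  # data-integration standard the package complies with

    def ready_for_field_use(self, max_size_mb: float = 250.0) -> bool:
        """Cheap pre-flight check a mobile client could run before download."""
        return self.size_mb <= max_size_mb and self.format in {"glTF", "USDZ"}

pkg = MobileModelPackage(
    model_id="hydraulic-unit-3A",
    format="glTF",
    size_mb=180.0,
    last_synced_utc="2021-04-07T10:30:00Z",
    integration_standard="ISO 10303 (STEP)",  # example standard, an assumption
)
print(pkg.ready_for_field_use())  # True
```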
Fig. 24 Value to Enterprise of Collaboration use case in the Shipbuilding domain
Example 3, Operations Analysis Use Case in the Ship Operations Domain Building on the example of General Electric’s Digital Transformation – Digital Twins presentation by Dr. Colin Parris, VP Digital Research, GE Global Research Center,11 Digital Twin use and benefit can be extended beyond the shipbuilding domain to ship operations. An example of the application is a combat scenario in the naval ship enterprise. In this scenario, upon receiving a damage control report, a commanding officer is presented with a Twin of his/her ship indicating damage conditions. The Twin performs the necessary firefighting, flooding, stability, structural integrity, and human condition analyses and presents options on how to react with counter-flooding and other damage control actions. In addition, the Twin confirms the reported symptoms of damage as they occur and evaluates potential causes to confirm that the correct problem is addressed.

• Value Assessment: With reference again to Attachment (A), this example is described as Use Case 7.2, the Operations Analysis Use Case in the Ship Operations Domain. The value of such a Digital Twin application, shown by the red highlighted column in Fig. 25, is clear: it enhances the safety of the ship and her crew, perhaps saving the ship from loss. The current emphasis on autonomous ship development will drive a stretch from expecting a human decision on damage control measures to informing the commander of what action the ship itself took, closing the loop with a “self-healing” approach. This reactive Digital Twin application will, over time, evolve into a proactive mode to evaluate the susceptibility/vulnerability chain, directing ship maneuvers and countermeasure use when faced with incoming threats as well.
11 How A 10-Minute Conversation With A Machine Saved $12 Million | GE News.
Fig. 25 Value to domain of Operations Analysis use case in the Ship Operations domain
• Technological Risk/Cost Assessment: Developing damage control measures has been fundamental to military ship design for decades. Navy crews are provided with all the diagrams and data necessary to take proper action when their ship is damaged. However, automating the process of sensing damage, analyzing the situation, and recommending and implementing the appropriate response is a significant task accompanied by development cost and risk.

• Implementation Risk/Cost Assessment: Similar to the cases discussed above, implementing such an automated approach demands fast and accurate assessments and data transfers. Again, reliance on such a system will be attained only after substantial testing and practical proofs.

• Enterprise Investment Opportunities: Given the current demands on naval crews, the high training load required, the difficulty of retaining trained crews, and the need to put more ships in service quickly, such an application provides moderate value to the Enterprise.

Each example of Digital Twin development presented here demonstrates the immense potential for performance improvement based on leveraging existing tools and datasets. They also demonstrate the risks inherent in any significant technology development and how objective, logical analysis can assess and mitigate the
challenges. Ultimately, we offer tools and methodologies to establish a practical roadmap of Digital Twin expansion in the shipbuilding industry.
13 Conclusions

Digital Twins are poised for a significant increase in deployment in the Naval Ship Enterprise. The Naval Ship Enterprise and its traditional methods of ship design and construction have been described. The Digital Twins in use, their advantages, and the technologies available today are all in place to build and operate the ships of the future. The ‘siloed’ uses of digital models that have benefited the industry for so long are at a point where a major leap characterized by integration and collaboration through Digital Twin implementation will transform the shipbuilding and ship operation business. This chapter has established the value of Digital Twin implementation in the following primary areas:

• Improvement of the product by enabling collaboration among stakeholders as requirements are developed
• Compression of the long shipbuilding/ship operation lifecycle
• More informed decision making through collaboration
• More effective risk management of product and project complexity
• Converging design information technology with facility and ship operational technology

In addition, tools and processes are presented in Sects. 6 and 11 and in Attachment (A), along with ‘worked examples’ in Sect. 12, for corporate decision-makers to use when assessing Digital Twin implementation costs and benefits for their own organizations. What remains is for the Naval Ship Enterprise stakeholders, customers, and builders in this discussion to assess the outlined benefits and values of Digital Twin implementation and take the first step, small though it may be, toward integrating efforts, models, and workforces, and improving ship production and operation.

Russ Hoffman is a 1974 graduate of Webb Institute of Naval Architecture on Long Island and holds a BS in Naval Architecture and Marine Engineering. He spent his entire career designing ships and ships’ equipment for both commercial and naval customers. Upon graduation, and after completing several months sailing on a merchant container ship, Mr. Hoffman joined Peck and Hale, Inc., a manufacturer of cargo securing equipment, and spent six years designing securing systems primarily for the commercial container ships of U.S. and European owners. During his tenure he developed a computer model for analyzing ship motions and applying classification society rules to securing system design. Subsequently joining the New Jersey office of J.J. Henry
Co., a naval architecture firm, he supported U.S. Navy and commercial customers in the designs and modifications of minesweepers, tankers, coal carriers, and LSD-41, the lead ship of a new class, and participated in the transition from full-scale compartment mock-ups to computer-aided 3D modeling. Mr. Hoffman’s 36 years at Bath Iron Works in Maine saw the shipyard evolve from pencil, paper, and calculator analyses, and ink on mylar drawings and overlays, to 3D CAD modeling of all ship structure and systems. His tenure at the yard included shipyard support naval architecture for end launchings and inclining experiments, and management of the Weight Control, Naval Architecture, Hull Engineering, and Systems Integration Sections of the Engineering Division. As Technical Director for BIW’s Ship Concept Design Group, Mr. Hoffman led the engineering and design efforts for several early-stage designs, most notably the BIW entries in the U.S. Coast Guard Offshore Patrol Cutter and U.S. Navy FFG(X) Frigate competitions. Other credentials included a PE license from the State of Delaware and a US Coast Guard Third Assistant Engineer License. Russ was made a Fellow in the Society of Naval Architects and Marine Engineers in 2016 and retired from Bath Iron Works in July 2020.

Paul Friedman has in-depth experience in naval ship design and construction, information technologies, and shipbuilding technology research and development. After earning a BS degree in Mechanical Engineering from the University of Virginia, Mr. Friedman worked for the US Navy developing propulsion systems for naval combatants. He then joined shipbuilder Bath Iron Works, where he was Program Engineering Manager for the DDG 51 Arleigh Burke Class Destroyer design and construction and manager of Computer Aided Design development and implementation. Mr. Friedman then served for 10 years as Information Technology Director at IDEXX Labs, a developer of veterinary software, pharmaceuticals, and medical devices. He returned to Bath Iron Works to initiate and direct shipyard research and development with a focus on computer vision, augmented reality, 3D printing, and system dynamics modeling of complex systems. Mr. Friedman retired from Bath Iron Works in 2020.

Dave Wetherbee graduated from Carleton College with a B.A. in History in 1976. He started his career in shipbuilding as a Shipfitter at Fraser Shipyard in Superior, Wisconsin. He returned to school to pursue an engineering degree and in 1984 graduated from the University of Michigan with a B.S.E. in Naval Architecture and Marine Engineering. Dave was employed at Bath Iron Works from June 1984 to July 2020, when he retired. While at BIW, he worked as Systems Engineer, Hull Outfit Engineer, Functional Engineer, and Program Manager on a variety of programs. He retired as a Principal Engineer in Hull Outfit Engineering supporting both the DDG 51 and DDG 1000 Programs. Dave has been a member of ASNE (American Society of Naval Engineers) since 1988 and SNAME (Society of Naval Architects and Marine Engineers) since 1983. He has served the
ASNE-Northern New England Section as Chairman, Vice-Chairman, Treasurer, Programs Lead, and Member of the Symposium and Symposium Papers Committee. Dave is currently the ASNE-NNE Section Treasurer. Dave (Maine PE # 7831) has been a Professional Engineer in Maine since 1994.
Digital-Age Construction – Manufacturing Convergence

Sir John Egan and Neculai C. Tutos

S. J. Egan, British Industrialist, London, UK
N. C. Tutos (*), Klaro Consulting, Houston, TX, USA
Abstract Construction, the largest industry in the world, contributes 13% of the global GDP and is responsible for the infrastructure that supports the whole economy. It is troubling that this great industry faces decades of stagnant productivity and an alarming level of delays and budget overruns in the delivery of projects. Construction has faced decades of a growing construction-manufacturing gap in business performance. It all comes down to one significant difference that explains this condition: the level of supply chain integration along the whole project process. Advanced digital technologies have enabled an extraordinary level of business process integration. Construction continues to be the least digitized industry; its digitization level is a little above hunting and agriculture. The dominance of the project-centric approach to business is the main obstacle to construction business transformation. This document is about the way to overcome this obstacle.

Keywords Supply chain integration · PLM · BIM · Digital Twin · Components and processes repeatability · Construction-manufacturing convergence · Digital-age construction · Digital-age engineering education · Manufacturing-style production systems · PLM versus BIM · Product and Process Digital Twins
1 Preamble: Rethinking Construction Is No Longer an Internal Affair

Construction: Its Greatness and Its Failures Construction, the largest industry in the world, contributes 13% of the global GDP and is responsible for the infrastructure that supports the whole economy. For centuries, construction engineers proved capable of dealing with the impossible in building great roads, bridges, houses,
cathedrals, and factories. Construction engineers made possible the Suez Canal, the Panama Canal, the Hoover Dam, and, more recently, the Channel Tunnel, the Eurotunnel.
It is troubling that this great industry faces decades of stagnant productivity and an alarming level of delays and budget overruns in the delivery of projects. Large projects across asset classes typically take 20% longer to finish than scheduled and are up to 80% over budget [5].

Rethinking Construction Is Not an Internal Affair Troubles in delivering construction projects translate into much bigger problems for the whole economy. Governments worldwide suffer delays, substantial cost overruns, and low quality in delivering national infrastructure projects. The 2021 Report Card for America’s Infrastructure rated America’s infrastructure at C and D levels versus the desirable A level [10]. McKinsey & Company estimates that $57 trillion will have to be invested from 2013 to 2030 to meet the worldwide minimum demands for infrastructure programs. If construction productivity were to catch up with the entire economy, the industry’s value-added could rise by $1.6 trillion annually [4].

The Proven Way to Digital-Age Business Performance Decades of successes in all industrial sectors proved that digital-age supply chain management, combined with the digital way of exploiting the repeatability of components and processes, is the way to revolutionize the making of complex products. This is digital-age business process integration through “disintegration”: opening the business to embrace long-term partnerships with suppliers of components and services.

Why Not Already Done in the Construction Industry? The dominance of the project-centric approach to business is the main obstacle to rethinking the construction business process. Viable business transformation is a long-term cross-project process. According to McKinsey & Company, if construction were to depart from entirely project-based approaches to more consistently employ a manufacturing-like system of mass production, with much more standardization and manufacturing of modules and parts in factories off-site, the productivity boost could be an order of magnitude greater [4].

Construction-Manufacturing Convergence Is an Unstoppable Trajectory After more than a century of progress in the off-site fabrication of components and assemblies, most construction projects are, to a substantial extent, manufacturing projects. All fundamentals of digital-age manufacturing apply to construction. According to
a recent study, a 5–10x productivity boost is possible for some parts of the industry by moving to a manufacturing-style production system [4].

Unleashing the Extraordinary Power of Repeatability, of Economies of Scale A long-term cross-project business transformation strategy unleashes the power of the repeatability of construction components and work processes, the extraordinary power of economies of scale. The belief that no repeatability exists in construction results from looking for it in the wrong place. No repeatability exists at the project level. A very high level of repeatability exists at the level of building components and work processes. The Egan Report made known that an estimated 80% of construction components and work processes are repeatable [1].

Construction Business Digitization by Project Does Not Work Construction faces decades of falling behind the digital economy in business performance. Construction continues to be the least digitized industry; its level of digitization is slightly above the levels reached in hunting and agriculture [5]. Successful business digitization is a cross-project process built along the repeatable components and work processes. Starting from “pilot digitization projects” does not work.
Construction-Manufacturing Convergence Is Becoming a Desirable Business Condition

“KBR has taken a manufacturing and commodity approach to facility design to enable speed to market and maximize value.” KBR, Inc.

“Assembly required: Construction is the new manufacturing,” Charles Towers-Clark, Forbes, January 14, 2020.

“The construction industry is shifting to manufacturing and mass production,” Trevor English, March 7, 2020, interestingengineering.com.

“Refabricating architecture, how manufacturing methodologies are poised to transform building construction” [10].

“Disruptive examples show how manufacturing and construction are converging,” Mark Smith, March 16, 2018, Redshift Autodesk.

“A 5–10x productivity boost is possible for some parts of the industry by moving to a manufacturing-style production system.” McKinsey Global Institute, “Reinventing construction: A route to higher productivity,” February 2017.
Fig. 1 1995–2015, productivity evolution in the USA
2 Construction Faces Decades of Falling Behind the Digital Economy

In the USA, from 1995 to 2015, the compound annual productivity growth rate was +1.76% for the whole economy and −1.04% for the construction industry [4], Fig. 1. Two of the world’s most advanced economies, Germany and the USA, faced decreasing construction productivity from 1989 to 2009, while total-economy productivity rose about 40% in both countries [7], Fig. 2. A study covering 41 countries that generated 96 percent of the worldwide GDP determined that productivity growth from 1995 to 2014 was 3.6% per year in manufacturing versus 1% per year in construction [4], Fig. 3. According to a McKinsey & Company study, a 5–10x productivity boost is possible for some parts of the industry by moving to a manufacturing-style production system [5].
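As a back-of-the-envelope illustration (our own arithmetic, not a figure from the cited studies), compounding those annual rates over the 20-year 1995–2015 period shows how wide the cumulative gap becomes:

```latex
% Cumulative effect of the 1995--2015 compound annual growth rates (illustrative)
\[
(1+0.0176)^{20} \approx 1.42 \quad \text{(whole economy, about } +42\%\text{)}
\qquad\text{versus}\qquad
(1-0.0104)^{20} \approx 0.81 \quad \text{(construction, about } -19\%\text{)}
\]
```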
3 Construction-Manufacturing Convergence, What Does It Mean?

3.1 It All Comes to One Question

Construction-manufacturing convergence does not mean an equal level of business performance or productivity. Construction will never reach the level of business performance of the automotive and aerospace sectors. Undeniably, construction-manufacturing convergence is the natural future of the construction industry. After more than a century of progress in the off-site
Fig. 2 1989–2009 productivity gains in the USA and Germany
Fig. 3 1995–2014 productivity gains, construction versus manufacturing, in 41 countries
fabrication of components and assemblies, most construction projects are, to a substantial extent, manufacturing projects. Construction sites are assembly sites for the final assembly of the manufactured structures, systems, equipment, and components. Calls for learning from manufacturing are not new. The Egan Report, the Rethinking Construction report [1] requested by the British Government, was the first strong call. At almost the same time, the Civil Engineering Research Foundation
(CERF), with Dassault Systemes’ support, conducted an extensive “learning from manufacturing” initiative. CERF membership included more than one hundred of the best American construction organizations. These days, McKinsey & Company, one of the most successful management consulting organizations, is a powerful voice calling for “reinventing construction.” According to a recent McKinsey & Company study, a 5–10x productivity boost is possible for some parts of the industry by moving to a manufacturing-style production system [4].

It Comes to One Single Question What are the construction-manufacturing differences that explain decades of a growing gap in business performance? The way to answer this question is to figure out what digital-age manufacturing is and what explains the extraordinary progress reported by the manufacturing sectors.
3.2 The ABC of Digital-Age Manufacturing

The centerpiece of the business of making complex physical products is collaboration with an extensive network of suppliers within the supply chain. The supply chain consists of an immense number of suppliers of engineering and design services and suppliers of parts, systems, and assemblies. Advanced digital technologies enable a revolutionary transformation in business integration, including real-time collaboration with suppliers in product planning, engineering and design, fabrication, assembly, and after-sale services. These days, the makers of complex physical products make almost none of the product’s parts themselves. They figured out that suppliers could make them better, faster, and at a lower cost. The supply channel for a large project is distributed worldwide, Fig. 4.
Fig. 4 Business integrator organization and its worldwide distributed supply chain
Business integrator organizations are the keepers of the know-how that makes this business model possible: (1) know-how in project planning, product engineering, final product assembly, product operations, maintenance, and after-sales services; (2) know-how in supply chain management and long-term collaboration with suppliers based on clear project risk and profit sharing; and (3) know-how in the advanced digital technologies that enable collaborative product engineering, fabrication, final assembly, and after-sales services. The capability to create a digital product replica and to simulate product fabrication, assembly, operations, and maintenance processes before launching any actual production activity is a revolutionary way of making superior products.

Supply Chain: The Makers of All Product Parts, Assemblies, and Systems Hundreds of suppliers are engaged in producing the parts, assemblies, and systems that make up complex products. About 2.3 million parts make a Boeing 787 aircraft. Boeing’s supply channel includes over 6000 tier-1 suppliers based in the 50 American states and more than 100 countries. Ford’s supply channel has about 1200 tier-1 suppliers. About 500 contractors and suppliers collaborate in making a large cruise ship, a giant floating hotel. A large number of suppliers contribute to the execution of construction projects. Bechtel Corporation states that suppliers from 120 countries support annual expenditures exceeding $16 billion. The Heathrow T5 supply chain included 150 first-tier suppliers in a contractual relationship with BAA, 500 second-tier, 5000 fourth-tier, and 15,000 fifth-tier suppliers. The successful making of Terminal 5 was the work of 50,000 people from 20,000 companies, and of courageous, ground-breaking management thinking in project planning and delivery management.

Successful business integrator organizations have the expertise to dynamically integrate the product planning, engineering, manufacturing, and assembly processes in collaboration with suppliers. Successful collaboration is enabled by (1) long-term partnerships based on clear profit and risk sharing and (2) a digital-age collaborative process in all project phases, from planning to final product assembly and later in after-sale services. This level of business process integration addresses the three main concerns in making complex industrial products: first, all parts fit perfectly when the product is assembled; second, the final product provides the planned performance and reliability in operations; third, the final product can perform with minimum maintenance cost.
3.3 Explaining the Construction-Manufacturing Gap in Business Performance

With the above understanding of what digital-age manufacturing means, it becomes easier to explain the decades of growing construction-manufacturing performance gap. It comes down to one significant difference: the construction business suffers from
Fig. 5 The construction business process continues to be largely disconnected
inferior supply chain management. Construction suffers from very poor collaboration among engineering and design disciplines and with the suppliers of equipment, systems, and components along the engineering, design, and construction processes, Fig. 5. The whole economy is based on digitally enabled collaboration with suppliers. Construction continues to be the least digitized industry; its level of digitization is slightly above hunting and agriculture [5].

Project-Centric Dominance Is the Main Obstacle to Rethinking the Construction Business Process For years, too many industry consultants pointed to a long “to-do list” for addressing the construction industry’s condition. A too-long “to-do list” does not work. In reality, the one obstacle that keeps the construction industry behind the digital economy is the dominance of the project-centric approach to business. Business process transformation is a long-term cross-project process; short-term project-centric thinking is the opposite of a long-term business transformation strategy.

However, if construction were to depart from entirely project-based approaches to more consistently employ a manufacturing-like system of mass production with much more standardization and manufacturing of modules and parts in factories off-site, the productivity boost could be an order of magnitude greater. (McKinsey & Company, Reinventing construction: A route to higher productivity, February 20, 2017)
A long-term cross-project approach to business transformation unleashes the power of business repeatability, the power of economies of scale. The “we are not manufacturing” argument is motivated by the misconception that business repeatability benefits manufacturing while no repeatability exists in construction. A very high level of repeatability exists in construction; an estimated 80% of construction components and work processes are repeatable. The Egan Report [1] was the first study to reveal the high repeatability of construction components and work processes.

We have repeatedly heard the claim that construction is different from manufacturing because every product is unique. We disagree. Not only are many buildings, such as houses, essentially repeat products that can be continually improved but, more importantly, the
process of construction is itself repeated in its essentials from project to project. Indeed, research suggests that up to 80% of inputs into buildings are repeated. Much repair and maintenance work also uses a repeat process. The parallel is not with building cars on the production line; it is with designing and planning the production of a new car model. (The Egan Report, London 1998)

Fig. 6 Long-term cross-project way to a digital-age performance in construction
Exploiting the repeatability of components and work processes is the primary condition for sustainable success in the business digitization process, Fig. 6.

“We’ve been able to take 80 years of airplane development knowledge and build that into CATIA V5 design tables that the engineers can use again and again.” (Jim Strawn, Engineering Specialist, Cessna Aircraft Company)

KBR, a very successful organization, made known the benefit of creating reusable digital definitions.

“Our reference design approach allows our clients to benefit from a range of digitally enabled EPF-ready designs. KBR has developed reference design configurations, leveraged previous E&C expertise, and standardized competitive designs in collaboration with licensors, fabricators, and suppliers.” (KBR, Inc. website)
4 Construction Business Digitization Went Wrong for Too Long

Digital-2D for Drawing Production, a Misleading Fast Success Construction was among the first industries to invest in developing and implementing computer-aided engineering solutions during the early 1980s. Embracing 2D for drawing production went quickly and easily; 3D for product and process modeling continues to be at a shallow level, Fig. 7. Embracing digital-2D for drawing production was a quick, easily quantifiable success. The implementation went easily, with very little training required and an affordable investment in hardware and software. The 2D implementation process is directly managed by the designers responsible for drawing production; it does not change the process of creating and using drawings.
Fig. 7 Construction business digitization, 2D versus 3D evolution
Pilot 3D Modeling Projects Did Not Work and Will Never Work Having trouble figuring out how to make 3D modeling part of the design and construction processes, construction organizations embraced the “pilot project” approach. A typical pilot project is a limited scope of 3D modeling to test its viability. A small group of CAD (computer-aided design) experts use traditional drawings to create the pilot 3D models. And this is the end of it; there are no end users, Fig. 8. No end users means no value. Pilot projects have zero impact on the actual design and construction efforts. At best, pilot 3D modeling may detect a few design errors and produce good promotional material. Pilot projects are isolated from facility engineering and construction processes, making them low-value investments.

Construction Business Digitization, the Right Way The development of digital definitions of the repeatable construction components and processes is the main road to sustainable construction business digitization. Exploiting repeatability is the way to generate short-term and long-term success in the business digitization process. Design limited to facility design for construction has to be replaced by facility lifecycle design, including (1) design for the manufacturability of components, (2) design for constructability, (3) design for operability, and (4) design for maintainability, Fig. 9.

The Supply Chain Can Contribute More Than 40% of the Business Digitization Success Suppliers of equipment, systems, and components have the means to contribute significantly to construction business digitization. Suppliers can provide component 3D definitions and digital specifications in support of construction projects, Fig. 10.
Fig. 8 3D-modeling pilot projects for no end users!
Fig. 9 Design for product lifecycle leads to digital-age business performance
Fig. 10 Supply chain collaboration makes almost half the business’s digitization success
What’s First, Business Process or Technology? The “Chicken or the Egg” Dilemma The digital-age revolution is a profound rethinking of the business process; the advent of advanced digital technologies made the rethinking possible. Does digital-age business performance result from technology or from business process transformation? Which one comes first? Forty years of successful business digitization proved that the driving strategy for success is the business rethinking strategy, Fig. 11.
Fig. 11 The two components of a successful business transformation process
5 Industries that Perfected the Digital-Age Production System

5.1 Concorde, a Pre-digital Great Success in Distributed Production

The making of Concorde, the first great success in making supersonic commercial flights possible, was a success of the distributed business model at a time when no digital 3D modeling was available, Fig. 12. Aircraft production started in 1966. Airframe construction was conducted by Aerospatiale (60%) and the British Aircraft Corporation (40%), with engine manufacture divided between Rolls-Royce (60%) and SNECMA (40%). An extensive network of sub-contractors, believed to number over 800 in the two countries, produced the airplane parts delivered to the main contractors. Final aircraft assembly took place in both France and the UK; it was considered politically unacceptable to have this done by only one of the two partners! Despite this costly duplication, the supersonic Concorde was a great technological success. The first commercial flight took place in 1976; the last flight was in 2003, a total of 27 years of service.
5.2 Automotive, the First Sector to Embrace Digital-Age Distributed Production

The automotive industry has practiced distributed production for a very long time. This long experience offered an excellent start for the fast deployment of advanced digital technologies, making the automotive sector the first to embrace digital-age product lifecycle management. The “Automotive Supply Chain Management A2Z”
Fig. 12 Concorde, production manufacturing breakdown by suppliers of assemblies. (https://www.heritageconcorde.com/concorde-production-and-construction)
study dated 2014, authored by Rahul Guhathakurta, is an excellent exemplification of a typical network of suppliers contributing to making a car model (http://about.me/rahul.guhathakurta). This study includes a convincing illustration of the complexity of the supply channels in making modern cars, Fig. 13. The study also includes an excellent way of explaining the superiority of distributed integration in making complex products:

A supply chain is a set of relationships among suppliers, manufacturers, distributors, and retailers that facilitates the transformation of raw materials into final products. Although the supply chain comprises several business components, the chain itself is viewed as a single entity. In the construction and identification of this integrated logistics model, we identified the primary processes as logistics processes concerning all the participants of the integrated value chain. It is reasonable to consider transfers and transactions of products and information as primary processes when logistics functions are focused. It shares operational and financial risks by having these suppliers build with their capital and operate, with their employees, facilities that are usually part of the OEM’s span of control. When demand is strong, they share the profits with Automotive Manufacturers because these suppliers receive a payment per unit that covers variable material and labor costs and contributes to overhead and profit. When demand is high, this relationship is profitable for all. When the economy is slumping, they share the loss with Automotive Manufacturers because Automotive Manufacturers buy what they need to meet consumer demand. Automotive Manufacturers have less invested capital and therefore lower fixed costs.
Fig. 13 Distributed manufacturing, 2011 Chevrolet Volt. (Source: “Automotive Supply Chain Management A2Z” published by Rahul Guhathakurta)
5.3 Fifteen Million Parts Make a Gigantic Floating Hotel

Shipbuilding, in both its commercial and military sectors, produces enormous floating facilities. Meyer Werft is one of the world’s largest and most modern shipyards. Meyer Werft designs and builds cruise ships, river cruise ships, and ferries. According to Meyer Werft’s website, a cruise ship is a floating metropolis with about 15 million parts. Each of these 15 million parts must come in precise dimensions and configuration and be delivered on time. Meyer Werft uses digital engineering to break the ship down into Lego-like modules. Advanced digital 3D modeling, CATIA, supports the design, fabrication, and assembly. Each module is pre-assembled, including all the necessary fittings, such as cable shafts, air conditioning ducts, and even balconies. About 80 blocks, of about 1200 tons each, make one ship. These blocks are welded together at the dry docks at the end of the process, Fig. 14.

Modern shipbuilding uses modular construction processes. Our engineers use computer programs to break the ship into small, Lego-like pieces. Each module is preassembled – including all the necessary fittings, such as cable shafts, air conditioning ducts, and even balconies. The individual building blocks – up to 80 per ship – are only joined, welded, and wired at the dry docks right at the end. Different specialists can work simultaneously on one ship to drastically reduce the construction time. (https://www.meyerwerft.de/en/technologies/optimized_processes/ind)
Fig. 14 Meyer Werft, modular concept, schematics
5.4 Aerospace, Digital-Age Distributed Production at Its Best

The Boeing 787 is an excellent example of success in fully embracing the digital-age distributed business model. The following diagram, Fig. 15, is a simplified map of the distributed integration in making the 787 commercial jets.
6 Digital-Age Construction Performance: It Can Be Done

6.1 Successes in Construction Manufacturing and Supply Chain Management

The following success stories show significant progress in advanced manufacturing and supply chain management in the delivery of construction projects.

Bechtel’s Curtis Island LNG Project Is a Great Success Story The construction of the gigantic Curtis Island LNG facility in Australia is a great success story in advanced modularization and very advanced manufacturing in the delivery of construction projects. More than 260 modules, many of which weighed over 5000 tonnes, were manufactured in the Philippines, Indonesia, and Thailand, Fig. 16.

With a growing demand for energy and industrial construction across the globe, developers increasingly face challenges that can put their project delivery at risk. Increased uncertainty related to safety, quality, cost, and schedule can seriously affect economics and financing options for projects. Modular construction has emerged as an effective solution to mitigate risk, particularly on projects in remote and/or harsh-climate locations, facing limited availability of skilled workforce or extensive labor costs, in need of extensive quality testing, using high-density piping design, or requiring schedule certainty. (https://www.bechtel.com/services/modularization/)
End-to-end supply chain management significantly contributes to the Bechtel Corporation’s success.
Fig. 15 Distributed integration in the making of the Boeing 787
Fig. 16 Advanced modularization for the Curtis Island LNG project

With annual expenditures exceeding $16 billion, we support large, complex projects in remote locations of the world using suppliers from 120 countries. We have the right processes, automation tools, market data, volume, and skilled professionals to meet our commitment to our customers: the responsible purchase and safe delivery of quality goods and services from reliable and diverse suppliers and subcontractors, where they are needed, on time, and at the lowest total cost of ownership. (https://www.bechtel.com/services/procurement/)
Fig. 17 Qingdao McDermott fabrication site in China. (https://www.mcdermott.com/What-We-Do/Fabrication-Services/Fabrication-Facilities/Qingdao-China)
McDermott Advanced Manufacturing Sites All Over the World McDermott is a premier, fully integrated provider of engineering and construction solutions to the energy industry. McDermott’s module-specific operations are based in Lake Charles, Louisiana, U.S.; Fort Saskatchewan, Alberta, Canada; both Sattahip and Kasempol, Thailand; and McDermott Wuchuan, a 50–50 joint venture between McDermott and China Shipbuilding. Figure 17 is a view of the Qingdao McDermott fabrication site in China.

McDermott’s expertise in supply chain management spans from sourcing, purchasing, and transportation logistics to inventory control, inspection, and quality assurance. (https://www.mcdermott.com/What-We-Do/Procurement-Supply-Chain-Management)

Effective supply chain management is no longer a “manufacturing only” concept; it is becoming a must for business success in the construction industry.

Mammoet, Making Possible the Transportation of Huge Modules and Assemblies Mammoet’s (www.mammoet.com) activity indicates the move to advanced manufacturing in the delivery of construction projects. The Mammoet company, headquartered in the Netherlands, is the world leader in heavy lifting and transportation of large modules for the construction, oil & gas, petrochemical, offshore, power generation, and civil engineering sectors. The company works across the globe on all continents, with about 90 offices and branches. Mammoet’s factory-to-foundation services optimize the transportation of oversized and heavy components by managing the complete logistics chain from the factory to the destination.

In 2003, Mammoet helped Unilever lift its headquarters office and assemble it on top of the existing factory in a single weekend with zero impact on the production process in the factory. The whole four-level office building, named The Bridge, was manufactured and, when finished, placed on top of the factory. The traditional construction process was estimated to take months, with a costly impact on production.
Fig. 18 Ottawa’s Lees Avenue bridge replacement. (Source: https://www.mammoet.com/news/ mammoet-completes-heaviest-lift-over-a-freeway-in-north-america-in-record-time/)
The case of Ottawa’s Lees Avenue bridge shows the “manufacturing way” of replacing old bridges in highly congested cities without paralyzing city traffic for a long time. Weighing 2254 tons and measuring 87.5 m long, the flyover is the city’s main traffic artery, expanding to accommodate its new Light Rail System. The biggest challenge was the bridge’s curved geometry and the graded roads on which it was transported, Fig. 18. The mine owner, Vale, was looking for an efficient way to build a nickel processing plant in a remote location in Newfoundland, where not enough people were available to build the plant within the desired time frame. Instead of relocating an entire workforce, Vale decided to build the plant elsewhere and deliver most of it with barges. Mammoet committed to transporting and installing the processing plant in modules. In a highly complex logistic operation, Mammoet delivered 600 parts in a predetermined order from numerous locations to the secluded site. Chevron has approved Mammoet as their Gorgon partner for the engineered heavy lifting and transport activities. Among other qualifications, deep experience with Australian quarantine practices made Mammoet the right choice for the Gorgon team. Mammoet is responsible for lifting, transporting, and installing all modules, the biggest of which weigh as much as 6200 tons. Building a vast LNG plant – encompassing natural gas production, liquefaction, and shipping – in one of the most inhospitable places on the planet presented the Yamal LNG consortium with colossal challenges. The site is shrouded in darkness for much of the year, and temperatures can fall to −57 °C. A modular prefabricated approach enabled large-scale, year-round construction in such an environment. Around 142 modules weighing up to 7000 tons were assembled off-site ready for transport and installation. Mammoet’s website
Accelerated bridge construction (ABC) and rapid bridge replacement (RBR) are techniques that can shorten infrastructure project schedules and reduce disruption through the off-site construction of new road or rail bridge elements. The RBR process enabled the new bridge to be built before the old one was demolished.
FirstEnergy selected Bechtel to replace the two "once-through" steam generators and the reactor vessel closure head at the Davis-Besse Nuclear Power Station near Oak Harbor, southeast of Toledo, Ohio. Each of the new steam generators for Davis-Besse weighs 550 tons. Mammoet successfully transported the large and heavy steam generators, Fig. 19.

DLT Engineering: Technologies and Services That Enable Advanced Modular Construction "We develop, manufacture, and operate the specialist technology for modular construction of bridges, dams, refineries, power stations, offshore structures, buildings, and manufacturing plants. We also offer construction engineering and site services to help our clients get the best value from the equipment we supply." https://www.dlteng.com
DLT Engineering, the sister company of Cleveland Bridge, specializes in developing, manufacturing, and operating advanced technology for modular construction of bridges, dams, refineries, power stations, offshore structures, buildings, and manufacturing plants. DLT Engineering brings advanced manufacturing to construction sites. DLT Engineering’s brochure summarizes many incredible project successes. https://www.dlteng.com/Download_files/DLT%20Brochure_7.2.pdf
Fig. 19 Transporting the new steam generators for the Davis-Besse nuclear power station. (https://www.bechtel.com/projects/davis-besse-nuclear-power-station/)
Fig. 20 DLT technology for the Riyadh Metro Project
The following is a selection of the DLT projects that indicate significant progress in construction-manufacturing convergence.

Erection Gantries, Transporters & Skidding System – Riyadh Metro Project, Saudi Arabia DLT was sub-contractor to the FAST Consortium for the design and supply of two DL-SE500/35 bridge span erection gantries, two DL-TLC500 transporters, and two DL-SC260/21.2/25 straddle carriers. The project scope included the transport and final erection of 500-tonne, 35 m long precast concrete bridge deck beams on the 16.7 km long elevated section of Line 4 of the Riyadh Metro project, Fig. 20. The equipment was designed to deliver and erect the bridge beams at an average rate of one per day, to erect bridge beams on a plan radius of 1200 m, and to pass through in-situ cast sections with a plan radius of 100 m.

Launching Gantry – Temburong, Brunei DLT Engineering designed and supplied a 2 × 870-ton bridge deck erection gantry for the Temburong Bridge Project in Brunei, believed to be the first bridge erection gantry of this type. The gantry was used to erect two 50 m long, 870-ton precast concrete bridge deck girders. The gantry erected the first pair of bridge girders in January 2017 and operated on the project for 2 years. It was designed to erect viaduct sections up to four decks wide at an erection rate of one bridge span every two shifts.

Honam High-Speed Railway Project, Korea DLT was a sub-contractor to Samsung C&T for designing and supplying a DL-SE1000/35 bridge span erection gantry (also known as a launching gantry or launching girder) for erecting the entire length of section 4-1 of the Honam High-Speed Railway project. The DL-SE1000/35 can erect precast spans weighing up to 1000 tons and up to 35 m long at a rate of up to two spans per day and is self-launching into the next span, Fig. 21.

Detailed Erection Engineering – Heathrow Airport, New Terminal 5, London DLT was responsible for the detailed erection engineering for the 18,500-tonne steel roof structure of the new main terminal building and a 1100-tonne air traffic control tower. Both were fabricated off-site and erected on-site using strand jack lifting systems. The roof box girders, purlins, and cladding to the terminal building roof were erected in six 2000-tonne lifts. The control tower was pre-assembled off-site into seven modules and assembled on-site using a unique vertical jacking technique.
Fig. 21 DLT bridge deck erection gantries. (https://www.dlteng.com/en/Products/bridge_launching_gantry.html)
Fig. 22 DLT's rising factory. (https://www.dlteng.com/en/projects/Jump_Factory.html)
Rising Factory – East Village Plots 8 & 9, Stratford, London In 2016, DLT Engineering was awarded the contract by Mace Limited of the United Kingdom to supply two pinned climbing jack systems and engineering support for the construction of twin 30-story residential towers at East Village, Stratford, London, Fig. 22. The project involved the construction of two high-rise residential towers using precast concrete. The "rising factory" concept was used to create a waterproof factory environment for the construction of each floor. Construction of each tower generally
progressed at one floor per week. Four 230 mm-diameter hydraulically operated pins supplied by DLT support the factory weight during operations. After finishing each floor, the rising factory is lifted about 3.3 m using four DL-CP250 pinned climbing jacks. The lifting of the rising factory, weighing about 900 tons, was completed in a little over two hours.
7 Convincing Successes in Construction Business Digitization

The following selection of digital success stories, reported by large and small organizations, proves that construction business digitization is not an impossible mission.
7.1 The ITER Digital Story: Digital-3D, the "Common" Engineering Language for 35 Participating Countries

The ITER story started during the mid-1990s in San Diego, California; the facility is now under construction in France. The International Thermonuclear Experimental Reactor (ITER) project is a worldwide scientific and engineering effort to make nuclear fusion possible on Earth. Nuclear fusion is the process that generates the energy of the Sun and the stars. Three-dimensional modeling became the "common engineering language" for the 35 countries cooperating in this extraordinary scientific and engineering effort, Fig. 23.

Decades of experiments proved that the most effective magnetic configuration is toroidal, shaped like a doughnut, in which the magnetic field is curved around to form a closed loop. The most advanced solution of this type is the TOKAMAK reactor. TOKAMAK comes from a Russian acronym for "toroidal chamber with magnetic coils." First developed by Soviet research in the late 1960s, the TOKAMAK is now accepted worldwide as the most promising configuration for a magnetic fusion device. In 1985, President Ronald Reagan and Soviet leader Mikhail Gorbachev signed an agreement to collaborate in developing nuclear fusion technology, and the ITER project was launched.

Dr. Tutos, one of the authors of this document, had the opportunity to coordinate the development of the first digital 3D simulation of the ITER TOKAMAK assembly process. The first 3D-based simulation of the TOKAMAK construction process was developed in 1997 by the ITER Central Team, located in San Diego, California, at that time, assisted by Dassault Systemes of America. The ITER TOKAMAK is the nuclear reactor of ITER, the most complex machine ever engineered. The TOKAMAK will weigh 23,000 tons, as heavy as three Eiffel Towers. Approximately one million components will make up this complex machine, Fig. 24.
Fig. 23 ITER, 3D models of the Tokamak building. (Source: https://www.iter.org/album/Media/7%20-%20Technical)
Fig. 24 ITER, 3D models of the Tokamak nuclear fusion reactor. (Source: https://www.iter.org/album/Media/7%20-%20Technical)
TOKAMAK temperatures will reach 150 million degrees C, ten times the temperature at the core of our Sun. The temperature at our Sun's surface is 6000 degrees C, and at its core, 15 million degrees C. The TOKAMAK temperature must be higher than the Sun's to compensate for its inability to reach the level of pressure inside the Sun. Figure 24 shows two simplified 3D views of the TOKAMAK. Dassault Systemes' CATIA is in use for 3D modeling at ITER.
A consortium of 35 countries (China, the 28 member states of the European Union plus Switzerland, India, Japan, Korea, Russia, and the USA) collaborates on the ITER project. ITER will not capture the energy it produces as electricity, but, as the first large-scale fusion experiment in history designed to create a net energy gain, it will prepare the way for the future machines that will make this possible.
7.2 “Digital Gehry” Is an Extraordinary Success Based on Practical Business Strategy “And by chance, it was precisely at the time when Gehry architecture firm first verified that the irregular forma of the building sculpture could be constructed with the methods used for aircraft or automobiles. All who participated in the design and construction of the Dancing Building were balancing on the very boundaries of the possibilities of contemporary architecture” Extras from the introduction to the “Dancing Building” book by Frank Gehry and Vlado Milunic
What is now named "digital Gehry" is the story of how a great architect who did not need computers to create architectural concepts figured out that advanced digital technologies would make the error-free construction of his designs possible. Frank Gehry was the first architect to embrace advanced digital technologies to make construction easier and to communicate design intent to engineers, designers, fabricators of components, and construction organizations. Frank Gehry's extraordinary success in creating architectural magic worldwide is the result of his architectural imagination and creativity and of his early understanding that digital 3D modeling makes possible higher levels of artistic freedom in architecture.

Implementing advanced digital technologies is a significant investment in equipment, software, and, most of all, personnel training, and Gehry Partners, LLP, figured out how to make this a cost-effective investment. The first digital project was the Fish Sculpture project in Barcelona, Spain. The Fish Sculpture is 56 m long and 35 m high, Fig. 25. The project ran into trouble when errors in the fabrication of the structural steel ribs for the sculpture put timely delivery for the opening of the 1992 Olympic Games at very high risk. Looking for a last-minute solution, Frank Gehry learned about Boeing's success in using Dassault Systemes' CATIA modeling for airplane design. He decided to hire CATIA experts to save his project. Great thinking: the project was finished on time, with error-free fabrication of all components.

This first digital success motivated the larger-scale implementation of CATIA 3D modeling solutions for the design and construction of the Guggenheim Museum in Bilbao, Spain, the Disney Concert Hall in Los Angeles, the Der Neue Zollhof office buildings in Dusseldorf, Germany, the Experience Music Project in Seattle, Washington State, the MIT Computer Science Building in Cambridge, Massachusetts, the Dancing Building in Prague, Czech Republic, and many others.
Fig. 25 The Fish Sculpture, Barcelona, Spain. (Source: barcelonatyriesme.com)
Fig. 26 Disney Concert Hall, Los Angeles
Disney Concert Hall, Los Angeles, California, Fig. 26, had a difficult start but ended as a great success. The initial design was a split process that failed: Gehry Partners provided the conceptual design in three dimensions based on CATIA, and a different architect took over the detailed design based on 2D drawings. Almost everything went wrong; the detailed design, based on traditional drawings, failed to respect the architectural intent. The estimated structural steel fabrication cost reached shocking levels, forcing the project to stop in 1994.
Two years later, in 1996, the Disney family and the city of Los Angeles agreed to restart the project and to allow Gehry Partners to provide both the conceptual and the detailed design. This time, the 3D-based design resulted in a great success, the world-renowned Disney Concert Hall.

A highly significant project is the construction of Gehry's Zollhof Towers in Dusseldorf, Germany, along the Rhine River, Fig. 27. Zollhof Towers is a set of twisted buildings built at almost the cost of traditional boxy buildings. Numerically controlled machines cut large concrete forms in Styrofoam, making complex concrete structures easier to build. The slightly higher construction cost paid off: the investment was intended to create medium-priced office space, but it became high-priced office space. Great architecture continues to be appreciated and continues to be in high demand.

Gehry Partners and Dassault Systemes jointly invested in developing 3D-based solutions for architects, and Gehry Technologies, LLC, a fully independent organization, was created. Frank Gehry allocated some of his best people to put together a highly skilled team with expertise in architectural design and CATIA. Frank Gehry's success motivated the architectural world to embrace digital engineering and construction concepts, practices, and manufacturing production models. It is no longer shocking to read book titles such as "Manufacturing Architecture" by Dana K. Gulling and "Designing and Manufacturing Architecture in the Digital Age" by Branko Kolarevic, Dean of the College of Architecture and Design, New Jersey Institute of Technology.
Fig. 27 Gehry’s Zollhof Towers, Dusseldorf, Germany
7.3 3D Is Creative Freedom: Sydney Opera House Versus Disney Concert Hall

Thirty-one years apart, the Sydney Opera House and the Disney Concert Hall are two great constructions dedicated to music. Architects and engineers can imagine a great building with impressive forms and excellent functional arrangements; the risk of building it imposes severe limitations. Centuries ago, it could take decades to create fantastic architecture; the digital economy does not accept this luxury. The story of the Sydney Opera House versus the Disney Concert Hall, Fig. 28, tells of the extraordinary power of advanced digital technologies in reducing project risk in creating great bridges and roads, airports, office buildings, and all kinds of service and industrial facilities.

The Sydney Opera House, designed by Jørn Utzon, a Danish architect, is one of the most recognizable buildings of the twentieth century and Australia's number one tourist destination. This building is also a painful example of the difficulty of constructing great architecture. Construction took 10 years longer than planned and was 1357% over budget: $102 million versus the initially budgeted $7 million. Construction started in 1958; the opening took place in 1973. As Pritzker Prize judge Frank Gehry said when awarding architecture's highest award to the Opera House's architect in 2003: "Jørn Utzon made a building well ahead of its time, far ahead of available technology… a building that changed the image of an entire country."
7.4 CSA, Decades of Digital Successes for 129 Nuclear Plants/Units

Beginning in the mid-1970s, providers of CAD solutions for the construction industry focused on creating 2-dimensional functionality to produce drawings, with only limited 3-dimensional functionality.
Fig. 28 Sydney Opera House
Construction Systems Associates, Inc. (CSA) is the only CAD solutions provider that focused on the need to digitize existing drawings as the basis for creating digital 3D models. CSA developed a very effective solution for digitizing existing drawings to address this need. For over 40 years, CSA has had a tremendous competitive advantage in helping customers deal with high-complexity construction and plant modification projects, mainly in the nuclear power sector. CSA has worked on about 129 nuclear plants/units around the world. It started by writing computer software to detect design errors and interferences among components for a nuclear power plant.

As a young engineer at Daniel International beginning in 1971, Amadeus Burger developed an effective way to rapidly digitize construction drawings to address this need. CSA, established in 1976, launched its Product Definition Language (PDL), a specifications-driven method of digitizing drawings to create a product/plant database. The design disciplines supported were piping, HVAC, electrical trays, supports, steel, concrete, equipment, embedments, rebar, etc. CSA recognized that design drawings and documentation did not correctly support the construction and licensing processes and offered a speedy way of creating intelligent 3D models and graphical and non-graphical output on demand. No other CAD solution provider showed a similar ease in satisfying plant owners faced with the reality of using traditional drawings and documents. Figure 29 is a simplified functional diagram of the CSA solution.

CSA added the integration of laser scanning technology with digital 3D modeling to properly support plant modification projects. Laser scanning enables the capture, in digital format, of the as-built conditions. 3D object-oriented scanning of as-built conditions, combined with 3D-based engineering of facility modifications, is the solution of choice for highly congested production and service spaces. Figure 30 shows two moments of an old heat exchanger removal process; clashes with the existing structures are shown in red. This combination of advanced digital technologies is the solution of choice for equipment replacement in nuclear power plants and petrochemical facilities.

The River Bend Nuclear Station project produced the first large-scale digital 3D model, with more than 350,000 components.
Fig. 29 CSA solution, functional diagram
Fig. 30 3D scanning integrated with 3D modeling to support retooling projects
Fig. 31 High-value applications of digital 3D modeling, nuclear power plants
This level of digital modeling was exploited to support very high-value applications in the construction process, the licensing process, and later in plant operations and maintenance, Fig. 31. Composite 3D models provide a global understanding of the project progress and visualize the work done by subcontractors and engineering disciplines, Fig. 32. The most significant value of 3D modeling is the ability to generate on-demand graphical and non-graphical documentation in support of daily construction, inspections, and plant licensing efforts, Fig. 33. The (a) view illustrates the association of graphics with component specifications. The (b) view explains instructions for an in-service inspection task. The (c) view shows a design change request due to an error in pipe routing.
Fig. 32 River Bend nuclear station, reactor building. Left-side image: view V24, color coding by engineering discipline. Right-side image: view V47 East, color coding by subcontractor
Fig. 33 Detailed graphical and non-graphical views of the River Bend project
The (d) view illustrates checking the space arrangement in a mechanical room. The (e) view is an image from a simulation of a retooling project. 3D process simulation enables the determination of what should be removed to make space for the removal of the old heat exchanger and the installation of the new one.

It is puzzling that, after decades of 3D successes, drawings continue to get strong support in the construction industry.
Fig. 34 A segment from a 6′ × 9′ construction drawing
Figure 34 is a segment of a 6′ × 9′ construction drawing; this image motivates asking how in the world the construction business continues to be based on drawings as the means of communicating the design intent. The high complexity of confusing drawings is behind the high volume of design errors detected during the construction process.

The following case illustrates the strong attachment to drawings in the construction industry. A large institution, well known for its success, invested in a 3D-modeling system to support its business. It succeeded in creating a detailed 3D model of a petrochemical facility. All went well until the engineering team decided to test the 3D-modeling system in making drawings. The chosen test was to create a projection of the whole 3D model of the entire facility. It took a lot of processing time to produce a drawing consisting of millions of overlapping lines, curves, symbols, and text, a drawing no one can read. They missed the point that most of the value was in creating 3D views to support design reviews and, later, the construction process.
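As a hedged illustration of the interference checking mentioned above, detecting clashes among piping, electrical trays, steel, and equipment sharing one 3D model, the sketch below reduces components to axis-aligned bounding boxes. All class, function, and tag names are illustrative assumptions, not CSA's actual PDL implementation; production systems check exact geometry and use spatial indexing rather than an all-pairs loop.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Component:
    """A plant component reduced to an axis-aligned bounding box (metres)."""
    tag: str
    discipline: str
    min_xyz: tuple
    max_xyz: tuple

def boxes_clash(a: Component, b: Component, clearance: float = 0.0) -> bool:
    """True if the two boxes overlap or come closer than the required clearance."""
    return all(
        a.min_xyz[i] - clearance < b.max_xyz[i] and b.min_xyz[i] - clearance < a.max_xyz[i]
        for i in range(3)
    )

def find_clashes(components, clearance=0.0):
    """Return every clashing pair of tags (all-pairs check, for illustration only)."""
    return [
        (a.tag, b.tag)
        for a, b in combinations(components, 2)
        if boxes_clash(a, b, clearance)
    ]

if __name__ == "__main__":
    model = [
        Component("PIPE-104", "piping", (0.0, 0.0, 2.0), (6.0, 0.3, 2.3)),
        Component("TRAY-E07", "electrical", (3.0, -0.1, 2.1), (3.4, 4.0, 2.4)),
        Component("BEAM-S12", "steel", (0.0, 1.0, 2.5), (6.0, 1.2, 2.8)),
    ]
    # Expect one interference: the pipe run crosses the cable tray.
    print(find_clashes(model, clearance=0.05))
```

The principle, flagging overlapping component envelopes before anything is fabricated or installed, is the same one that made the early interference-checking software valuable.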
The Three Mile Island 2 Clean-up Project CSA was hired to develop 3D modeling for the Three Mile Island 2 nuclear reactor building clean-up, a $1-billion project. In addition to 3D modeling in support of engineering the clean-up program, CSA developed 3-dimensional radiation mapping (3D RADMAPS) to monitor radiation levels.

According to the United States Nuclear Regulatory Commission (NRC), the Three Mile Island Unit 2 reactor near Middletown, Pa., partially melted down on March 28, 1979. This was the most serious accident in commercial nuclear power plant operating history, although its minor radioactive releases had no detectable health effects on plant workers or the public. Its aftermath brought about sweeping changes involving emergency response
planning, reactor operator training, human factors engineering, radiation protection, and many other areas of nuclear power plant operations.

A three-dimensional computer model and radiation mapping system of the damaged Three Mile Island 2 nuclear reactor building will give clean-up officials a complete statistical database of the plant for the first time. The model is the "missing link" needed to move from options to operations for the four-year, $1-billion clean-up program, says John C. Devine, Jr., technical planning director for plant owner General Public Utilities Nuclear Corp. Without the computer model, clean-up would be complicated by the need to correlate the large volume of data and design drawings, says Richard D. Schauss, system manager in GPU's technical planning division. CSA will combine information from as-built drawings with non-graphic data, such as equipment performance specifications or vendor identification numbers, into one 3-D database. Each reactor component's position will be identified by coordinates, says CSA vice president Thomas Duncan. The model will first be used to generate detailed maps for workers to use during radiation surveying prior to clean-up, says Duncan. "Computer aids TMI clean-up," ENR, April 1985
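The combination the excerpt describes, component coordinates coupled with non-graphic attributes in a single database and used to generate radiation maps for survey planning, can be sketched roughly as follows. The record fields, readings, and the inverse-distance interpolation rule are illustrative assumptions, not the actual 3D RADMAPS design.

```python
import math

# Each record couples a component's 3D position with non-graphic attributes,
# echoing the idea of one database built from as-built drawings and vendor data.
components = {
    "HX-201": {"xyz": (12.0, 4.5, 3.0), "vendor_id": "V-5521", "spec": "heat exchanger"},
    "PMP-115": {"xyz": (14.0, 6.0, 1.2), "vendor_id": "V-1040", "spec": "pump"},
}

# Radiation survey readings: (position, dose rate in mSv/h) -- invented values.
survey = [
    ((11.0, 4.0, 2.5), 8.0),
    ((15.0, 6.5, 1.0), 2.5),
    ((13.0, 5.0, 2.0), 5.0),
]

def estimated_dose(xyz, survey_points):
    """Inverse-distance-weighted estimate of the dose rate at a component location."""
    num = den = 0.0
    for point, dose in survey_points:
        d = math.dist(xyz, point) or 1e-6   # avoid division by zero at a survey point
        w = 1.0 / (d * d)
        num += w * dose
        den += w
    return num / den

# A simple "radiation map" query: dose estimate plus non-graphic attributes per component.
for tag, rec in components.items():
    print(f"{tag:8s} {rec['vendor_id']:7s} ~{estimated_dose(rec['xyz'], survey):.1f} mSv/h")
```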
Advanced 3D Modeling for the Sizewell B Nuclear Plant in the UK This is one of the most successful large-scale implementations of advanced 3D modeling to support highly complex nuclear power plant design and construction efforts. UK Nuclear Electric, in charge of plant design, decided to deploy advanced 3D modeling for a critical component of the Radwaste building project to address the complexity of the fast construction process and the demanding licensing conditions. The 3D model, developed in 3 months, supported the production of thousands of piping isometric drawings. This project benefitted from plant 3D modeling of cable routing, Fig. 35, a first in dealing with this level of complexity. The list of applications included detailed modeling of the control room, including all instrumentation boards. Figure 36 shows a typical detailed view of a piping system that integrates 3D with the associated component specifications.

The success of this initial pilot motivated Nuclear Electric to deploy 3D modeling to support a much larger scope for the completion of the Sizewell B project. The result was one of the most extensive 3D-modeling efforts in the construction industry, with about one million construction components fully defined geometrically, together with all the associated design and construction specifications, Fig. 37. The modeling included welds, cables, instruments, and penetrations. Thousands of design errors were detected early in the design process, an enormous level of savings in construction.
8 Calls to Action

Construction-manufacturing convergence used to be a forbidden concept. Faced with the troubling construction-manufacturing gap in performance, "we are not manufacturing" was an easy way to feel better. In time, calls to learn from manufacturing grew stronger and stronger, and the industry accelerated its progress in off-site prefabrication, in manufacturing construction components and assemblies.
Fig. 35 Sizewell B, electrical trays
Fig. 36 Piping system detail with the associated design specification
Fig. 37 Sizewell B Nuclear Plant in the UK
Ironically, despite the fast-growing manufacturing component of the construction industry, the "we are not manufacturing" attitude was kept alive. It did not help the industry.

The Egan Report (London, U.K., 1998) and the Civil Engineering Research Foundation (CERF) in the USA were the first voices calling for construction-manufacturing convergence. The Egan Report is the work of an interdisciplinary task force funded by the U.K. Government to address the troubling construction business performance and safety records. The "learning from manufacturing" initiative (1997–2004) was a team effort by CERF with support from Dassault Systemes and Gehry Partners, LLP. CERF membership included more than one hundred of the best American construction organizations. Each of the CERF conferences attracted hundreds of construction executives from all over the world. These days, McKinsey & Company, a worldwide leader in management consulting, is a compelling voice calling for "reinventing construction," for construction business digitization, and for embracing a manufacturing-style business model [4–9].
8.1 The Egan Report Was the First Convincing Call to Rethink Construction

As CEO of Jaguar for 10 years and of the British Airports Authority (BAA) for 10 years, Sir John Egan succeeded in addressing two very complex business challenges: one in manufacturing, by saving Jaguar from bankruptcy [3], and one in construction, the $5 billion Heathrow Terminal 5 extension. While CEO of BAA, John Egan chaired the Construction Task Force charged by the UK Deputy Prime Minister with studying ways of improving the construction industry. Alarmed by overspending, delays in delivering construction projects, and poor safety records, the UK Government demanded recommendations for ending this troubling condition. The task force included top-level executives from the construction, steel, manufacturing, retail, and banking sectors. The task force produced the "Rethinking Construction" report, London, 1998, now known as "The Egan Report" [1].

The Egan Report is the first to promote construction-manufacturing convergence and significant improvements in supply chain management.

We have repeatedly heard the claim that construction differs from manufacturing because every product is unique. We do not agree. Not only are many buildings, such as houses, essentially repeat products that can be continually improved but, more importantly, the process of construction is itself repeated in its essentials from project to project. Indeed, research suggests that up to 80% of inputs into buildings are repeated. Much repair and maintenance work also uses a repeat process. The parallel is not with building cars on the production line; it is with designing and planning the production of a new car. The Egan Report
The 37-page Egan Report outlines straightforward ways to improve the construction business process. Sir John Egan had the opportunity to apply many of these recommendations in planning the Heathrow Terminal Five project. The famous T5 Agreement was a breakthrough in managing project risk and profit. Based on the history of the airport extension projects, BAA faced a “one billion pounds overspent and a year late” risk in the T5 construction. T5 is a great success story in redefining the role of the client in managing a large construction project and in defining a new contractual system. “One billion pounds overspent, and a year late were predictions if we followed the industry norm. A tightly regulated BAA could afford neither. Coming from the car industry, I had been astonished by the waste and poor practice that I found in the construction industry in the 1990s, an industry of time and cost overruns that was killing over 130 people a year. As I chaired the Rethinking Construction government-sponsored think tank, it became clear to me that the role of the client had to be fundamentally different. We needed to think differently about risk management and about a long-term relationship with profitable suppliers, who worked with us to change the approach to computer-based design, design for manufacturing, supply chain management, and safe and efficient processes, thus getting the best out of the well-trained workforce.” Sir John Egan foreword to Sharon Doherty’s “Heathrow’s T5, History in the Making”.
After facing a large-scale construction disaster, the collapse of the NATM station tunnels on the Heathrow Express Rail Link Project at Heathrow Airport in London in October 1994, BAA was planning the $5-billion Heathrow T5 project, at the time the largest European construction project. The Health & Safety Executive (HSE) described the accident as "the worst civil engineering disaster in the UK in the last quarter century." Repair work following the collapse cost £150 million, three times the cost of the original work. The accident caused widespread disruption to flights at the airport and months of delay to work on the London Underground until engineers were granted permission to continue using the same tunneling method. The HSE called for a culture change in construction in which health and safety were paramount.

Heathrow T5 was a great success in testing the validity of the Egan Report. Heathrow T5 was a combination of 147 projects of $2.5 million to $350 million each. The Heathrow T5 supply chain included 150 first-tier suppliers in a contractual relationship with BAA, 500 second-tier, 5000 fourth-tier, and 15,000 fifth-tier suppliers. The successful making of Terminal 5 is the work of 50,000 people from 20,000 companies and of courageous, ground-breaking management thinking in project planning and delivery management [2]. Each one of the 50,000 T5 workers was told that T5 was history in the making and that one day they would be proud to say: "I built Terminal 5." This proved true: in 2019, Heathrow T5 won the World's Best Airport Terminal at the Skytrax World Airport Awards. Passengers voted Heathrow Terminal 5 the best in the world for the sixth time in the terminal's 11-year history.
8.2 CERF: Ten Years of Promoting "Learning from Manufacturing"

The Civil Engineering Research Foundation (CERF) was a strong promoter of "learning from manufacturing" for more than 10 years, aiming to dramatically improve performance in delivering construction projects. CERF membership included over 100 highly successful American construction organizations, and CERF conferences and technology tradeshows attracted thousands of participants from around the world. The "learning from manufacturing" initiative was conducted in collaboration with Dassault Systemes. The 2000 Symposium and the associated trade show attracted a large international audience: over 800 participants worldwide. The symposium's agenda included several Dassault Systemes and Gehry Partners presentations and demonstrations of projects, Fig. 38.

Dr. Tutos, as Dassault Systemes' vice president and a member of CERF's Corporate Advisory Board (CAB), oversaw CERF's "learning from manufacturing" initiative for 10 years. CERF's Corporate Advisory Board was launched with direct support from some of the most successful construction organizations, such as Bechtel Corporation, Arup USA Group, and CH2M Hill Corporation.
Fig. 38 Symposium 2000, CERF with support of Dassault Systemes and Gehry Partners
The white paper defining this initiative was the subject of the CERF/CAB Executive Conference, Spring 2005, in Virginia, Fig. 39. This conference attracted over 400 participants, an international attendance. For years, Dassault Systemes promoted "learning from manufacturing" all over the world through presentations and demonstrations of convincing projects.
Fig. 39 CERF “Construction industry transformation initiative”
INCITE 2004, "World IT for Design and Construction," in Langkawi, Malaysia, attracted a very large international participation. Dassault Systemes presented "AEC: On the Road from CAD to PLM; A Road Paved by the Aerospace, Automotive and Shipbuilding Industries," by Neculai C. Tutos, Vice President, Dassault Systemes.
8.3 McKinsey & Company: The "Reinventing Construction" Imperative

These days, McKinsey & Company is a very strong voice promoting "reinventing construction," a manufacturing-style production system, and national infrastructure programs as springboards for construction business process transformation [6, 8–10].
“A 5-10x productivity boost is possible for some parts of the industry by moving to a manufacturing-style production system” (McKinsey Global Institute, “Reinventing construction: A route to higher productivity,” February 2017). “National infrastructure programs are ideal springboards for business transformation.” (“Governments can lead construction into the digital era,” McKinsey & Company, April 10, 2019.
McKinsey & Company estimates that $57 trillion will have to be invested from 2013 to 2030 to meet the worldwide minimum demands for infrastructure programs. If construction productivity were to catch up with that of the entire economy, the industry's value added could rise by $1.6 trillion annually [4]. McKinsey & Company makes a very strong case for construction industry digitization; the London Summit 2018 was dedicated to this subject: "Major Project Delivery and Digital Transformation," Outcomes Report, December 2018 [8].
8.4 Governments, the Springboard for Construction Industry Transformation

"National infrastructure programs are ideal springboards for business transformation." ("Governments can lead construction into the digital era," McKinsey & Company, April 10, 2019)
National and regional infrastructure programs make a perfect springboard for an accelerated move to a digitally enabled distributed production system and to product lifecycle digital integration. Taxpayers primarily fund national infrastructure programs, and governments are in an excellent position to demand business process improvements when approving this massive spending. An intelligent move to a digitally enabled distributed production system is the way to create a national manufacturing sector capable of providing advanced manufacturing of construction components and modules in support of all infrastructure projects. This cannot happen overnight, but the gigantic size of most national infrastructure programs offers the necessary critical mass for accelerated business process transformation. Highways, railways, power transmission lines, the national power grid, water treatment and distribution facilities, harbors, airports, etc. will have to become intelligent facilities in support of optimal operations and of monitoring performance, reliability, security, and early detection of pre-failure conditions, Fig. 40.

There are extraordinary benefits from embracing 3D-based investment planning for infrastructure projects; first, it dramatically simplifies estimating the cost, schedule, and environmental impact of infrastructure projects, a process that today takes years for most such projects, as sketched below.
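As a hedged sketch of how 3D-based planning can simplify cost estimation, the snippet below rolls component quantities from a model database up into a first-pass cost figure. The component types, quantities, and unit rates are invented for illustration and do not come from any real project.

```python
from collections import defaultdict

# Quantities extracted from a 3D model database: (component type, quantity, unit).
model_takeoff = [
    ("concrete_deck", 1250.0, "m3"),
    ("structural_steel", 840.0, "t"),
    ("precast_girder", 36.0, "ea"),
    ("concrete_deck", 310.0, "m3"),
]

# Illustrative unit rates; a real estimate would draw on market data per region.
unit_rates = {
    "concrete_deck": 450.0,       # per m3
    "structural_steel": 2600.0,   # per tonne
    "precast_girder": 85000.0,    # per girder
}

def quantity_takeoff(items):
    """Roll the model's component quantities up by type."""
    totals = defaultdict(float)
    for kind, qty, _unit in items:
        totals[kind] += qty
    return dict(totals)

def direct_cost(totals, rates):
    """Multiply quantities by unit rates to get a first-pass direct cost."""
    return sum(qty * rates[kind] for kind, qty in totals.items())

totals = quantity_takeoff(model_takeoff)
print(totals)
print(f"estimated direct cost: ${direct_cost(totals, unit_rates):,.0f}")
```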
Fig. 40 No modern economy is possible without a modern national infrastructure
The cost of the largest infrastructure projects is heavily underestimated at approval, generating massive budget overruns and years of delay in delivering these projects. In too many cases, this also leads to low-quality construction. Most infrastructure programs are huge investments providing the critical mass to launch successful construction-manufacturing convergence at the national level. McKinsey & Company estimated that $57 trillion will have to be invested from 2013 to 2030 to meet the worldwide minimum demands for infrastructure programs. The world could save $1 trillion annually from a viable 60% improvement in project delivery.
8.5 Digital-Age Education of Engineers, Technicians, and Project Managers

Construction faces a growing difficulty in attracting engineering talent and labor. The image of "a low-tech industry" is not an attractive choice for a young generation that dreams of being part of the digital revolution. Universities that teach construction will have increasing difficulty in attracting talent; it is already happening. The imperative need to develop digital-age teaching methods was powerfully expressed by the call for action signed by personalities representing great universities, Gehry Technologies, and Dassault Systemes, Fig. 41.

Universities and technical schools in construction teach disjointed engineering and construction topics. Advanced digital technologies now make integrated teaching possible: teaching how it all connects and interacts, Fig. 42. The following is a provocative outline of what digital-age teaching in AEC can address.

1. Facility lifecycle management concepts
Universities must dramatically improve the teaching of lifecycle management and of how facility engineering impacts the construction, operations, and maintenance processes.
Fig. 41 Signatures on the first page of the Frank Gehry, Architect book, January 25, 2004, Santa Monica, CA, Frank Gehry office
Fig. 42 From fragmentation to integration in teaching engineering and construction methods and processes
Construction engineering teaching will have to focus on the Digital Twin concepts that can revolutionize the construction industry. Universities will have to teach the benefits of spending more time and money in the project engineering phase to dramatically reduce the lifetime cost and shorten the project delivery schedule.

2. Physical-functional integration
Production and service processes are characterized by space arrangements and by the systems supporting these processes. Systems mean production/service systems and utility systems such as the power system, water supply system, material handling system, and HVAC (heating, ventilation, and air conditioning) system. Failure to adequately teach the ways to deal with the engineering of space arrangements and with systems engineering is a significant weakness in construction engineering teaching.

3. Construction manufacturing concepts
Maximization of the manufactured content is a way to minimize traditional on-site efforts exposed to adverse weather and low-productivity conditions.
Teaching innovative project engineering for maximum manufactured content must include the engineering of modularization as necessary for optimal fabrication and on-site assembly.

4. Requirements management
Managing requirements is a very complex engineering process; it has to include the production/service process requirements regarding space and system specifications. The teaching of requirements management must clarify the specificity of the key industrial and service sectors such as health facilities, discrete manufacturing facilities, power generation facilities, national infrastructures, housing, etc. Requirements management must include the environmental, safety, security, budget, and schedule requirements, as well as city, regional, and national regulatory requirements.

5. Project management
Universities will have to improve the teaching of the planning of project engineering and construction processes. Teaching project management should include teaching supply chain management (SCM) concepts.

6. Structural engineering
Advanced structural analysis enables faster evaluation of structural alternatives. Failure to adequately teach how to meet architectural requirements and the optimal fabrication of structural components and modules is a weakness in teaching construction engineering. Loads management is another weakness in teaching structural engineering and a significant flaw in project structural engineering. 3D-based project engineering provides the means to correctly identify and quantify passive and dynamic loads.
9 From Wooden Mock-Up to Digital Twin: An Amazing Story of Progress

9.1 When the Product Twin Was a Wooden or Plastic Mock-Up

For centuries, drawings have been the only way to communicate the 3-dimensional intent in the design of products, as required to make their fabrication possible. Using wooden and plastic 3D mock-ups became a common practice in designing highly complex products, such as airplanes and nuclear power plants, which face very high safety requirements. Aerospace favored the use of full-size wooden models. The nuclear power sector used reduced-scale plastic models; there was no way to build a full-scale plastic model of a nuclear power plant. The cost of building a reduced-scale plastic mock-up for a nuclear power plant was $16 million, a very high price for the 1980s.
Fig. 43 Concorde, full-scale wooden mock-up. (https://www.heritageconcorde.com/concorde-production-and-construction)
The story of Concorde, considered the most glamorous airplane ever built, illustrates the evolution of 3D-based engineering. The UK and France collaborated on the engineering and production of Concorde; the collaboration treaty was signed in 1962. The first commercial flight took place in 1976 and the last one in 2003, a total of 27 years of service. A full-scale wooden mock-up was created to validate the design, Fig. 43. Additional mock-ups were built for various purposes during the project, including the flight deck used for instrument equipment layout arrangements.
9.2 The Advent of Digital 3D Modeling

CADAM (Computer-graphics Augmented Design and Manufacturing) began as an internal mainframe application, referred to as "Project Design," within Lockheed's Burbank, California, operation in 1965. By the end of the 1970s and the early 1980s, many organizations started to deliver digital Computer-Aided Design (CAD) solutions for the manufacturing and construction sectors. In 1974, CAD Centre, a British organization, offered the Plant Design Management System (PDMS), a 3D-based plant design solution. In 1978, Dassault Aviation, France, offered its Conception Assistée Tridimensionnelle Interactive (CATI) solution, later renamed CATIA when Dassault Systemes was established as a subsidiary of Dassault Aviation.
In 1982, Lockheed started to use the first 3D CADAM solution for its aircraft design. In 1981, Intergraph launched its 3-dimensional plant design system; by 1985, Intergraph was the second largest CAD vendor after IBM. At that time, IBM was the vendor of Lockheed's CADAM solutions and of Dassault Systemes' CATIA. Bentley Systems started to offer its MicroStation solution in 1984. Autodesk, founded in 1982, began to provide its 3D-based solutions in 1985. In 1980, CALMA began to provide its Computer-Aided Design/Drafting/Documentation for Engineers and Contractors (CADEC) solution for the construction industry.

During the early 1980s, major construction organizations firmly committed to embracing digital 3D modeling. According to Engineering News-Record of January 10, 1985, Bechtel Power Corporation, Gaithersburg, MD, developed its 3D software based on the Intergraph 3D engine. Fluor Corporation, Irvine, CA, committed $14 million in 1985 to begin a staged conversion to 3D design; CALMA hardware and software represented $6 million of this investment.
9.3 Boeing 777, a Giant Step to Digital-Age Aerospace Engineering

The Boeing 777 was the first paperless aircraft, the first enormous step to digital-age aerospace engineering. By the end of the 1980s, Boeing decided to use CATIA 3D modeling for the design of a new commercial jet, the 777. At that time, CATIA was a mainframe application sold by IBM. At the peak of the 777 design effort, more than 2200 CATIA workstations were networked into a cluster of eight IBM 3090-600J mainframe computers, the largest cluster in the world at that time. This networked environment supported interactive collaboration with suppliers of parts, assemblies, and systems worldwide.

The digital modeling effort for the 777 commercial jet, launched in 1990, proved to be an extraordinary success. At the beginning of the digital modeling effort, Boeing was not entirely convinced of the accuracy of digital modeling and decided to build a full-scale wooden model (a wooden mock-up) of the jet's complex nose. The experiment proved the accuracy of the digital model, and Boeing stopped creating wooden mock-ups for the 777 and future aircraft. CATIA proved its value and won the day!
9.4 River Bend Nuclear Station: 40 Years of Digital Success!

In 1981, Gulf States Utilities (GSU), now part of Entergy, took the initiative to deploy 3D digital technology to support the River Bend Nuclear Station construction. Arguably, this is the only digital 3D model that has continued to be used
after 40 years and has proved helpful in design, construction, licensing, and operations. Construction Systems Associates of Atlanta was retained to provide the 3D-modeling services. At the time, 3D plastic models were used to detect design errors and prevent costly modifications during nuclear power plant construction. Building plastic models was far from a practical solution: the modeling cost was very high, and it was almost impossible to modify the plastic model when errors were detected. The superiority and flexibility of 3D digital modeling motivated GSU to discontinue the plastic modeling effort. The computerized model effectively detected and corrected design issues and supported complex engineering applications such as construction process planning, welding inspection programs, automated structural steel analysis, and jet impingement simulation scenarios.

Over 40 years later, the River Bend 3D computerized model continues to support plant maintenance and modification projects. GSU was right to launch the world's first large-scale 3D computerized modeling effort for a nuclear unit under construction. These days, 40 years later, River Bend's 3D model has expanded to more than 350,000 components to support plant reliability and performance in operations, Figs. 44, 45, and 46.

Fig. 44 River Bend nuclear station, reactor building, view number V47 East. Color coding indicates subcontractor
Fig. 45 River Bend, nuclear station, reactor building, view number V24. Color coding by engineering discipline
Fig. 46 River Bend, nuclear station, reactor building, view number V10-4, to include the crane structure. Color coding by engineering discipline
The 3D computerized model was the best way to generate on-demand construction drawings, composites, and detailed views and cuts, as required to support the construction phase. River Bend's 3D modeling system was the first of its kind for a nuclear power plant. The complex 3D views are of great value in assessing global conditions for the design and any construction in progress. More critical is the ability to create detailed on-demand 2D and 3D views required to support daily engineering, construction, and licensing activities. It takes only a few minutes to generate any view from any direction, with any selection of components, as needed for a global assessment of the work in progress and of possible engineering or construction concerns.
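The on-demand views described above, any selection of components color-coded by discipline or by subcontractor as in views V24 and V47, amount to filtering the component database by attribute. A minimal sketch follows; the attribute names, tags, and color table are assumptions for illustration, not the actual CSA data model.

```python
# A few components from a plant database, each carrying the attributes used for views.
components = [
    {"tag": "PIPE-104", "discipline": "piping", "subcontractor": "Alpha Mechanical"},
    {"tag": "TRAY-E07", "discipline": "electrical", "subcontractor": "Beta Electric"},
    {"tag": "BEAM-S12", "discipline": "steel", "subcontractor": "Alpha Mechanical"},
    {"tag": "PIPE-221", "discipline": "piping", "subcontractor": "Beta Electric"},
]

# Illustrative color table for discipline-coded composite views.
palette = {"piping": "green", "electrical": "orange", "steel": "blue"}

def make_view(components, group_by, include=None):
    """Select components by an attribute and return (tag, group, color) for display."""
    view = []
    for c in components:
        group = c[group_by]
        if include is not None and group not in include:
            continue
        view.append((c["tag"], group, palette.get(c.get("discipline"), "grey")))
    return view

# A composite view color-coded by discipline (in the spirit of view V24) ...
print(make_view(components, "discipline"))
# ... and a view restricted to one subcontractor's scope (in the spirit of view V47).
print(make_view(components, "subcontractor", include={"Alpha Mechanical"}))
```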
9.5 From 3D Product Modeling to PLM

By the early 1990s, CAD solution providers had begun to add product lifecycle management functionality to their 3D capabilities and embraced the term Product Lifecycle Management (PLM) in promoting the new concept. PLM provides superior functionality in support of product component manufacturing and of the engineering and construction of production lines, including tooling and robotics. Digital 3D modeling is the language that drives modern tooling in manufacturing product parts and in assembly processes. Digitally driven machinery and robotics "understand" digital instructions; the secret is to generate process instructions directly from the product's digital definition, Fig. 47.

The digital manufacturing business model is now a reality in aerospace, automotive, shipbuilding, and all manufacturing sectors that deal with complex products. The making of the Boeing 787, the Dreamliner, is a great example: near error-free large assemblies are delivered worldwide for a final assembly process that takes only days. The 787 success motivated Boeing to invest in 3D modeling of the old 747 to determine ways to reduce the airplane's assembly duration. The first delivery of a Boeing 747 was in 1968, when no advanced digital technologies were available. To enable "advanced manufacturing for the 21st century," Boeing completed a five-year effort to modernize the 747's design/build process.
Fig. 47 Digital manufacturing, from human-controlled equipment to digitally controlled robotics and tooling
More than 10,000 engineering drawings for the airplane's huge fuselage were digitized into data sets, enabling the production of highly accurate parts. These data sets also allow for laser-guided assembly of skin panels in all-new tooling. The 747-400 uses state-of-the-art assembly processes to ensure high product quality, reduce delivery cycle times, and lower maintenance and production costs. Source: "The Boeing 747-400 Family: The Right Choice for the Large Airplane Market," May 2006, www.boeing.com
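The idea that digitally controlled tooling "understands" instructions generated directly from the product's digital definition can be sketched in a few lines. The part definition and the G-code-like output below are hypothetical and greatly simplified; they are not Boeing's or Dassault Systemes' actual data or toolpath format.

```python
# A toy digital part definition: fastener hole features with positions and diameters (mm).
part = {
    "name": "skin-panel-47",
    "holes": [
        {"x": 25.0, "y": 40.0, "dia": 4.8},
        {"x": 75.0, "y": 40.0, "dia": 4.8},
        {"x": 125.0, "y": 40.0, "dia": 6.4},
    ],
}

def drilling_program(part, safe_z=10.0, depth=-3.0, feed=120):
    """Emit simple G-code-like drilling instructions directly from the feature list."""
    lines = [f"( drilling program generated from {part['name']} )"]
    for i, hole in enumerate(part["holes"], start=1):
        lines += [
            f"( hole {i}: diameter {hole['dia']} mm )",
            f"G0 Z{safe_z:.1f}",                      # retract to a safe height
            f"G0 X{hole['x']:.1f} Y{hole['y']:.1f}",  # rapid move over the hole
            f"G1 Z{depth:.1f} F{feed}",               # drill to depth
            f"G0 Z{safe_z:.1f}",                      # retract
        ]
    return "\n".join(lines)

print(drilling_program(part))
```

Because the instructions are derived from the same digital definition used for design, a design change propagates to the process instructions without re-drafting, which is the point the section makes about digitally driven tooling.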
9.6 Digital Twin: The Culmination of Four Decades of Digital Progress

It started with the introduction of 3D-modeling capabilities during the early 1980s. The Boeing 777 was the first paperless commercial jet engineering and production project. Product Lifecycle Management (PLM) solutions brought the digital revolution to a much higher level: PLM was the gigantic step from product engineering to lifecycle engineering. PLM's success motivated the development of the Digital Twin (DT) concept, applicable to physical and non-physical products and to services. In simple terms, the digital twin is the product and process replica that fully validates the project concept. It may seem that the creation of the digital product/service adds to the duration and the cost of creating the actual ones. Forty years of success show that the total project duration is shorter and the cost lower, with a dramatic reduction in project risk. The cost of creating the digital twin is dramatically reduced by exploiting the repeatability of the product/service components and processes.

Digital Twin, Its Product and Process Components The creation of complex physical products is about the engineering of the product, the manufacturing process, and the operations and maintenance instructions. The making of the product includes the manufacturing of parts, the pre-assembly of product modules, and the final product assembly. To support this level of engineering, the Digital Twin must consist of the product definition and the definitions of all the processes involved. The product definition must also include the product's physical and functional configurations, Fig. 48.

Today's physical products are no longer digital-free entities; they incorporate digitally controlled functionality and alerting capabilities. They include instrumentation that makes possible real-time remote detection of pre-failure conditions and malfunctioning. Real-time remote monitoring of physical systems and equipment adds a new dimension to the integration of the material and digital realities of the product.

Fig. 48 Digital Twin components for complex physical products
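A minimal sketch of the structure in Fig. 48: a twin that couples the product's physical and functional configurations with process definitions and live telemetry, and that raises alerts on pre-failure conditions. Field names, limits, and readings are illustrative assumptions, not a prescribed digital twin schema.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Toy twin: product definition, process definitions, and live telemetry in one object."""
    product_id: str
    physical_config: dict                           # geometry, materials, as-built data
    functional_config: dict                         # systems, ratings, operating limits
    processes: dict = field(default_factory=dict)   # manufacturing / assembly / maintenance
    telemetry: dict = field(default_factory=dict)   # latest sensor readings

    def pre_failure_alerts(self):
        """Compare live readings with functional limits and return alert messages."""
        limits = self.functional_config.get("limits", {})
        return [
            f"{self.product_id}: {channel}={value} exceeds limit {limits[channel]}"
            for channel, value in self.telemetry.items()
            if channel in limits and value > limits[channel]
        ]

twin = DigitalTwin(
    product_id="pump-unit-7",
    physical_config={"impeller_dia_mm": 320, "material": "duplex stainless steel"},
    functional_config={"rated_flow_m3h": 450,
                       "limits": {"bearing_temp_C": 85.0, "vibration_mm_s": 7.1}},
    processes={"maintenance": ["inspect mechanical seal every 4000 h"]},
    telemetry={"bearing_temp_C": 91.0, "vibration_mm_s": 4.2},
)
print(twin.pre_failure_alerts())   # -> one bearing-temperature alert
```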
Acknowledgments The development of this document is based on decades of extensive experience in construction and manufacturing, mostly from dealing with projects of extreme complexity. Sir John Egan is known for unparalleled success in dealing with challenging business transformation projects in manufacturing, as the CEO of Jaguar, and in the construction of airports, as the CEO of the British Airports Authority. The approximately $5 billion Terminal 5 project is one of the most successful projects in the construction of modern airports, delivered on time and within budget. Terminal 5 is an extraordinary success in redefining the client's and investor's role in construction projects and in risk management for a very complex project. Sir John Egan chaired the Rethinking Construction task force commissioned by the British Government. The Egan Report is a compelling call for construction-manufacturing convergence and for adopting a manufacturing-style production system. Later he was President of the Confederation of British Industry and of the British Institute of Management.

Dr. Tutos has construction site experience and decades of experience in the development of digital-age engineering applications and in the implementation of these solutions all over the world. As vice president of Stone & Webster Engineering Corporation, he coordinated advanced engineering systems development and implementation for petrochemical projects, nuclear power plant projects, and extensive decommissioning and decontamination programs. As vice president of Dassault Systemes, Dr. Tutos coordinated the development of applications for the construction industry. For ten years, as a member of the Corporate Advisory Board of the Civil Engineering Research Foundation (CERF), he managed the Learning from Manufacturing initiative supported by major US construction organizations.

Comments and recommendations have been provided by Professor Daniel W. Halpin and Robert (Bob) Brown. Daniel W. Halpin, Ph.D., Dist. M.ASCE, professor emeritus of Civil Engineering, Purdue University, is generally recognized as one of the world's teaching authorities on the use of simulation in studying construction processes. He is an elected member of the National Academy of Construction. Robert (Bob) Brown pioneered production process simulation for manufacturing and construction. He was the president of ABB Robotics in North America and then president of a software start-up, Deneb Robotics, a simulation software company. At Dassault Systemes, he helped world-class aerospace and automotive companies such as Boeing, Lockheed Martin, Northrop Grumman, General Dynamics, Chrysler, General Motors, and Ford launch process simulation programs.

Amadeus Burger, founder and president of Construction Systems Associates (CSA) Inc., Atlanta, GA, contributed significantly to the development of this document. From the early 1980s, for more than 40 years, CSA has provided solutions and services for extensive digital 3D-modeling programs for nuclear power plants in the USA, Europe, Japan, and South Africa.
McKinsey & Company, one of the top management consulting organizations, is today a powerful voice for "reinventing construction" and for embracing a manufacturing-style business model. A number of McKinsey & Company studies are referenced in this document.
References
1. Rethinking construction, report. https://constructionexcellence.org.uk/wp-content/uploads/2014/10/rethinking_construction_report.pdf
2. Doherty, S. (2008). Heathrow's T5: History in the making. Wiley.
3. Egan, J. (2015). Saving Jaguar. Porter Press International Ltd.
4. McKinsey Global Institute. (2017). Reinventing construction: A route to higher productivity.
5. McKinsey & Company. (2018). Imagining construction's digital future.
6. McKinsey & Company. (2016). Voices on Infrastructure: Rethinking engineering and construction.
7. McKinsey Global Institute. (January 2013). Infrastructure productivity: How to save $1 trillion a year.
8. McKinsey & Company. (2018). London 2018 Summit: Major project delivery and digital transformation. Outcomes report.
9. McKinsey & Company. (2019). Breaking the mold: The construction players of the future.
10. A comprehensive assessment of America's infrastructure. (2021). Infrastructure report card. American Society of Civil Engineers (ASCE). www.infrastructurereportcard.org
Sir John Egan, BSc Petroleum Engineering, Imperial College, London; MSc, London Business School. As Chief Executive of the British Airports Authority plc (BAA), he launched the most successful construction industry transformation program for an over $5 billion airport construction project, Heathrow's Terminal 5, the only large airport project finished on time and on budget. Sir John Egan chaired the government-sponsored Rethinking Construction task force that produced what became known as the Egan Report, the first strong call for rethinking the fundamentals of the construction industry and for construction–manufacturing convergence. Heathrow's Terminal 5 project involved over 50,000 people from 20,000 companies and was finished on time, on budget, and safely; it is considered a great success in the history of the construction industry. The T5 Agreement was the first contractual environment that made fundamental changes to the role of the client in project planning and delivery. "A committed, highly involved intelligent client who over the years has pioneered new thinking and different ways of operating in partnership with their supply chain", noted Anthony Morgan, Partner, PricewaterhouseCoopers, when assessing the project's success. As CEO and Chairman of Jaguar plc for ten years, he saved Jaguar from bankruptcy.
Sir John Egan held major leadership positions in a number of major British industrial organizations and in higher education, including:
• President, Confederation of British Industry
• President, British Institute of Management
• Chancellor, Coventry University
Neculai C. Tutos, M.S. in structural engineering, Ph.D. in computer science. Decades of experience all over the world in construction and in the development of advanced computer-aided engineering solutions. Dr. Tutos was for ten years a member of the Corporate Advisory Board of the Civil Engineering Research Foundation (CERF), whose membership included over 100 top US engineering and construction organizations. Dr. Tutos represented Dassault Systèmes, the world leader in 3D-based modeling and simulation solutions for the aerospace, automotive, shipbuilding, and construction industries. CERF international symposia attracted the participation of over 2000 executives from all over the world. Dr. Tutos was in charge of promoting construction–manufacturing convergence based on digital-age engineering. After graduation, Dr. Tutos started as a site engineer in construction and was in time promoted to Chief Engineer of Technologies for a construction company that employed 11,000 people. Dr. Tutos was General Director of a national-level computer institute for the construction industry; in that position he served as a member of the Board of the National Department of Industrial Constructions. Upon arriving in the US as a political refugee, Dr. Tutos was hired as Manager of Projects for Construction Systems Associates in Atlanta, GA, in charge of 3D-based engineering applications for nuclear power plants; in that position he introduced 3D-based construction project scheduling for the first time in the USA. Dr. Tutos was hired as a Consulting Engineer by Stone & Webster Engineering Corporation in Boston, MA, and was later promoted to Vice President for the Development of Advanced Systems and Services, in which position he managed the development of 3D modeling and simulation for nuclear power plants and petrochemical facilities. Dr. Tutos was for ten years a Vice President with Dassault Systèmes, the world leader in 3D-based modeling and simulation for the aerospace, automotive, shipbuilding, and construction industries, where he managed the development and worldwide implementation of computer-aided engineering for the construction and shipbuilding industries. Dr. Tutos was on the board of directors of Gehry Technologies (GT). In collaboration with Dassault Systèmes, GT developed and implemented the most advanced Digital Twin solutions for architecture. Frank Gehry's buildings are admired all over the world; the implementation of smart digital twin technology allowed the construction of architecture that had previously been considered impossible to build.
Recently, as co-founder and president of Klaro Consulting, Dr. Tutos has promoted remote real-time monitoring solutions for mission-critical facilities, including data centers. Dr. Tutos has taught structural engineering and computer-aided construction engineering at the university level.
Thriving Smart Cities Joel Myers, Victor Larios, and Oleg Missikoff
Abstract Sustaining our cities by providing them with the tools to flourish is a key objective if we are to safeguard humankind and ensure that the places we live, work and play in remain resilient. The ecosystems that provide our urban livelihood and quality of life are highly complex and extremely fragile. City leaders strive to keep up with growing populations, limited resources, and the expectations of society. Digital twins, in which the physical world is mirrored in technology with real-time data, are already demonstrating how invaluable they are in providing resilient and sustainable solutions and helping to deliver the promise of smart cities. In their most basic form, digital twins of cities, i.e., interactive models of cities, are already being used by city stakeholders for understanding, communication, and simulation in key areas such as urban planning, mobility, and resource management. They drastically reduce costs, not only during design, and provide a far greater likelihood of success, community engagement, and adoption. As technology evolves and open urban data becomes readily available and interoperable, the future of digital twins promises to deliver decision support systems: platforms that will model not just the physical world but also integrate the complexity that reflects the many facets and knock-on effects of our cities, with their differing social, cultural, political, and multidisciplinary flavours. These decision support systems will provide local authorities with digital twins of their cities: the core tools to help them, on both strategic and operational levels, to build evolving strategies; simulate and communicate what they wish to achieve to stakeholders and the local communities involved; analyze the budgets and resources required to achieve success; monitor solutions based on KPIs; and predict pain points and resolve them before
J. Myers (*) CEO of DOMILA Limited, Dublin, Ireland; Chair of the Smart Cities WG for the IEEE IoT Initiative, Dublin, Ireland
V. Larios Director of the Smart Cities Innovation Center at the University of Guadalajara, Guadalajara, Mexico
O. Missikoff Chair of Advisory Board, Earth 3.0 Foundation, London, UK
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_30
they become reality. The goal of digital twins for cities is to help bring about resilience and sustainability and to stimulate smart growth so that cities remain competitive and thrive.
Keywords Data · Communications · Communities · Decision-support · Digital Twins · Mirrors · Monitoring · Prediction · Problem-solving · Resilience · Simulations · Smart cities · Stakeholders · Sustainability · Technology · Visualization
DIGITAL TWINS are the KEY to unlocking the complex and highly intricate mechanisms that underlie the cities and towns where we live, work and visit. The adoption of Digital Twins is critical for thriving Smart Cities of the future.
1 Introduction
Today, over half of the world's population lives in cities, and that number is growing day by day as people flood towards urban areas to get a higher standard of education, find remunerative work, or look for a better quality of life (Fig. 1). The UN projects that by 2050 over 68% of the world's population will live in urban areas [1]. Cities are growing exponentially, and we now use the term "megacity" to distinguish urban agglomerations of over ten million inhabitants from ordinary cities. There are currently 33 megacities, with another 10 on their way in the next 8 years [2].
Fig. 1 Urban population (% of population, 2014). (Image from World Mapper, map 297, 2015)
Fig. 2 Evolution of New York City's skyline over a century. (Image from lee, tier1dc.blogspot.com, 2010)
1.1 Cities Are Evolving (Fig. 2)
It is an evident statement of fact, but most probably the hardest factor for city leaders and stakeholders to manage is that their cities are "evolving", all the time. A "growing" city is an inherent amalgamation of urban mechanisms, complex interactions, and hard decisions to be made, from the long term to real time, where technology and data can ease the pain, reduce the hours and difficulty required, and guide the decision-making. However, evolution is one thing, and growth is another. The paths that towns and cities take are all very different. History demonstrates that urban evolution is not always a positive experience. Instead of growth, evolution may lead to decline and the dying out of the places we live and work in. Take the example of New York State and its northern tier: places like Albany, Rochester, and Buffalo in the west have all suffered declines, but were once thriving and growing. Many cities that came to prominence in the industrial age then turned into the rust belts of their nations. Cities in the United Kingdom like Liverpool or Manchester suffered enormously in the post-industrial age. From the mid-1980s, mining towns across the nation were hit with closures. The coal industry, which at its height had
once given work to over one million people across the United Kingdom, was deemed by government to be inefficient, as policy moved toward greater dependence on imported coal, oil, gas, and nuclear energy. Globally, the decline of urban areas has recurrently been a result of losing resilience and a historic competitive edge, coupled with an inability to make necessary change happen. Even when urban growth is achieved, we also recognize that it does not necessarily lead to positives for all segments of society. Growing cities quickly attract poorer populations, who settle to satisfy the needs of the more affluent. Urban development cannot always keep up with these transitions, leading to poor housing development and services, fragile shanty towns and favelas. Towns and cities do not live in isolated vacuums. Additional complexities have to be worked into decision-making, where national, regional, and even global dependencies need to be considered. Beijing in China and the State of California in the United States get their water supply from hundreds of kilometres away. If this supply were blocked or reduced drastically, it would affect the lives of tens of millions of people in a matter of just a few days. The resilience and sustainability of our cities and their people rely on ensuring these dependencies are factored into all equations. State governments and municipal leaders need help to better understand, monitor, reinvent and nurture these urban areas to mitigate changing circumstances and drive growth. The reality of evolving towns and cities is a very real and complex path for leaders and stakeholders to deal with on a daily basis. Digital Twins of Cities are already becoming the game-changer in dealing with evolving economies, societies, supply chains, education, and public safety.
1.2 This Is Where Digital Twins Kick In
Over the past few years, Digital Twins of Cities have started popping up all over the globe, from Singapore (Virtual Singapore), South Korea, China and Australia in Asia; to the European Union (via the DUET and Living EU programs); the UK, through its government-funded National Digital Twin Programme; and the US, driven by the Digital Twin Consortium. Unlike a washing machine or apples, i.e., any other product we need to sustain or improve our lives, a digital twin cannot simply be bought off the shelf.
Digital Twins of Cities Are a Journey Technologies, processes, data, and toolsets provide some of the core building blocks for digital twins of a city. However, it is a long road, which requires investment, confidence, and perseverance to develop Digital Twins of Cities to the levels of practice that will bring real value and benefits. You can get immediate results from each step, but to get the ultimate power and value you must embrace the adventurous journey.
It is highly worthwhile, and we hope that this chapter will help you get started and see the incredible benefits that it can give society in achieving sustainable and thriving cities. Here is an outline of what is to come on Digital Twins of Cities, below (Fig. 3):
Fig. 3 A Guide to the journey of smart cities. (Image from Joel Myers, 2022)
2 Digital Twins of Cities
It is generally believed that physical entities, virtual models, data, connections, and services are the core elements of digital twins [3]. The essence of digital twins is a bi-directional mapping relationship between physical space and virtual space. This bi-directional mapping is different from a unidirectional mapping, which only maps data from physical entities to digital objects. In simple terms, it is the consolidation of three components (Fig. 4):
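To make the bi-directional mapping concrete, a purely illustrative sketch in Python follows: a twin object that ingests state from the physical asset (physical-to-virtual) and pushes commands back to it (virtual-to-physical). The class, method, and field names are hypothetical and are not taken from any platform or project discussed in this chapter.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class StreetlightTwin:
    """Hypothetical digital twin of one streetlight (illustrative only)."""
    asset_id: str
    state: Dict[str, float] = field(default_factory=dict)        # virtual copy of physical state
    actuator: Callable[[str, float], None] = lambda k, v: None   # channel back to the device

    def ingest(self, reading: Dict[str, float]) -> None:
        """Physical -> virtual: update the twin from a sensor reading."""
        self.state.update(reading)

    def command(self, setting: str, value: float) -> None:
        """Virtual -> physical: push a decision back to the asset."""
        self.state[setting] = value
        self.actuator(setting, value)

# Example: dim the lamp when ambient brightness is high
twin = StreetlightTwin("lamp-042", actuator=lambda k, v: print(f"send {k}={v} to device"))
twin.ingest({"ambient_lux": 180.0, "power_w": 60.0})
if twin.state["ambient_lux"] > 150.0:
    twin.command("dim_level", 0.3)
```

A unidirectional mapping would keep only the ingest step; it is the command path back to the physical asset that distinguishes the bi-directional mapping described above.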
2.1 Digital Twins of Cities: What Do They Look Like?
For the past few years Digital Twins of Cities have been going through a metamorphosis and receiving a lot of media attention. Type the three words "digital twin city" into a Google search and you will be inundated with articles and reports that are building global momentum, with headlines like "Virtual cities: Designing the metropolises of the future" (BBC, 18 January 2019), "Can Digital Twins Change Our World?" (Forbes Magazine, 15 February 2021), and "Digital twins set to save cities $280 billion" (Cities Today, 9 September 2021). The reason we are seeing so much written in the international media is that countries and cities have already made important investments in the development of smart cities and Digital Twins of Cities. Today, if you look at the results, there have already been some amazing successes. However, we should tread with a touch of caution: some initiatives have received blowback, and there are some abject failures. The media is not simply publishing hype, as the journey has begun, but it is a learning curve, and the ability to share and understand lessons learnt is invaluable. Over the past few years, Digital Twins of Cities have started popping up all over the globe, from Singapore (Virtual Singapore), South Korea, China and Australia in
Fig. 4 Snapshot of digital twins of cities. (Image from Joel Myers, 2021)
Table 1 Digital twins of cities: 3 levels of intelligence
Image from Joel Myers (2021)
Asia; to the European Union (via the Local Digital Twins and Living EU programs); the UK, through its government-funded National Digital Twin Programme; and the US, driven by the Digital Twin Consortium. Today, most of these digital twins are pilot projects, but according to ABI Research, a global tech market advisory firm, within the next 4 years over 500 cities will deploy full-scale digital twins across multidisciplinary use cases. Michael Jansen, Chairman and CEO of CityZenith, goes as far as to say that by 2025 the world's most important cities will be guided by their digital twins in taking decisions on many common city functions [4] (Table 1). The journey of investing in and developing Digital Twins of Cities takes you from the bottom to the top of the above pyramid. The enormous advantages, lessons learnt, and cost savings for a city can be felt at each level. The aim is to achieve Digital Twins of Cities that not only integrate the physical elements of a city but incorporate and reflect all the elements that constitute a city: its services, processes, and social systems. This culmination leads to a Smart City Digital Twin (SCDT), the most powerful decision-making tool that a city leader or stakeholder can be given for dealing with the complexity and the critical need for sustainability in today's cities.
2.2 Digital Twins of Cities: Bringing Incredible Value?
Digital Twins are already bringing highly critical value to city leaders and stakeholders, helping to untangle and resolve some of the complexity within cities, with impact on a virtually limitless number of areas of city life and decision-making to come very soon (Table 2). The first generations of digital twins for cities are already being used, by urban planners for example, as a set of practical technology tools which cut the manual hours spent processing information to a fraction and simplify decision-making. However, these are just the first baby steps on the journey that Digital Twins promise to deliver: giant steps are still needed to unravel the mysteries of how cities function and how they can best be managed for humankind in delivering resilience and sustainability.
2.3 Digital Twins of Cities: 7 City Challenges
So, apart from growing city populations, what else should we be looking out for when developing and delivering a Digital Twin of a City? Cities are extremely complex ecosystems. There is a constant constraint on the basic resources available, from water and energy to funding, and, as we said earlier in this chapter, the physical, social, and economic sides are continuously evolving. Through digital transformation, technology and data can help enormously to deal with these
Table 2 The value of digital twins in cities
Image from Joel Myers (2022)
constant challenges. Through Digital Twins of Cities, these challenges can be simulated, assessed, and monitored, and predictions can be made. Add to these constraints the core social systems that underlie our cities, where social well-being sets the standard for whether to live in a city or not; our growing expectations of what we require from our cities and their governments; and a highly fluid ranking of our priorities. All of these challenges make decision-making by city leaders and stakeholders, for now and for the near- and long-term future, hugely complex. This is where technology, data and digital twins come into their own. So, here is the list of "7 City Challenges" that a full-scale "Smart City Digital Twin" (SCDT), as a Decision Support System, will help national and local governments and urban stakeholders resolve (Table 3):
3 Smarter Cities
To drive and deliver the intelligence required for the needs of today's cities and their people, three main ingredients are key to forming what are known as "Smart Cities" (Fig. 5). It is critical to ensure a people-centric approach to building Smart Cities and Digital Twins of Cities in order to succeed. Local populations must be engaged and highly involved, not just in the adoption of technology solutions for the city but as an integral part of the process. Only with this framework in place can digital transformation of any kind have an ongoing future. "We define digital maturity as aligning an organization's people, culture, structure, and tasks to compete effectively by taking advantage of opportunities enabled by technological infrastructure, both inside and outside the organization." Gerald C. Kane, Anh Nguyen Phillips, Jonathan R. Copulsky and Garth R. Andrus, "The Technology Fallacy: How People Are the Real Key to Digital Transformation", The MIT Press, 2019.
3.1 Smart Cities: The Core Building Blocks
Although there is no standard set of technologies used for building Digital Twins of Cities, the above provides a package of the main components. Digital Twins can range from simple to highly complex multidisciplinary systems that require systems thinking and tools for distributed systems analysis (Fig. 6):
Table 3 The 7 city challenges
Image from Joel Myers (2021)
3.2 Smarter Cities: The "People-Centric" Approach Through Digital Twins
Whatever term or definition we use for "Smart Cities", we know that technology and access to intelligent data are at the core of providing sustainable and resilient cities. Not simply for providing efficiencies for a city's limited resources, but in
Fig. 5 Ingredients for smarter cities. (Image from Joel Myers, 2021)
Fig. 6 Core Building blocks of smart cities. (Image from Joel Myers, 2021)
stimulating local economies, providing social well-being to citizens, and defining the type of city we want for future generations (Fig. 7). Digital twins for cities already offer a fantastic way of making digital transformation and smarter cities more "people-centric" by offering a direct means of communication between the stakeholders affected by an issue and its solution. Digital twins in the form of visual aids, i.e., digital models, such as of new urban developments, allow locals to join the conversation on the effects and outcomes of possible solutions by comparing what they see today with visual options of tomorrow.
Fig. 7 Smart cities: what are they and how do these intelligent cities function. (Image from Leyton, 2021)
3.3 Smarter Cities: Initiatives
Smart City initiatives have now been around for over 15 years; that is, if we do not count Amsterdam's virtual digital city project, which began back in 1994 [5]. CISCO and IBM created the initial traction in the mid-2000s with their global drives: the "Connected Urban Development programme" (CISCO, 2005–2010), the "Smarter Planet" initiative (IBM, 2008), and the "Smarter Cities programme" (2009) [6]. Then in 2011, Barcelona held the first Smart City Expo World Congress and the smart cities boom began. There are now hundreds of cities termed, ranked or indexed as "smart" by institutes such as the IMD, and the huge surge in smart city projects and solutions is predicted to reach over US$2.46 trillion in business opportunities by 2025 (Fig. 8) [7]. Over the past few years, the tendency for local governments has been to switch from a "resource efficiency"-based Smart City framework to a more "people-centric" framework, where community and citizen engagement and a focus on social well-being have become core. This new trend is the result of the changing relationship between people and governance due to the digital revolution that has, today, created a hyper-connected and collaborative society.
3.4 Smarter Cities: Areas of Impact
Smart City frameworks, strategies, pilots, and commercial projects are already providing thousands of success stories for the value of technology and data in urban digital transformation, and preparing the groundwork for the next step, their Smart City Digital Twins. Most projects have followed a "silo" approach, without multidisciplinary integration, in high-impact areas of cities, covering (Fig. 9):
Fig. 8 Smart city: building tomorrow’s cities. (Image from Bees Communications, 2019)
3.5 Smarter Cities: People-Centric Success Stories
Just some of the "people-centric" smart city projects ongoing globally include (Table 4):
4 Urban Data
The quantity of data being created globally is astronomical. Hundreds of exabytes of data are generated every year. Every 2 days we produce more data than in all of history prior to 2003 [8]. This surge in data generation is led by the omnipresence of ICT in our everyday practices and IoT sensors within urban environments. The drive for smart cities has resulted in the dawn of a wealth of urban data. Yet we are in the early stages of urban data production. By 2025 we are expected to reach close to 35 billion IoT devices in cities, producing 79.4 zettabytes of unstructured information [9].
4.1 Urban Data: Where from? When it comes to big data, in the technical world usually only three Vs are key: 1. Volume: amount of data generated
Fig. 9 Impact areas of smart cities. (Image from Joel Myers, 2021)
2. Velocity: the speed with which data are being generated
3. Variety: structured and unstructured data, generated either by humans or by machines
However, in order to develop digital twins of cities, we need to go beyond these three Vs to ensure a qualitative approach to big data:
4. Veracity: trustworthy data source, type, and processing
5. Visibility: context is key – what is the data telling me?
6. Variability: the number of inconsistencies and the inconsistent speed of access to data
7. Value: the monetary cost to gather, store and curate data, and its relative added value (Fig. 10)
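As a hedged illustration only, a city data team might record candidate sources against these seven Vs with a simple scorecard before admitting them into a digital twin pipeline; the field names, scoring scale, and admission threshold below are invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class DataSourceProfile:
    """Illustrative scorecard for one urban data source against the seven Vs."""
    name: str
    volume_gb_per_day: float   # Volume
    latency_seconds: float     # Velocity
    structured: bool           # Variety, simplified here to a single flag
    veracity: float            # 0.0-1.0: trust in the source and its processing
    visibility: float          # 0.0-1.0: how well the context is documented
    variability: float         # 0.0-1.0: 1.0 means very consistent access and schema
    value: float               # 0.0-1.0: added value relative to acquisition cost

    def admit(self, min_quality: float = 0.6) -> bool:
        # Hypothetical rule: every qualitative V must clear a minimum bar
        return min(self.veracity, self.visibility, self.variability, self.value) >= min_quality

sources = [
    DataSourceProfile("traffic-loop-sensors", 12.0, 2.0, True, 0.9, 0.8, 0.7, 0.9),
    DataSourceProfile("social-media-scrape", 40.0, 60.0, False, 0.4, 0.5, 0.3, 0.6),
]
for s in sources:
    print(s.name, "admitted" if s.admit() else "needs review")
```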
Table 4 Examples of smart city success stories
Image from Joel Myers (2021)
Data Sources In cities, data is gathered from a variety of sources:
– static data from historical urban resources and digital documentation, such as geospatial data; demographics through census taking; and municipal taxation
– real-time data, through networked IoT sensors positioned around a city to track mobility (such as on street lighting, traffic lights, and parking spots) and smart meters for utilities such as water, electricity, and gas
– online data, from social media platforms and local eGovernment services, such as parking applications, waste and recycling, and citizen feedback forms
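The three source families can be pictured as separate feeds normalized into one stream before reaching the twin. The sketch below is hypothetical, with invented feed and field names, and simply illustrates how static, real-time, and online data might be given a common record shape.

```python
from datetime import datetime, timezone
from typing import Dict, Iterable, Iterator

Record = Dict[str, object]

def from_static_registry(rows: Iterable[Dict]) -> Iterator[Record]:
    """Static data, e.g. geospatial extracts, census figures, taxation records."""
    for row in rows:
        yield {"source": "static", "time": None, **row}

def from_iot_gateway(readings: Iterable[Dict]) -> Iterator[Record]:
    """Real-time data, e.g. traffic sensors or smart utility meters."""
    for r in readings:
        yield {"source": "iot", "time": datetime.now(timezone.utc).isoformat(), **r}

def from_online_services(events: Iterable[Dict]) -> Iterator[Record]:
    """Online data, e.g. parking apps or citizen feedback forms."""
    for e in events:
        yield {"source": "online", "time": e.get("submitted_at"), **e}

def merge(*streams: Iterator[Record]) -> Iterator[Record]:
    """Normalize all feeds into one stream of records with a common shape."""
    for stream in streams:
        yield from stream

# Hypothetical usage with tiny in-memory samples
static = from_static_registry([{"district": "D4", "population": 58_200}])
iot = from_iot_gateway([{"sensor": "pm25-north", "value": 17.3}])
online = from_online_services([{"service": "parking", "spot": "A12", "submitted_at": None}])
for record in merge(static, iot, online):
    print(record)
```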
Fig. 10 The 7 Vs of big data for digital twins of cities. (Image from Adam Drobot, 2021)
4.2 Urban Data: Shared Open Data Is Fundamental
Urban data needs to be shared. The same data collected for one purpose ends up being important for a myriad of uses, across multiple disciplines that affect a city, and for locations and organizations outside the city. Governments are already embracing open data policies and building data stores and library platforms that offer a transparent approach to data sharing. Let's take a hypothetical but very real example of data available on a specific bridge where commuters and logistics cross into a city throughout the day and night. On a first level, the data available is crucial to understanding its physical state and anticipating the maintenance required to ensure the bridge's purpose, access to the city, remains open. Issues with this bridge have important effects on local business, food supply, and community movement. On other levels, this same data can be used to understand traffic and mobility flows and levels at different time periods; the effects on the city's environment, pollution, and noise generation; and logistics: multiple disciplines, sectors, and organizations. Open data is not always possible, as data is also owned and used for commercial purposes or to achieve competitive advantage by industry and businesses. However, it is important for municipalities to lead open data strategies and solutions for their citizens, as well as to share data across nations and regions with other cities, to benefit from results globally. More and more cities around the world are recognising the value of Open Data and building open data platforms to centralise their own urban data, gathered from
Fig. 11 Analytical report 6: open data in cities 2. (Image from European Data Portal, July 2020)
the public realm. These Open Data portals are increasingly backed by solid Open Data policies. The European Union is one of the global regional leaders in guiding, promoting, and funding its cities to work with Open Data and with portals that publish a lot of data on topics such as urban planning and tourism, and increasingly real-time data in the transport and mobility area, such as datasets on available parking spots. Moreover, cities also benefit from the use of Open Data to tackle typical urban challenges such as congestion and pollution, and to improve the quality of urban public services and the interactivity between the local government and citizens. Currently there are strong Open Data initiatives in Amsterdam, Barcelona, Berlin, Copenhagen, Dublin, Florence, Gdansk, Ghent, Helsinki, Lisbon, Paris, Stockholm, Thessaloniki, Vienna, and Vilnius. All these cities have Open Data strategies and portals in place, which are not stand-alone initiatives but are embedded in broader digital or Smart City strategies (Fig. 11) [10]. With regard to Open Data strategies, the majority of EU cities kick-started their Open Data journey top-down, initiated and guided by the political leadership of the city. Most of the portals are not only focused on the core task of publishing data, but also incorporate community-led data initiatives aimed at engaging with users, such as news items, event sections and feedback mechanisms. Initiatives to reach out to citizens are often centred on the practical application of Open Data (Fig. 12). There is, however, still a critical and growing need to resolve the limits of having mainly silo-based urban data; to integrate existing solutions for security and privacy into all personal data; and to ensure the trustworthiness of their sources, through a unified vision that data fusion can help provide.
Fig. 12 Analytical report 6: open data in cities 2. (Image from European Data Portal, July 2020)
Fig. 13 Hoozie smart city currency platform. (Image from DOMILA LTD, Ireland, 2021)
4.3 Urban Data: Use a Carrot, Not the Stick!
Hoozie™: Real-Time Data via a Smart City Currency Two of the authors of this chapter, Joel Myers and Victor Larios, have first-hand experience in the development of a pioneering smart city currency platform called Hoozie™, a local digital currency that brings a completely new approach to gathering real-time data on how cities and their social, economic and mobility systems function (Fig. 13). On 21 October 2021, the Metropolitan City of Guadalajara, Jalisco, led by the University of Guadalajara (Jalisco) and HP Guadalajara Inc., and part-funded by the State of Jalisco Ministry of ICT, launched the Hoozie™ digital currency platform to its population of 5.2 million locals. Fixed to the Mexican peso, Hoozie™ is the first
local digital currency to be used city-wide as an incentive for social-economic recovery, support, and growth. Based on a new approach to building people-centric smarter cities, the Internet of People (IoP), Hoozie™ rewards local spending by offering Hoozie discounts and cashback from local businesses, such as restaurants and fast food, hotels, cafes, beauty salons, hairdressers, …. Locals earn Hoozies for taking public transport, like buses, or using cycling lanes to and from work, thereby reducing the carbon footprint and traffic in the city. It also promotes a healthy lifestyle: when locals cycle or run within the city limits, they earn Hoozies. Through HP GDL Inc.'s participation, its staff earn Hoozies, which they can spend or donate to local NGOs for community services, recycling, and equality in talent development. Hoozie™ uses a carrot-not-the-stick approach to incentivise citizen and community engagement. The platform gathers anonymous encrypted data in real time, stored in a blockchain, which is already providing invaluable insight into the city's demographics and their relationship to local business transactions, mobility, education, and community impact.
5 Urban Digital Twins (UDTs)
Now, before we delve into some REAL practice cases, with examples of what has already been brewing in the realm of digital twins for cities around the world, we first need to focus on Urban Digital Twins (UDTs). To date, UDTs are the highest level of digital twins deployed outside of the lab (Fig. 14).
Fig. 14 ARUP neuron: digital twins of water cube pilot. (Image from ARUP, 2019)
UDTs are a unique interplay between static and dynamic models, that is, three-dimensional models of a city's physical assets combined with dynamic predictive models that source real-time data from sources like IoT sensors. Digital twins proliferated from their industrial origins to cities via UDTs, offering an integrated approach for the design, management, and operation of urban assets. Starting from single-device scenarios in areas such as manufacturing, UDTs have uncovered how digital twins can bring enormous value in sophisticated ecosystems with complex interactions and data sources. From public community engagement, traffic flow planning, and building energy management to flood risk modelling, climate adaptation, resilience planning and disaster recovery, UDTs are proving to be incredibly valuable tools in urban planning, scenario planning and governance.
5.1 Urban Digital Twins: A Growing Global Market
According to research carried out by the firm Prescient & Strategic Intelligence in their "Digital Twin Market Research Report" (September 2021), the global digital twin market generated revenue of USD 3.2 billion in 2020 and is forecast to grow to USD 73 billion by 2030. According to the report, "growing adoption of the internet of things (IoT), artificial intelligence (AI), 5G, and machine learning (ML) technologies, increasing penetration of Industry 4.0 standards, and rising demand for cloud services are supporting the growth of the market." (Fig. 15). A few words of caution, though, on assessing the market for digital twins: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run". Roy Amara, past president of The Institute for the Future.
This rule of thumb, coined by Roy Amara, can be seen very clearly in examples such as CISCO IBSG's 2011 report predicting that by 2020 the world would have 50 billion IoT devices installed and connected. In 2012, IBM forecast 1 trillion connected devices by 2015. By 2016 we were nowhere near 1 trillion IoT devices, or even 50 billion for that matter. Gartner estimated the figure at around 6.4 billion, not including smartphones, tablets, and computers (Gartner Symposium/ITxpo 2015, November 8–12 in Barcelona, Spain) [11]. Since then, Dave Evans, who is now CTO of Stringify, has said he expected to see 30 billion connected devices by 2020 [12].
Fig. 15 Global digital twin market. (Image from Digital Twin Market Research Report, Prescient & Strategic Intelligence, 2021)
It has been a roller-coaster ride, with these figures being thrown around and published as invaluable insights. However, to promote adoption and sales it is not uncommon to see published reports by corporations and analysts that project the incredible. The reality is not always so rosy; the learning curve is usually much harder than anyone ever appreciated, usually because of the complexity of the problems we are dealing with. However, not achieving these predictions does not mean failure. Most of the time the targets were simply set too high, and the real results still indicate strong and important success. In the case of what is already going on today with digital twins in cities, what is certain is that we are well underway, and the rate of adoption by municipalities and the budgets committed at state and local levels are indicative of a growing global market and future.
5.2 Urban Digital Twins: Huge Cost Savings Cost savings can be obtained in key areas, such as energy and utilities, transportation, safety and security, and infrastructure (roads/buildings).
Urban digital twins also offer many other advantages for cities, in terms of supporting and improving sustainability, circularity, decarbonisation, and the overall quality of urban living. In urban planning, a report by global tech market advisory firm ABI Research indicates that cost benefits alone from digital twins could be worth US$280 billion by 2030 [13].
5.3 Urban Digital Twins: Key Advantages
Digital Twins of Cities are already proving to be an incredible asset for delivering cost savings, especially in urban planning. But the best is still to come. To deliver better citizen value to cities, by providing a strong and secure economic environment, a good environmental footprint, and communities that are well-connected, well-serviced, and feel safe and secure, municipalities need to be able to take the best decisions. This is where digital twins will continue to deliver more and more critical advantages, working closely alongside decision-makers in local government across multiple disciplines (Fig. 16).
Digital twins will become the ultimate tool for city governments to design, plan and manage their connected infrastructure and assets in an efficient and cost-effective way. Dominique Bonte, Vice President of End Markets at ABI Research.
Digital Twins help commercial property and infrastructure owners reduce operating costs by a massive 35%, improve productivity by 20% and cut emissions by 50–100%. “Digital twin: the Age of Aquarius in construction and real estate”, Ernst and Young, May 2021.
Fig. 16 The potential benefits of UDTs. (Image from Joel Myers, 2021)
Urban Digital Twins "make it easy to understand the complex interrelation between traffic, air quality, noise and other urban factors. Powerful analytics model the expected impacts of potential change to help you make better evidence-based operational decisions and longer term policy choices." DUET, The European Union.
City digital twins can improve planning activities such as public engagement, scenario planning, and zoning and development. They have the potential to assist planners in reaching local climate resilience, economic development, and housing goals. “Smart City Digital Twins Are a New Tool for Scenario Planning”, American Planning Association, 2021.
Real-time 3D models of a cities’ built environment allow scenario analysis through the simulation of the potential impact of natural disasters like flooding, adopt generative design principles for new city developments optimizing energy savings and solar capacity, and saving costs by operating cities more efficiently and effectively. Dominique Bonte, Vice President of End Markets at ABI Research, 2021.
5.5 Urban Digital Twins: Areas of Application
Since the first UDT deployments by companies like Dassault Systèmes in cities such as Singapore in 2015, the areas of application have been growing (Fig. 17):
Fig. 17 Areas of application for urban digital twins. (Image from Joel Myers, 2021 [14])
5.6 Urban Digital Twins: Built Layer-by-Layer
An Urban Digital Twin is built up from a number of layers, one on top of the next: terrain, buildings, infrastructure, mobility, and IoT devices (Fig. 18).
Fig. 18 Layers required to develop a digital twin smart city. (Image from Trinity College, Dublin, Ireland, 2021)
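One way to picture the layer-by-layer build-up shown in Fig. 18 is as an ordered stack of named layers, each contributing its own objects to the composite model, with later layers (such as live IoT readings) enriching earlier ones. The sketch below is purely illustrative and is not tied to any particular GIS or twin platform; all names are invented.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Layer:
    """One layer of the twin: terrain, buildings, infrastructure, mobility, or IoT."""
    name: str
    objects: Dict[str, dict] = field(default_factory=dict)   # object id -> attributes

@dataclass
class UrbanDigitalTwin:
    """Composite model assembled from ordered layers (terrain first, IoT last)."""
    layers: List[Layer] = field(default_factory=list)

    def add_layer(self, layer: Layer) -> None:
        self.layers.append(layer)

    def lookup(self, object_id: str) -> dict:
        # Later layers (e.g. live IoT readings) enrich or override earlier ones
        merged: dict = {}
        for layer in self.layers:
            merged.update(layer.objects.get(object_id, {}))
        return merged

udt = UrbanDigitalTwin()
udt.add_layer(Layer("terrain", {"block-7": {"elevation_m": 21.0}}))
udt.add_layer(Layer("buildings", {"block-7": {"height_m": 34.0, "use": "residential"}}))
udt.add_layer(Layer("iot", {"block-7": {"energy_kwh_today": 412.5}}))
print(udt.lookup("block-7"))
```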
5.7 Summary: Maturity Models of Digital Twins in Cities
The following summarises just a selection of some of the most interesting international levels of practice for Urban Digital Twins, with common use cases and project solutions from a growing global wealth of resources for those seeking to better understand real development and adoption. For further details on each of the national and city levels of practice mentioned below, please refer to the final section of this chapter, before "References", called "The State of Digital Twins in Cities" (Figs. 19, 20, 21, 22, 23, 24, 25, 26, 27 and 28).
UDT Levels of Practice: Singapore.
Nation/City: Singapore
Project Name: Virtual Singapore [15]
Initiative: Smart Nation
Organizations: Prime Minister's Office, Singapore; The Government Technology Agency of Singapore (GovTech); National Research Foundation (NRF) of Singapore; and The Singapore Land Authority (SLA)
Start/end: 2015-ongoing
Funding: US$73 million
Use cases: Urban planning; Mobility planning; Disaster management; Telecommunications coverage
UDT Levels of Practice: The European Union – Local Digital Twin Ecosystems.
Cities: Rotterdam; Helsinki
Initiative: Local Digital Twins (LDTs) [16]
Funding: European Union Horizon 2020
Organizations: European Union DG CONNECT, Unit C5, Technologies for Smart Communities; City of Rotterdam (The Netherlands); and the City of Helsinki (Finland)
Start/end: 2021-ongoing
Use cases: Predicting extreme weather events; Urban planning; Crisis management
Fig. 19 Virtual Singapore. (Image from Dassault Systèmes, 2015)
Fig. 20 Virtual Singapore. (Image from Dassault Systèmes, 2015)
UDT Levels of Practice: The European Union – DestinE.
Initiative: EU Destination Earth (DestinE) Initiative [17]
Organizations: European Commission, in close collaboration with the Member States, scientific communities and technological expertise.
Fig. 21 Kalasatama wind analysis in Helsinki. (Image from Helsinki 3D+, 2019)
Fig. 22 Digital transformation of the Port of Rotterdam. (Image from the Port of Rotterdam Authority, 2019)
The initiative will be jointly implemented by three entrusted entities: the European Space Agency (ESA), the European Centre for Medium-Range Weather Forecasts (ECMWF) and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT).
Start/end: 2021-ongoing
Use cases: Environment and sustainability: monitoring and prediction
Fig. 23 Destination Earth: a digital replica of our planet. (Image from ECMWF, 2021)
Fig. 24 The city of Boston’s digital twin – proposed buildings and impact. (Image from David LaShell, Esri, June 2021)
UDT Levels of Practice: United States of America.
Cities: Arlington, Texas; Tulsa, Oklahoma; Boulder, Colorado; Chattanooga, Tennessee; Boston, MA; Las Vegas, NV; New York, NY
Organizations: Cityzenith [18]; the Energy Department's Oak Ridge National Laboratory, the National Renewable Energy Laboratory of the U.S. Department of Energy, and the City of Chattanooga [19]; Boston Planning & Development Agency (BPDA) [20]
Fig. 25 Digital Twin of Boulder, Colorado. (Image from ArcGIS Urban, Esri, 2021)
Fig. 26 Data for the public good. (Image from the National Digital Twin programme (NDTp), 2018)
Use cases: Car ridesharing; Funding underserved communities; Telecommunications: fiber optic procurement; Road optimization and safety; Noise pollution; Water management; Carbon emissions from buildings; Urban planning; Parking; Waste
Fig. 27 Virtual Shanghai. (Image from 51 World, 2020)
Fig. 28 Virtual model of Jeonju. (Image from All4Land, 2020)
UDT Levels of Practice: United Kingdom.
Project Name: National Digital Twin programme (NDTp) [21]
Initiative: Data for the Public Good
Organizations: HM Treasury, National Infrastructure Commission; Department for Business, Energy and Industrial Strategy; University of Cambridge's Centre for Digital Built Britain (CDBB)
Start/end: 2018/2022
Use cases: Smart infrastructure for safety and reliability across the rail network; Smart Hospital of the Future: digital technologies, service innovation, and hospital design; Smart Mobility Living Lab, London; Digital Twins: driving business model innovation
UDT Levels of Practice: China.
Project Names: Virtual Shanghai [22]; Beihu Sewage Treatment Plant [23]
Initiative: Data for the Public Good
Organizations: Municipality of Shanghai and 51 World (Shanghai); and the China Railway Shanghai Engineering Bureau (CRSEB)
Use cases: City modeling; Sewage treatment plant planning
UDT Levels of Practice: South Korea.
Cities: Jeonju; Seoul
Project Names: Mobility as a Service – Jeonju [24]; Virtual Seoul 2.0 [25]
Start/end: 2020-ongoing
Organizations: Municipality of Jeonju; Leica Geosystems; All4Land; and Seoul Convention Bureau
Use cases: Mobility as a Service; Destination marketing
6 Smart City Digital Twins
What does the future promise? Urban Digital Twins have been a huge first step in what the future holds. The next steps, Smart City Digital Twins, are already on their way. To achieve far greater sustainability and resilience, we need a full-scale digital twin of cities that works alongside city leaders, department staff and stakeholders to provide decision-making tools. Unlike Urban Digital Twins, this next generation, the Smart City Digital Twin, must work with the incredible complexity of BOTH a city's physical and social assets: its multi-layered multidisciplinarity and highly intricate correlations, the cause-and-effect mechanisms that are our cities.
Smart City Digital Twins must help cities face the growing challenges (see the "7 City Challenges" section) for their expanding melting-pot populations with access to greatly limited resources. To achieve this, we must first accept that cities do not operate as isolated vacuums. In order to develop in-depth urban understanding and provide decision-making tools in the form of SCDTs, we must be inclusive. The process and solution for SCDTs must include all city actors, from government to citizens, from industry to academia and research. No sector or discipline involved in the urban ecosystem can be left out.
6.1 Smart City Digital Twins: We Need a Common Language
This means developing a new common language that communicates across urban stakeholders: a NEW Smart City Ontology. Whether we need to understand recycling of waste, transport, education, or the environment (to name a few), we need to tackle the technical, social, political, and financial languages of each aspect as described by its engineering, urban planning, technology, cultural heritage, environmental impact, governance, and so on. We also need to ensure that the ontology communicates across international languages, cultures and traditions, political systems, and economies. Critically, this smart city ontology must also embrace the social perspectives of our cities and our lives if it is to represent the true nature of a city's main asset: its people.
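As a minimal, hedged sketch of what a fragment of such a shared ontology could look like in code, the example below expresses concepts, their parent classes, and cross-domain relations as plain triples, and walks the "affects" links to trace knock-on effects. The concept names and relations are invented for illustration and are not the GOUI ontology itself.

```python
from typing import List, Set, Tuple

Triple = Tuple[str, str, str]   # (subject, predicate, object)

ontology: Set[Triple] = {
    # Concept hierarchy
    ("Bridge", "is_a", "TransportInfrastructure"),
    ("TransportInfrastructure", "is_a", "UrbanAsset"),
    ("AirQualityZone", "is_a", "EnvironmentalIndicator"),
    # Cross-domain correlations a Smart City Digital Twin would exploit
    ("Bridge", "affects", "TrafficFlow"),
    ("TrafficFlow", "affects", "AirQualityZone"),
    ("AirQualityZone", "affects", "PublicHealth"),
}

def downstream(concept: str) -> List[str]:
    """Follow 'affects' links to list concepts influenced by a change to `concept`."""
    found: List[str] = []
    frontier = [concept]
    while frontier:
        current = frontier.pop()
        for subject, predicate, obj in ontology:
            if subject == current and predicate == "affects" and obj not in found:
                found.append(obj)
                frontier.append(obj)
    return found

# Example: work on the bridge ripples through traffic, air quality, and public health
print(downstream("Bridge"))   # ['TrafficFlow', 'AirQualityZone', 'PublicHealth']
```

In a real SCDT, relations of this kind would be curated by domain experts and mined with AI/ML, as described in the next section.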
6.2 Smart City Digital Twins: Building the Future
The Global Observatory for Urban Intelligence (GOUI) [26] Two of the largest technology associations in the world, IEEE and ITU, in collaboration with international organisations like United Cities and Local Governments (UCLG), have taken up the challenge to provide the tools required to build Smart City Digital Twins and help achieve sustainable and resilient cities through technology and data. Pursuant to their respective missions and objectives to support the urban dimensions of the Sustainable Development Goals, in mid-2021 the IEEE and ITU launched their 3-year collaboration to develop a "Global Observatory for Urban Intelligence" (GOUI). It is based on crowd-sourcing a wealth of city stakeholders, from government, industry and academic experts to students, on a global scale, through the international membership and regional outreach that both organizations provide. The objectives of the "Global Observatory for Urban Intelligence" are to provide an ongoing understanding of cities and of how digital transformation can best serve
cities in developing the social, economic, and environmental dimensions of urban growth, for sustainability and resilience, through:
• Smart Cities Ontology: develop a common language to communicate smart cities across cities, nations, and global regions, based on a multi-disciplinary approach
• Correlations: build relationships between the ontology's objects to best represent the complex behavior of a city's ecosystem, based on a multi-disciplinary approach and using AI/ML to automate much of this process
• Create an international hub of city observatories: run as a collaboration between local authorities and academia that will gather and upload data on an ongoing basis to the GOUI's cloud database
• Develop SCDT open-source tools: for querying, modeling, and using AI/ML for understanding, sharing, and comparing smart cities, for policymaking, strategic decision-making, piloting, and monitoring, as well as prediction and risk analysis
• Create and share a "How to build SCDTs" playbook: with best practices on how to develop Smart City Digital Twins using GOUI tools
• Share and network SCDTs: provide the international community with a network platform to openly share models and data and collaborate
Target Audience The "Global Observatory for Urban Intelligence" will offer a unique set of tools for carrying out in-depth research, studies, and understanding of the effects of policies, strategies, and solutions on cities. Stakeholders, from anywhere on the globe, will be able to build simulations and carry out risk analysis, monitoring and analytics, as well as prediction: an incredibly valuable and powerful decision support system, accessible globally, to stakeholders engaged in the digital transformation of a city, with the goal of achieving sustainability and resilience. The key targeted audience from government, industry, academia, communities, standards associations, and NGOs are those involved in:
• policy making, governance and finance/budgeting
• hard and soft infrastructure (e.g., transport, energy, waste, water, ICT)
• urban planning and the environment
• health, education, and security
• business leadership and economics
• community engagement, culture, and equality
Collaboration GOUI partners with private-sector experts, representatives from leading research institutes, academia, smart city networks, civil society, NGOs, and other relevant stakeholders. UN agencies, regional organizations and high-level participants are also welcomed and expected to contribute to the work of the collaborative framework.
Structure The "Global Observatory for Urban Intelligence" is being developed on an open-source cloud platform using crowd-sourcing collaborative tools. Its components will be developed in multiple phases, from its Smart City Ontology, to
Correlation, then Data Gathering, Modelling and AI/ML. For each phase, open-source tools will be used where possible to automate processes, and step-by-step results will be tried and tested across global focus groups. The "Global Observatory for Urban Intelligence" will be the result of crowdsourcing:
• IEEE's more than 450,000 technology expert members from over 160 countries and its access to university professors and students from its 2500 student branches worldwide.
• ITU will bring its excellence in working on global, regional, and local policies in digital transformation, with its unique access to government leadership and authorities.
Activities To achieve its goals, the "Global Observatory for Urban Intelligence" is carrying out a series of activities/projects, including:
• The Smart City Ontology: this is the first and most critical activity in developing a successful "Global Observatory for Urban Intelligence", and it is already underway. The common language that will sit at the core of its architecture. Each key index that categorizes a single aspect of a city's infrastructure, social well-being, or governance, e.g., waste, water, energy, education, environment, finance, social behavior, and so on, is broken down into its smallest building blocks.
• Correlations: identification of correlations between all objects in the Smart City Ontology, based on AI/ML and processes similar to those used in web semantics. This automated process will be carried out in parallel with analysis of objects by our global community of experts, who will use collaborative tools to link objects based on experience.
• Gathering Data: create a network of City Observatory "Ambassadors" from local authorities and academia, including those involved in the initiative to date. They will be "city leaders" for the "Global Observatory for Urban Intelligence", coordinating the volunteers onsite for activities including gathering ongoing city data; modelling and AI/ML of their city; analytics; and publishing results that can become best practices. Again, most of the data will be gathered using online scrapers that access public documents available on cities and smart cities.
How Cities and Communities Will Benefit The "Global Observatory for Urban Intelligence" aims to provide the toolset and international network for the world's first Smart City Digital Twin platform – a Decision Support System for cities. A powerful platform permitting city stakeholders from government, industry, academia, community organizations and NGOs to access an in-depth understanding of the complex behaviors that make up a city's ecosystem:
query and analyze existing urban data, with cause-effect from complexity; monitor and assess progress on actions, making improvements to realign; simulate and evaluate new policies and concepts; test out strategies and solutions; and predict pain points;
• share knowledge across multiple disciplines, connect, compare results, and collaborate with their counterparts or with stakeholders at local to international levels; and
• encourage a new sense of global partnership in the discourse of city and digital transformation at the local level.
Expected Outcomes of GOUI
• A common language for cities: the Smart City Ontology
• Easier, faster, and smoother implementation of new policies, strategies, and solutions for a city's transformation, digital or other
• Reduction in maintenance costs of digital solutions integrated into cities
• Reduction in required resources and greater efficiencies when executing a city's normal and extraordinary activities, by municipalities and outsourced companies
• Greater cooperation across the multiple disciplines and experts involved in city functions
• Publishing of best practices on how best to achieve sustainable and resilient cities
• Strong networking across global regions between city and community stakeholders, academia, and technologists
• Greater understanding of city and community dynamics, population movements, and microeconomics
• Improved ability to avoid digital transformation pain points
• Tools for evaluating the effects of city actions on security, the environment, community, and equality
• Stronger collaboration and adoption of digital transformation initiatives, e.g., best practices and standards, by cities
6.3 Caution – Beware!
Even at the lowest level of practice, digital twins bring enormous value to the understanding, monitoring, growth, resilience and sustainability of our towns and cities. However, the digital twin journey takes time and patience, with the necessary investment in resources. It is not always an easy path to tread, so here are trip wires to avoid. Technology, data, smart cities, and Digital Twins are not a panacea. Not only will the process of achieving full-scale Smart City Digital Twins take years, but the process of digital transformation is not always an easy path to tread. Here are some of the most important "Beware" signs that cities and their citizens have learnt to look out for when building digital twins of cities:
Information security and privacy: the entire data lifecycle must be secured against malicious actors and violation of privacy rights.
Data governance & regulation: in ensuring a data-rich environment with high security and privacy, policy and standards need to be established to spell out how
data is collected, exchanged, and distributed, and how information is disseminated and used. Creating trusted and regulated processes is key to developing a robust data environment which drives the use of data and digital twins.
Lack of an accountability framework: issues such as open data licensing, evolving regulations pertaining to privacy and cybersecurity, conditions of use and liability of parties, as well as IP rights, will have implications for the design and execution of digital twin applications.
Cyber Attacks on Cities: as cities collect more and more data, they become prime targets for hackers who gain access to key pieces of information, including names, addresses, tax information, financial transactions, health histories, and more. This information can then be used to exploit the citizens concerned. Hacks on cities have already happened, putting citizen data at risk, and more will happen in the future. One example is Colorado in the US, which was hit by back-to-back attacks in February 2018: computers owned by the Colorado Department of Transportation were locked by ransomware, and the attack added up to about $1.5 million in recovery costs.
Including the Unconnected As the data gathered for Digital Twins of Cities comes primarily from IoT sensors, smart city applications, and online devices/services, anything or anyone not connected to these technologies will likely be missing from the dataset. For example, if data relies on mobile location services from smartphones, then it will probably exclude young children and the elderly, leaving the project with an incomplete data picture. These gaps in the data can damage the project's chances of success, or worse still, can be harmful to those left out. Nayeli Rodriguez, Technologist for the Public Realm at the Mayor's Office of New Urban Mechanics in Boston, MA, USA, stated that "You have to overcome data gaps by really listening to people and questioning whether your source of data is representative of their lives and experience".
Issues of Trust: The Case of Toronto's Sidewalk Labs Privacy, data use, and other citizen concerns can block digital transformation, especially if governments fail to address these concerns. Toronto's Sidewalk Labs project is a prime example of how ambitious plans can easily be derailed. Sidewalk Toronto was an innovative experiment conducted by Sidewalk Labs (part of Alphabet, the parent company of Google) and Waterfront Toronto to create an urban smart district that was climate positive, affordable and inclusive for residents, created jobs, and operated as an innovation testbed for smart city experiments. In May 2020 the project was shut down, with leaders claiming that economic uncertainty brought on by the Covid-19 pandemic had forced the decision. While the economic downturn was a factor, other sources claim that it was stiff resistance from citizen groups that really put an end to the plan. From the beginning, the project was under scrutiny from citizens who were worried about how Alphabet would collect, protect, and use their data. And of course, residents also wondered who would own their data.
Privacy groups were quick to draw parallels with living in a surveillance state, with one coalition, named Block Sidewalk, labelling the project a corporate takeover of the city. Block Sidewalk's main concern was that the implementation of these smart city ideas lacked transparency and that citizens weren't being properly informed of how their data was being handled. This smart city experiment eroded citizens' trust in smart government. The lesson that can be learned from Sidewalk Toronto's failure is that transparency is key, and no project can succeed without the trust of local residents. A survey conducted by Vrge Strategies revealed that 66% of Americans would not want to live in a smart city due to deep concern about cyberattacks and fears about mass data collection and surveillance. These concerns are understandable after the data losses and cyber attacks, such as those at Facebook, Capital One, and Equifax, which have damaged the public's trust in the past. To bridge the trust gap, governments must work hard to promote citizen awareness and encourage citizen participation. It is imperative that any smart city innovation that depends on data collection, AI, and IoT is properly explained to citizens in a clear and transparent manner, and that the concerns above are addressed with strict rules and security measures in place.
6.4 Smart City Digital Twins (SCDTs): Achieving Urban Sustainability
The United Nations Environment Programme reports that there are "ten years left to achieve the Sustainable Development Goals (SDGs)" [27]. The 17 SDGs, established in 2015, address a global crisis of "climate change, biodiversity loss, and pollution, together with the economic and the social fragility they cause" [28]. There is a critical need to move towards "a more sustainable, circular and resilient direction for the survival of people and planet" (Fig. 29). Technology and data, Smart Cities and Digital Twins of Cities all play a key role in determining how these SDGs are met by 2030.
7 Thriving Future for Cities
If we are to accelerate towards achieving sustainability and resilience on a global scale, and in time, we need to focus on "the local" – our cities. The COVID-19 pandemic has shown two sides to digital transformation. On the one hand, we have learnt how important technology is to keep us socially connected, working, and educated. Yet, on the other hand, we have woken up to the
Fig. 29 Sustainable Development Goals (SDGs). (Image from UN, 2015)
importance of cities in our daily lives. They are not just our houses or offices. They are our families and friends, our homes, our livelihoods and colleagues, the places where we grow up and live out our lives. The pandemic has also made clear that the digital divide is growing. Over three billion people were not able to work or study remotely, or keep in touch with family and friends, during lockdown. For almost 40% of the world's population, digital transformation has yet to begin. We must work to correct this huge imbalance. Today, access to the Internet and the digital world of information and services is a basic human right. Urban Digital Twins have already proved their incredible worth, so we must push on ahead and support the accelerated development of smart city strategies and projects, with a core focus on improving the social well-being of those that live in, work in, and visit cities and towns. We need to push forward, investing greater time and resources in people and technology; create the skills required; and work together with citizens and local communities to continue the critical move to digital transformation with a people-centric approach. We need to move speedily along the route to developing and deploying Smart City Digital Twins that will become the core tools to enable sustainable and resilient cities to survive and thrive: core tools for decision-making by city leaders and stakeholders to safeguard not just the physical assets that make up the city, but most especially the social makeup that makes us human and civilised. Digital Twins are complex, but the results for a city can be far reaching and bring incredible advantages. They can make the difference between a town or city growing positively or simply dying out by not being able to face the challenges outlined
in this chapter, including competition arising from changing socio-economic trends and from surrounding cities. For city leaders and stakeholders, using Digital Twins of Cities means accepting a change in the way things are done. It is part of the Digital Transformation process for bringing urban change. Disruption is normality in the urban ecosystem. It is the bread and butter of municipal leaders and decision-makers, but it is also a daily nightmare for them. Natural phenomena, deterioration of infrastructure, traffic issues, access to 24/7 water and so on show how intrinsically fragile our towns and cities are. They lack sustainability. They need to be run efficiently, and it is simply no longer possible to do so through human skills and resources alone. Technology, data, and citizen engagement are the key to the future, starting yesterday. Digital Twins are already helping to provide the tools needed, in the form of Urban Digital Twins of physical assets. Very soon we will be able to integrate complex processes and social systems into these Digital Twins of Cities. These Smart City Digital Twins will use agreed common ontologies to deliver Decision Support Systems to city leaders and stakeholders and bring far greater sustainability and resilience to the places we live, work, and visit across the world. Those cities that take advantage of Digital Twins – acknowledging their complexity; allocating financial, skills and technology resources; and most importantly engaging communities in the entire process – will thrive.
Appendix: Maturity Models of Digital Twins in Cities
UDT Levels of Practice: Singapore [15].
In 2021, Singapore was named the smartest city in the world for the third year running, according to that year's Smart City Index of 118 ranked cities, published by the Swiss business school Institute for Management Development (IMD). Virtual Singapore is a digital platform that will enable the public, businesses, government, and research agencies to derive insights, develop solutions and run simulations using a large-scale city model of Singapore, as part of the Smart Nation initiative. The digital twin draws on IoT sensors, big data and cloud computing, combined with 3D models, geospatial datasets and BIM. With a rich data environment, Virtual Singapore provides a collaborative platform to help make long-term decisions on areas such as infrastructure, resource management, and urban planning. Its capabilities include:
–– Virtual Experimentation
Virtual Singapore can be used for virtual test-bedding or experimentation. For example, Virtual Singapore can be used to examine the coverage areas of 3G/4G
networks, provide realistic visualisation of poor coverage areas, and highlight areas that can be improved on in the 3D city model.
–– Virtual Test-Bedding
Virtual Singapore can be used as a test-bedding platform to validate the provision of services. For example, the 3D model of the new Sports Hub, with semantic information, within Virtual Singapore could be used to model and simulate crowd dispersion and establish evacuation procedures during an emergency.
–– Planning and Decision-Making
With a rich data environment, Virtual Singapore is a holistic and integrated platform on which to develop analytical applications (i.e., apps). For instance, an app could be developed to analyse transport flows and pedestrian movement patterns. Such applications would be useful in non-contiguous urban networks such as the parks and park connectors in Punggol.
–– Research and Development
The rich data environment of Virtual Singapore, when made available to the research community with the necessary access rights, can allow researchers to innovate and develop new technologies or capabilities. The 3D city model with semantic information provides ample opportunities for researchers to develop advanced 3D tools.
Virtual Singapore will be developed based on geometric and image data collected from various public agencies and will integrate different data sources to describe the city with the necessary dynamic data ontology. The interplay of map and terrain data, real-time traffic, and demographic and climate information shows how a single change could affect the lives of millions of people and the systems they depend upon. This project, at a cost of $73 million, is championed by the National Research Foundation (NRF), Prime Minister's Office, Singapore, the Singapore Land Authority (SLA) and the Government Technology Agency of Singapore (GovTech). NRF is leading the project development, whilst SLA will support it with its 3D topographical mapping data and become the operator and owner when Virtual Singapore is completed. GovTech will provide expertise in information and communications technology and its management as required in the project. Other public agencies will participate in Virtual Singapore in various phases. Virtual Singapore was co-developed with the French firm Dassault Systèmes, leveraging its existing software platform.
Target Users: Virtual Singapore includes semantic 3D modelling, which comprises detailed information such as the texture and material representation of geometrical objects, and terrain attributes, for example, water bodies, vegetation, and transportation infrastructure. Models of buildings encode the geometry as well as the components of a facility, such as walls, floors, and ceilings, down to fine details, such as the composition of granite, sand and stone in a building material (Fig. 30). With proper security and privacy safeguards, Virtual Singapore will enable public agencies, academia and the research community, the private sector, and also the community to make use of the information and system capabilities for policy and
Fig. 30 Virtual Yuhua – Singapore. (Image from Dassault Systèmes, 2015)
business analysis, decision making, test-bedding of ideas, community collaboration and other activities that require information. Virtual Singapore addresses four major user categories, namely:
–– Government
Virtual Singapore is a critical enabler that will enhance various whole-of-government (WOG) initiatives (Smart Nation, Municipal Services, Nationwide Sensor Network, GeoSpace, OneMap, etc.).
–– Citizens and Residents of Singapore
Through Virtual Singapore, the provision of geo-visualization, analytical tools and 3D semantics-embedded information will provide people with a virtual yet realistic platform to connect, create awareness, and build services that enrich their community.
–– Businesses
Businesses can tap into the wealth of data and information within Virtual Singapore for business analytics, resource planning and management, and specialized services.
–– Research Community
The R&D capabilities of Virtual Singapore allow the creation of new innovations and technologies through public-private collaborations to create value for Singapore. Amongst other new research areas, semantic 3D modelling is an emerging field, where research and development is needed to produce sophisticated tools for multi-party collaboration, complex analysis, and test-bedding (Figs. 31 and 32).
Possible Uses of Virtual Singapore: By leveraging the big data environment and aggregating information from the public and private sectors, the potential uses of Virtual Singapore in tackling livability issues are limitless. Some of these uses and their applications are listed below:
–– Collaboration and Decision-Making
Virtual Singapore integrates various data sources including data from government agencies, 3D models, information from the Internet, and real-time dynamic
Fig. 31 Collaboration and decision-making – Virtual Singapore. (Image from National Research Foundation, Singapore, 2015)
Fig. 32 George Loh, director of programs at the National Research Foundation, showcases Virtual Singapore. (Photo from Reuters, John Geddie, September 21, 2018)
data from Internet of Things devices. The platform allows different agencies to share and review the plans and designs of the various projects in the same vicinity.
–– Communication and Visualisation
Virtual Singapore serves as a convenient platform for citizens to visualize upgrades to their estate and allows them to provide timely feedback to the relevant agencies. For instance, the Yuhua estate is a test-bedding site for the Housing & Development Board's Greenprint initiative, which features sustainable and green features such as solar panels, LED lights, a pneumatic waste conveyance system, enhanced pedestrian networks and extended cycling networks. With the completion of Virtual Yuhua, it could be used to showcase the possibilities and benefits of the HDB Greenprint initiative in other estates.
–– Improved Accessibility
Virtual Singapore includes terrain attributes, for example, water bodies, vegetation, and transportation infrastructure. This is different from conventional 2D maps, which are not capable of showing terrain, curbs, stairs or the steepness of a slope. As an accurate representation of the physical landscape, Virtual Singapore can be used to identify and show barrier-free routes for the disabled and elderly. They can easily find the most accessible and convenient route, and even sheltered pathways, to the bus stop or MRT station. The public can also use Virtual Singapore to visualise park connectors and plan their cycling routes.
–– Urban Planning
Virtual Singapore can provide insights into how ambient temperature and sunlight vary throughout the day. Urban planners can visualise the effects of constructing new buildings or installations, for example green roofs in the Yuhua estate, on the temperature and light intensity in the estate. Urban planners and engineers can also overlay heat and noise maps on Virtual Singapore for simulation and modelling. These can help planners to create a more comfortable and cooler living environment for residents. Virtual Singapore also supports a semi-automated planning process whereby planners can quickly filter buildings of interest based on preset parameters. For example, when identifying HDB blocks suitable for installing solar panels under the HDB Greenprint initiative, urban planners can use Virtual Singapore to quickly filter the suitable blocks according to specified criteria such as the number of storeys and roof type.
–– Analysis of Potential for Solar Energy Production
Data such as the height of buildings, the surface of the rooftops and the amount of sunlight are available in Virtual Singapore. This allows urban planners to analyse which buildings have a higher potential for solar energy production, and are hence more suitable for the installation of solar panels. Further analysis can allow planners to estimate how much solar energy can be generated on a typical day, as well as the resulting energy and cost savings. Virtual Yuhua has demonstrated that, by cross-referencing with the historical data collected from neighbouring buildings, this analysis can be validated and seasonally adjusted to give an even more accurate and granular projection.
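To make the idea concrete, the rooftop solar analysis described above essentially combines roof geometry from the city model with irradiance data and a few conversion factors. The short Python sketch below illustrates one plausible form of that calculation; the block identifiers, panel efficiency, utilisation factor and irradiance values are illustrative assumptions, not figures from Virtual Singapore.

```python
from dataclasses import dataclass

@dataclass
class Rooftop:
    building_id: str
    usable_area_m2: float                 # flat roof area available for panels
    mean_daily_irradiance_kwh_m2: float   # from the city model / historical data

def estimate_daily_yield(roof: Rooftop,
                         panel_efficiency: float = 0.20,
                         utilisation: float = 0.70,
                         performance_ratio: float = 0.80) -> float:
    """Rough daily energy yield in kWh for one rooftop.

    Usable area times utilisation gives the panel area; irradiance times
    efficiency times performance ratio converts sunlight into delivered electricity.
    """
    panel_area = roof.usable_area_m2 * utilisation
    return panel_area * roof.mean_daily_irradiance_kwh_m2 * panel_efficiency * performance_ratio

# Hypothetical HDB blocks; the values are made up for illustration only.
blocks = [
    Rooftop("BLK-101", usable_area_m2=950.0, mean_daily_irradiance_kwh_m2=4.5),
    Rooftop("BLK-102", usable_area_m2=620.0, mean_daily_irradiance_kwh_m2=4.2),
]

# Filter blocks worth considering and rank them by estimated yield.
candidates = sorted(
    (b for b in blocks if estimate_daily_yield(b) > 400.0),
    key=estimate_daily_yield,
    reverse=True,
)
for b in candidates:
    print(f"{b.building_id}: ~{estimate_daily_yield(b):.0f} kWh/day")
```

In a real deployment these inputs would of course come from the 3D city model and measured irradiance rather than hard-coded literals.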
Fig. 33 Urban digital twin. (Image from Living-In-EU)
UDT Levels of Practice: The European Union – Local Digital Twin Ecosystems [16].
Cities and communities have been advancing over the past years through the use of IoT, data analytics and various digital services. There is, however, still a lack of an integrated approach that exploits the strengths of AI, cloud computing, advanced data analytics and increased computational power in order to improve the overall performance of a city. To fill this gap, the concept of Local Digital Twins has recently emerged as part of an all-encompassing strategy for Smart and Sustainable Cities and Communities (Fig. 33). Local Digital Twins (LDTs) are a virtual representation of a city's or community's physical assets, processes and systems, connected to all the data related to them and to the surrounding environment. They use AI algorithms, data analytics and machine learning to create digital simulation models that can be updated and changed as their physical equivalents change. Real-time, near real-time and historical data can be used in various combinations to provide the necessary capabilities for data analytics (descriptive, prescriptive, predictive), simulations and what-if scenarios. As cities have different needs and challenges, LDTs may focus on a range of different topics and domains, from predicting extreme weather events to urban planning or crisis management. LDTs can visualise processes and dependencies and simulate possible outcomes and impacts, taking into account citizens' needs. Benefits span cities' operational efficiencies and cost savings, increased resilience and improved sustainability, economic development, participatory governance, and increased safety and security. This broad range of benefits applies to all three main phases of a city's lifecycle: initial planning and construction, current city monitoring and management, and future planning. Additionally, LDTs
provide a risk-free testing environment that increases the precision of long-term predictions while improving the monitoring and impact assessment of decisions that affect large parts of a city's ecosystem.
While several EU cities are starting to implement their LDTs, the European Commission is introducing a number of policy measures in order to: (i) ensure that a large number of EU cities and communities can benefit from this powerful technology; (ii) enable a minimal level of interoperability between LDTs for challenges that are not limited to a city's boundary; (iii) enable the move towards a network of LDTs; and (iv) spur a significant and fair European digital twin market, fostering innovative European SMEs and growth. The Commission aims to fund the creation of an EU LDT Toolbox, including re-usable tools, reference architectures, open standards and technical specifications for LDTs, and encourages EU Member States to invest in their implementation by cities. The Commission is working with the Living-in.eu community (signatories and supporters) to map existing use cases and potential interest through the LDT iconic project. The Commission will also foster the exchange of good practice and consolidate knowledge around LDTs, identify drivers and barriers for the large-scale deployment of LDTs (and LDT ecosystems), and keep pace with the state of the art and technological developments.
Difference Between Digital Twins and Local Digital Twins/LDT Ecosystem
A Digital Twin is a decision-support system; a virtual representation of a physical object. Local Digital Twins are digital twins of cities and communities. Local Digital Twins can support: (i) simulation (e.g. reaction to crisis events, climate change scenarios such as flooding or resilience, cause and effect, and what-if scenarios); (ii) planning (e.g. mobility, logistics, cycling and road planning, tourism and event planning, building refitting, urban planning, and the management of new buildings, public places and the underground environment); and (iii) real-time decision-making (e.g. energy consumption, parking and traffic information, as well as pre-emptive maintenance of buildings) in the local environment. Local Digital Twins are different from other digital twins in that they are set in a specific ecosystem, driven by common/EU values, with a flexible governance mechanism, characterised by security/privacy/GDPR, control systems and the use of platforms. They aim to create public/social value, tend to support openness and transparency, and facilitate the breaking of silos (e.g., a mobility model integrated with other sectoral models). Local Digital Twins are a system of systems. In future, they may comprise not only the digital twins of cities, but also of citizens. A possible organisational format for an LDT could be the Societas Cooperativa Europaea (SCE).
Cities' Experience with LDTs: Rotterdam
Rotterdam had to make important choices when creating its digital twin, such as with regard to its scope (municipality vs city), approach to development (well-defined project vs open process/'journey') and procurement ('traditional' vs innovative). The city highlighted the importance of the role of governance, whereby government would have a proactive (but not exclusive) role, co-creating solutions
Fig. 34 Digital twin of Rotterdam. (Image from Smart Cities Marketplace, European Commission)
Fig. 35 Customs at port of Rotterdam digitised further. (Image from Port of Rotterdam Authority)
with the market, relying on Minimal Interoperability Mechanisms (MIMs), and procuring separate functionality building blocks rather than a single solution. The city also underlined the importance of trust amongst the participants within the ecosystem, as well as interoperability, flexibility and transparency. For Rotterdam, an LDT describes the current physical reality of a city, based on the combination of geographical (3D) and real-time data. The Rotterdam LDT is currently in its Minimum Viable Product (MVP) phase, aiming to be operational in 2022, with the objective of further scaling in 2024 (Figs. 34 and 35).
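As the preceding paragraphs describe, an LDT layers real-time observations on top of a static geographical base and uses the combination for what-if analysis. The Python sketch below shows that pattern in a deliberately minimal form for a handful of road segments; the data model, segment names and traffic numbers are illustrative assumptions and do not represent Rotterdam's actual platform.

```python
from dataclasses import dataclass

@dataclass
class RoadSegment:
    segment_id: str
    capacity_vph: int            # vehicles per hour, from the static GIS layer

@dataclass
class TrafficObservation:
    segment_id: str
    flow_vph: int                # latest real-time count from roadside sensors

def what_if_closure(segments, observations, closed_segment_id):
    """Naive what-if: divert the closed segment's flow evenly onto the others
    and report the resulting utilisation of each remaining segment."""
    flows = {o.segment_id: o.flow_vph for o in observations}
    diverted = flows.get(closed_segment_id, 0)
    remaining = [s for s in segments if s.segment_id != closed_segment_id]
    share = diverted / len(remaining) if remaining else 0
    return {
        s.segment_id: (flows.get(s.segment_id, 0) + share) / s.capacity_vph
        for s in remaining
    }

# Illustrative static base map and live feed.
base_map = [RoadSegment("A", 1800), RoadSegment("B", 1200), RoadSegment("C", 900)]
live = [TrafficObservation("A", 1500), TrafficObservation("B", 700), TrafficObservation("C", 400)]

for seg, utilisation in what_if_closure(base_map, live, "A").items():
    print(f"segment {seg}: {utilisation:.0%} of capacity if A is closed")
```

A production LDT would replace the naive even-split rule with a proper traffic model, but the structure (static base layer, live observations, scenario function) is the same.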
Fig. 36 Kalasatama digital twins project. (Image from Municipality of Helsinki)
Cities' Experience with LDTs: Helsinki
The 3D model of Helsinki has existed for over 30 years. There are two next-generation 3D city models of Helsinki: a semantic city information model (CityGML model) and a visually high-quality reality mesh model. The CityGML model is more scientific and analytical, allowing users to perform a variety of analyses, while the reality model, which is physically more accurate, can be utilised in various online services or as the basis for all kinds of design projects. The models are available as open data. Helsinki aims to be the most functional city in the world and to become carbon neutral by 2025. Helsinki executed its first urban digital twin initiative in the Kalasatama district, with the aim of observing how changing weather conditions like wind and sunlight impacted the district and its built environment over time. Other use cases address areas related to renovation history and energy data, heat savings, geo-energy and other alternative energy potential (the Helsinki Energy and Climate Atlas). While the Kalasatama Report has been finalised, the Helsinki Innovation Districts project is still ongoing (Fig. 36).
Building Blocks of an LDT
LDTs are composed of data integration, model, visualisation, simulation and security components. In terms of data, LDTs will need interfaces to linked data, 3D visualisation data, linked databases, agreement on the level of information (LOI) and level of geometry (LOG), semantics and attributes, and a Smart Message Switch (helping to deal with large volumes of real-time information). Beyond visualisation (3D mesh, point cloud), LDTs will also need semantic, intelligent models. Models and simulation will need high-performance systems that can achieve better performance with the
analysis of sub-items of city dynamics. LDTs may also be regarded as critical infrastructure. Local Digital Twins consist of different layers of complexity; they can help facilitate dialogue between various stakeholders (the quadruple helix), improve our understanding of the city and help model where we want to move. Linking different digital twins will require particular governance, but also interoperability, semantic models to allow domain-specific models to exchange data, processes that map and match generic geospatial data to the domain-specific semantic layers, as well as a common language and data standards.
LDT Maturity Levels
Several initiatives exist that can visualise the current situation, but to reap the full benefits of the potential of LDTs, these should also create simulations of possible transformations and evolutions according to foresight approaches (what-if scenarios). This requires re-thinking the city as an ecosystem and the role of local government within it. Beyond data, AI and semantic models, it also needs the creation of trust and organisational/cultural as well as behavioural change. In future, LDTs will progress towards a Digital Urban Community. Maturity levels of LDTs can also be assessed along the dimensions of data, models, directions, interactions, users and level of operations. For example, an LDT may start with only open, geospatial, single-domain, static data, which could be enhanced by citizen data and gradually by real-time data as well as auto-managed data security. In terms of models, an LDT may use a model with basic, analytical features, which could develop towards cross-sector/cross-process models and the complete modelling of the system. As regards direction, the twin could be mono-directional (from physical to digital), bi-directional, or even tri-directional (physical/digital/AI). Concerning interactions, an LDT may move from simple dashboards, to 2D and 3D interfaces, and towards VR interfaces, which also represents progress in terms of decreasing human intervention and increasing automation. When looking at users, early-stage LDTs are often used by data officers and researchers, later by city/transport planners and architects, and at later stages by third-party developers and citizens; finally, LDTs can be made open to all types of users based on access rights.
UDT Levels of Practice: The European Union – DestinE [17].
Destination Earth (DestinE) is a major initiative of the European Commission. It aims to develop a very high-precision digital model of the Earth (a 'digital twin') to monitor and predict environmental change and human impact, in order to support sustainable development. To achieve this ambitious goal, the Commission is joining forces with European scientific and industrial excellence to demonstrate how digital technologies can effectively contribute to a more sustainable and digital future.
To support the tackling of complex environmental challenges, DestinE will help policy-makers to:
–– monitor and simulate the Earth's system developments (land, marine, atmosphere, biosphere) and human interventions;
–– anticipate environmental disasters and the resultant socio-economic crises in order to save lives and avoid large economic downturns; and
–– enable the development and testing of scenarios for ever more sustainable development.
DestinE can benefit from Member States' investments under their Recovery and Resilience Fund Plans, in combination with Digital Europe (in the context of the EuroHPC Joint Undertaking) and Horizon Europe for the related research activities.
Objectives: Destination Earth aims to develop a high-precision digital model of the Earth to model, monitor and simulate natural phenomena and related human activities. As part of the European Commission's Green Deal and Digital Strategy, Destination Earth (DestinE) will contribute to achieving the objectives of the twin transition, green and digital. DestinE will unlock the potential of digital modelling of the Earth system. It will focus on the effects of climate change, water and marine environments, polar areas, the cryosphere, biodiversity, and extreme weather events, together with possible adaptation and mitigation strategies. It will help to predict major environmental degradation and disasters with unprecedented fidelity and reliability. By opening up access to public datasets across Europe, DestinE also represents a key component of the European strategy for data (Fig. 37).
Users of DestinE will be able to access vast amounts of natural and socio-economic information to:
–– Continuously monitor the health of the planet: for example, to study the effects of climate change, the state of the oceans, the cryosphere, biodiversity, land use, and natural resources.
–– Support EU policymaking and implementation: for example, to assess the impact and efficiency of environmental policy and relevant legislative measures.
–– Perform high-precision, dynamic simulations of the Earth's natural systems: focusing on thematic domains such as marine, land, coasts, and atmosphere.
–– Improve modelling and predictive capacities: for example, to help anticipate and plan measures in case of storms, floods and other extreme weather events and natural disasters.
–– Reinforce Europe's industrial and technological capabilities: in simulation, modelling, predictive data analytics, artificial intelligence (AI) and high-performance computing (HPC).
At the heart of DestinE will be a user-friendly and secure cloud-based digital modelling and simulation platform. This platform will provide access to data, advanced computing infrastructure including HPC, software, AI applications and
Fig. 37 Open core platform – destination Earth. (Image from ECMWF/DestinE)
analytics. It will integrate digital twins – digital replicas of various aspects of the Earth's system – on topics such as extreme weather events, climate change adaptation, oceans, biodiversity and more (Fig. 38). DestinE will allow users to access thematic information, services, models, scenarios, simulations, forecasts and visualisations. The underlying models and data will be continuously assessed to provide the most reliable scenario predictions for users. The platform will also enable application development and the integration of users' own data. DestinE will initially serve public authorities and will gradually open up to a larger range of scientific and industrial users, to spur innovation and enable the benchmarking of models and data.
Implementation: The operational core platform, the first digital twins and related services will be made operational as part of the Commission's Digital Europe Programme. Horizon Europe will provide research and innovation opportunities that will support the further development of DestinE. There will be synergies with other relevant EU programmes, such as the Space Programme, and with related national initiatives.
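Figure 38 depicts the loop in which observations are assimilated into a model, the model produces forecasts, and the results are fed back to users and the real world. The following Python fragment sketches that loop in purely schematic form; it is not the DestinE platform or its API, and every function name and value in it is an invented placeholder.

```python
def acquire_observations():
    # Placeholder for Earth-observation / in-situ data ingestion.
    return {"sea_surface_temp_c": 18.4, "wind_speed_ms": 22.0}

def update_model_state(state, observations):
    # Assimilate the latest observations into the model state (trivially, by merging).
    return {**state, **observations}

def simulate(state, horizon_hours):
    # Stand-in for a high-performance simulation producing a forecast.
    return {"storm_surge_risk": "high" if state["wind_speed_ms"] > 20 else "low",
            "horizon_hours": horizon_hours}

def feed_back(forecast):
    # Results are returned to users and decision-support systems.
    print(f"Forecast for next {forecast['horizon_hours']}h: "
          f"storm surge risk is {forecast['storm_surge_risk']}")

state = {}
for _ in range(2):                       # two cycles of the twin's observe-model-predict loop
    state = update_model_state(state, acquire_observations())
    feed_back(simulate(state, horizon_hours=48))
```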
Fig. 38 How a digital twin handles information and feeds it back for real world use. (Image from ECMWF/DestinE)
DestinE Digital Twins: The digital twins of DestinE will give both expert and non-expert users tailored access to high-quality information, services, models, scenarios, forecasts and visualizations on climate adaptation models, extreme weather events, natural disaster evolution and more. Digital twins rely on the integration of continuous observation, modelling and high-performance simulation, resulting in highly accurate predictions of future developments.
Preparing for DestinE: The first stakeholder workshop on DestinE was organized in November 2019 to announce the initiative and collect feedback from potentially interested stakeholders. It brought together a large number of potentially interested parties from public authorities and the industrial and scientific communities. To explore the potential application areas and parallel initiatives, the Joint Research Centre conducted a study on DestinE use case analysis and a survey on digital twin initiatives in EU countries, with contributions from ESA, ECMWF, EUMETSAT, Commission services and agencies. A further series of open stakeholder workshops took place:
–– In October 2020, two workshops on user specifications for the first two Digital Twins (extreme natural events and climate change adaptation).
–– In November 2020, a workshop on DestinE system architecture design.
–– In February 2021, a policy user engagement workshop was organised to discuss the two priority twins with potential policy users and their use case proposals.
Timeline
DestinE will be developed gradually through the following key milestones:
–– By 2024: Development of the open core digital platform and the first two digital twins, on extreme natural events and climate change adaptation.
–– By 2027: Integration into the platform of additional digital twins, such as the digital twin of the ocean, to serve sector-specific use cases.
–– By 2030: A 'full' digital replica of the Earth through the convergence of the digital twins already offered through the platform.
UDT Levels of Practice: United States of America.
Cities all over the USA are now using UDTs to study the effects of development, traffic, climate change and disaster recovery. U.S. government agencies have been using data to make informed decisions in various areas, as data-driven organisations deliver better outcomes [29]. The report also states that four out of five local government officials in the U.S. say they have improved their use of data in the past 6 years to drive better outcomes for residents. These are some examples of cities that leverage data effectively:
–– Arlington, Texas: tracked ridership data for a pilot rideshare program and then allocated funding to expand it based on the findings. As a result, officials expanded the program this year, giving residents their first citywide public transit system.
–– Tulsa, Oklahoma: moved US$500,000 of federal funding from a first-come, first-served strategy to one that favoured the poorest neighbourhoods, based on an analysis that showed existing processes didn't help underserved communities.
–– Boulder, Colorado: redesigned a procurement process for 65 additional miles of fibre-optic infrastructure to prioritise value and results rather than dictating how to perform the work. Using new data for evaluating bids, the city saved US$8 million and subcontracted with more partners, including small and minority-owned businesses.
"You can simulate the effects of your decisions on a physical object, in this case a city or a campus, before you actually make those changes in the real world," said Ankit Srivastava, associate professor of mechanical and aerospace engineering at Illinois Tech. "It makes the decisions much more data-driven and much less expensive because you can play with various possibilities."
Case Study 1: Cityzenith – Energy Management in Buildings [18]
Cities such as New York, Boston and Washington, D.C., have created targets for a future where buildings have net-zero carbon emissions. But decarbonizing a
Fig. 39 Clean cities – clean future, Cityzenith. (Image from Cityzenith, 2021)
building requires analysis and implementation of energy management systems and renewable strategies, in addition to the purchase of carbon offsets (Fig. 39). Cityzenith is one of the companies that focus on developing Urban Digital Twins to help do just that. The model can start with as few as two to five buildings, said Michael Jansen, CEO of Cityzenith. Property owners input their data using a template, and the resulting digital twin helps them run and optimize simulations based on factors such as a building's age, condition and purpose. The goal is to have more energy-efficient buildings, and therefore more energy-efficient cities. Cityzenith plans to donate the technology to a total of 100 cities by 2024.
Case Study 2: Chattanooga, TN [19]
Researchers at the U.S. Department of Energy's Oak Ridge National Laboratory and National Renewable Energy Laboratory partnered with Chattanooga on urban digital twins to increase energy efficiency while also optimising drivers' travel time, speed, and safety, using simulations.
Case Study 3: Boston, MA [20]
Boston has seen a lot of development in the last few years. This historic city needed to be able to quickly visualize the impact of approved and proposed projects, so it decided to build an Urban Digital Twin to help (Fig. 40). "Now we can actually apply new [zoning and development] rules to a whole neighborhood and see what the impact is before making a decision," says Kennan Rhyne, AICP, interim deputy director of downtown and neighborhood planning for the Boston Planning & Development Agency (BPDA). The BPDA has also used its digital twin to measure the impacts of developments on parking, energy use, carbon emissions, and waste. Carolyn Bennett, deputy director of GIS for the BPDA, hopes to advance planning workflows by integrating developers'
Fig. 40 3D smart model. (Image from The Boston Planning and Development Agency, 2018)
project submissions seamlessly into the smart model, and eventually designing the model to be able to conduct suitability and growth capacity analyses. Within the main model they can see active, historical, and proposed structures. They even have alternative models of structures, which show how a proposed development may have been altered over time. The model helps the city – planners and decision makers, as well as residents – and shows how planning and development have changed over time and could change their future.
Case Study 4: Las Vegas, NV [18]
An area of Las Vegas is to adopt 'Internet of Things' (IoT) and 'digital twin' technology to tackle issues including noise pollution, water management and emissions from buildings. A digital twin provider and an IoT exponent are jointly developing a system to transition the US city to zero-carbon emissions. A significant area of downtown Las Vegas will be equipped with a 5G network, plus IoT and digital twin technology, to improve mobility, air quality, noise pollution, water management and emissions from major buildings. Las Vegas chief innovation officer Michael Sherwood said: "Digital Twins are rapidly becoming vital to how cities are run. Now in Las Vegas we will have a city-scale digital twin that is driven by the physical environment, and ultimately letting us control key systems through it. This will give us new levels of insights and
Fig. 41 Brooklyn Navy Yard building. (Image from Dattner Architects, 2018)
control to benefit city planners, residents, and businesses. We're setting the benchmark for cities around the world to become smarter, efficient, safer and more sustainable."
Case Study 5: New York City, NY [18]
A ground-breaking project at the iconic Brooklyn Navy Yard site in New York City will show how a Digital Twin platform enables buildings of any size, type, or age to significantly cut operating costs and carbon emissions (Fig. 41).
UDT Levels of Practice: United Kingdom [21].
"Data now as important to UK infrastructure as concrete or steel." This declaration by Sir John Armitt, chairman of the National Infrastructure Commission, shows the importance attributed to digital transformation in the UK's development strategies, which led to the creation of the National Digital Twin programme (NDTp). Run by the Centre for Digital Built Britain (University of Cambridge), the NDTp was set up to deliver key recommendations of the National Infrastructure Commission's 2017 'Data for the Public Good' report, and was launched by HM Treasury in July 2018. The NDTp works towards facilitating an ecosystem of securely connected digital twins across a range of infrastructure assets and organizations that, combined, would become the National Digital Twin (NDT). Cited by the National Infrastructure Commission as having the potential to unlock an additional £7 billion per year of benefits across the UK infrastructure sector, the NDT will support better decision-making to enable a sustainable, zero-carbon and resilient economy and society for all (Fig. 42).
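The NDTp's premise is that separately owned twins only become a National Digital Twin if they can exchange information through a common, secure framework. The snippet below is a minimal, hypothetical illustration of one ingredient of that idea – asset twins publishing their state in a shared schema so that a cross-sector consumer can read both – and it is not the actual Information Management Framework; the class, field names and values are invented for illustration.

```python
import json
from dataclasses import dataclass, asdict

# A deliberately tiny "shared schema": every participating twin publishes its
# state in this common shape, whatever its internal data model looks like.
@dataclass
class AssetState:
    asset_id: str
    asset_type: str         # e.g. "bridge", "water_main"
    owner: str
    condition_index: float  # 0.0 (failed) .. 1.0 (as new)

def publish(state: AssetState) -> str:
    """Serialise a twin's state into the shared, interoperable format."""
    return json.dumps(asdict(state))

# Two twins run by different (hypothetical) organisations, both speaking the shared schema.
bridge_twin = AssetState("BRG-0042", "bridge", "rail operator", 0.82)
water_twin = AssetState("WM-0317", "water_main", "water utility", 0.55)

# A cross-sector consumer can now reason over both without bespoke adapters.
for record in (publish(bridge_twin), publish(water_twin)):
    asset = json.loads(record)
    if asset["condition_index"] < 0.6:
        print(f"{asset['asset_id']} ({asset['owner']}) flagged for maintenance review")
```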
Fig. 42 This is digital Britain. (Image from National Digital Twin programme – NDTp)
The National Digital Twin programme aims to offer key benefits for all stakeholders:
–– Benefits to society: Transparent stakeholder engagement. Improved customer satisfaction and experience through higher-performing infrastructure and the services it provides.
–– Benefits to the economy: Increased national productivity from higher-performing and resilient infrastructure operating as a system. Improved measurement of outcomes. Better outcomes per whole-life pound. Enhanced information security and thereby personnel, physical and cyber security.
–– Benefits to business: New markets, new services, new business models, new entrants. Improved business efficiency from higher-performing infrastructure.
–– Benefits to the environment: Less disruption and waste. More reuse and greater resource efficiency – a key enabler of the circular economy in the built environment.
Since its launch in 2018, the NDTp has reached a series of key milestones. Initial activity focused on aligning industry and government behind a common definition and approach to information management, so that data can be shared openly and securely between future digital twins (Fig. 43). Main deliverables include:
–– Gemini Principles (2018), a paper setting out the proposed principles to guide the national digital twin and the information management framework that will enable it.
–– DFTG roadmap (2018), a prioritised plan for five core streams responsible for the delivery of the information management framework.
–– Digital Twin Hub (2019), a web-enabled community for early adopters of digital twins to learn through sharing and progress by doing.
Fig. 43 Roadmap for delivering the information management framework for the built environment. (Image from National Digital Twin programme – NDTp)
–– Flourishing Systems (2020), a paper advocating a shift in vision for infrastructure that is people-focused and system-based.
–– Pathway towards an IMF (2020), a technical paper and a summary paper on the proposed technical core for the information management framework; and the
–– Gemini Programme (2020), which aims to develop resources for the DT Hub community and expand the outreach of the Information Management Framework and of the NDTp itself. The Gemini programme brings together people and organisations willing to volunteer by contributing time and resources to the NDTp to develop materials for use by the DT Hub community.
Since March 2018, a wide variety of case studies have been selected and thoroughly described within the NDTp. Among those, the following can be listed as recent examples:
Case Study 1: Smart Infrastructure for Safety and Reliability Across the Rail Network
When we travel by train, we expect to arrive at our destination safely and on time. The safety and performance of its service network is therefore a key priority for Network Rail. Researchers have inherited two intensively instrumented bridges and are transforming their high volume and velocity of data into a digital twin showing the wear and pressures on the bridges, as well as other information that can help the asset owners predict when maintenance will be required and meet their key priorities. Remote monitoring has several benefits over using human inspectors alone. Sensors reduce the subjectivity of monitoring. Factors such as light levels, weather
and variations in alertness can change the subjective assessments made by human inspectors. By monitoring the stresses on the bridge, sensors may also be able to identify issues arising before visual inspection can detect them. A human inspector will still be sent to site to follow up on what the remote sensing has indicated, and engineers will of course still need to perform maintenance. However, remote monitoring allows the asset owners to be smarter about how these human resources are deployed. The digital twin of the Staffordshire Bridges centres on a physics-based model for conducting structural analysis and load-carrying capacity assessments. Site-specific information, such as realistic loading conditions obtained from the sensors, is fed into the physics-based model to simulate the real structure and provide the outputs of interest. A digital twin replica of the structure will be able to provide bridge engineers with any parameter of interest anywhere on the structure.
Case Study 2: Smart Hospital of the Future – Digital Technologies, Service Innovation, and Hospital Design
Plans are afoot to relocate the century-old Moorfields Eye Hospital from its current home in London's Old Street to a two-acre site at St Pancras Hospital, in the King's Cross area. The move brings the opportunity to use digital technologies (AI, analytics, digital twins) to innovate service provision to NHS patients and to redesign hospitals and the wider health (eco)system. Cambridge researchers are working with Moorfields clinicians and management on a project to help Moorfields plan services fit for a digital future, leading to better health outcomes and improved productivity at lower cost, with superior patient/user satisfaction. Moorfields is the leading provider of eye health services in the UK and a world-class centre of excellence for ophthalmic research and education. The new hospital will create an integrated eye-care, research and education facility with the aim of facilitating improvements in people's sight. A Cambridge-based research team is currently working with multiple stakeholders – asset owners, management, service providers, and medics – to better understand how the digital transformation of the built environment will create new opportunities to enhance and improve the services provided to patients in the new smart hospital. Digitalisation is emerging in the application and use of AI and machine learning, cloud technologies, virtual platforms, telemedicine and other digital technologies. This project explores potential drivers in relation to the design, construction, and planned operation of the new smart eye hospital – known as 'Oriel' – a joint venture between Moorfields Eye Hospital, UCL and Moorfields Eye Charity. The rapid response to delivering services during COVID-19 has also become an important part of the study, and researchers are exploring how the digital transformation during and post COVID might inform the reconfiguring of care provision around the planned physical hospital hub at Moorfields. This multi-disciplinary research project draws on service innovation, ecosystem dynamics, and digital transformation, putting the wellbeing of visually impaired patients at the heart of service delivery across the new hospital building and wider ecosystem.
Fig. 44 West Cambridge Twin: Developing a digital twin demonstrator. (Image from Bentley Systems, 2019)
Case Study 3: Digital Twins – Driving Business Model Innovation
For many engineering and manufacturing firms, digital transformation remains elusive as they struggle to emulate the success of platform-based consumer businesses such as Uber, Airbnb and Facebook. While new technologies are undoubtedly delivering efficiency gains, a complete digital revolution is yet to be achieved in the B2B (business-to-business) world. There are many reasons why success has yet to emerge for B2B platforms, including existing market monopolisation, high entry costs, shortage of capital, and a lower churn rate in comparison to B2C (Figs. 44 and 45). Researchers at the Cambridge Service Alliance (CSA), at the Institute for Manufacturing, University of Cambridge, are exploring the potential of Digital Twins (DTs) as enablers of business model innovation for B2B organisations to reap the rewards of digitalisation. The digital twin concept represents an opportune channel for business model innovation (BMI) in a B2B platform context. Specifically, digital twins can help create a conduit for customer insight and operations data that can in turn create new services offered in the after-sales market. Could digital twins be the way forward? Some of the players in the B2B field are traditional companies that work in heavily regulated markets. They are familiar with the physical and social dimensions of operations but still struggle to transition into the digital dimensions of business models. This makes it unclear how established firms (incumbents) will respond to this necessary change – and what types of business models are required with the adoption of DTs in the manufacturing and construction sectors.
Fig. 45 West Cambridge Twin: Developing a digital twin demonstrator. (Image from Bentley Systems, 2019)
UDT Levels of Practice: China.
Case Study 1: Virtual Shanghai [22]
With over 26 million residents, Shanghai has the highest population of any city in China, and is the third most populous in the world. Beijing-based digital twin specialists 51World have succeeded in creating a complete virtual clone of the city in Unreal Engine, modelling all of its 3750 square kilometers (Fig. 46). It contains over 20 landmark structures, including the Oriental Pearl and Shanghai Tower, and uses data from satellites, drones, and sensors to generate digital versions of countless other buildings, roads, waterways, and green spaces. Ultimately, the plan is to turn this model into a true digital twin that is continuously updated in near real time.
Case Study 2: Beihu Sewage Treatment Plant, Wuhan [23]
The Beihu Sewage Treatment Plant project, located in Wuhan, has a coverage area of about 130 square kilometers, serving 2.48 million people. The China Railway Shanghai Engineering Bureau (CRSEB) is responsible for constructing a new plant – the largest-scale sewage treatment plant ever built in China and in greater Asia. The scope includes the civil design of more than 31 buildings, ranging from pump rooms to distribution wells, multiple tanks, and other related treatment equipment. Eleven major categories of electromechanical pipelines are covered by the project engineering requirements (Fig. 47).
Fig. 46 Virtual Shanghai. (Image from 51 World)
Fig. 47 Immersive digital twins help China Railway Shanghai Engineering Bureau establish new practices to deliver a sewage treatment plant. (Image from China Railway Shanghai Engineering Bureau)
There are major challenges in this project that require highly efficient and controlled design and project management practices to deliver it successfully. Some of the requirements include managing the deformation, shrinkage, and cracking of highly sensitive biological and membrane pools, while accounting for the difficulty of controlling the impermeability of concrete. Minimizing field rework to place the
numerous reserved holes and pre-embedded bushings across multiple structures is a monumental challenge requiring robust methods and standards. To achieve the stringent requirements of the project, CRSEB turned to BIM methodologies to establish a digital twin of the sewage treatment plant. Incorporating unmanned aerial vehicles (UAVs) to capture photos of the existing site extended the information-based digital twin to an immersive one. Leveraging the immersive digital twin establishes more reliable engineering workflows, such as rebar modeling. Using the graphical representation of the digital twin helped improve methods for managing project progress and cost, and enhanced the execution and safety of on-site construction.
UDT Levels of Practice: South Korea.
The South Korean government started its Smart City/Digital Twin programme in 2017 to address various issues, such as traffic, crime, environmental pollution, efficient energy management, urban planning, disaster management and redevelopment. There are approximately ten Smart City projects in South Korea today, and the number is expected to increase. All4Land, a leading geospatial service provider in South Korea, is working on a mobility project in Jeonju, the Virtual Seoul project for Seoul, and a 3D geospatial project of Osan for the South Korean National Mapping Agency (NGII). All4Land has been building 3D digital models that include BIM and GIS data since 2014. To save time and reduce cost, All4Land collects oblique and Lidar data simultaneously with Leica's CityMapper airborne mapping sensor and quickly processes the data with Leica HxMap, a multi-sensor processing platform (Fig. 48).
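All4Land's workflow, outlined here and detailed further in Case Study 1 below, combines externally captured geometry (oblique imagery and Lidar) with GIS layers and indoor BIM data to raise the level of detail of the city model. The fragment below is a minimal sketch of how such a merged building record might be represented; the class, field names and LOD rule are simplified assumptions rather than the vendor's actual data model.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BuildingModel:
    building_id: str
    exterior_mesh: str                  # reference to the airborne-capture mesh (oblique + Lidar)
    gis_attributes: dict = field(default_factory=dict)   # e.g. utilities, transport links
    interior_bim: Optional[str] = None  # reference to an indoor BIM/scan model, if available

    @property
    def level_of_detail(self) -> str:
        # Simplified rule: exterior-only models are treated as LOD3,
        # models enriched with indoor BIM as LOD4.
        return "LOD4" if self.interior_bim else "LOD3"

def attach_bim(model: BuildingModel, bim_reference: str) -> BuildingModel:
    """Merge an indoor BIM reference into an existing exterior model."""
    model.interior_bim = bim_reference
    return model

# Hypothetical record for illustration only.
b = BuildingModel("JEONJU-0005", exterior_mesh="mesh/jeonju_0005.obj",
                  gis_attributes={"district": "Hanok village", "use": "cultural"})
print(b.level_of_detail)          # LOD3: exterior geometry only
attach_bim(b, "bim/jeonju_0005.ifc")
print(b.level_of_detail)          # LOD4: exterior + indoor BIM
```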
Fig. 48 CityMapper LiDAR point cloud. (Image from Leica Geosystems)
The resulting 3D models are enhanced with GIS data, such as vector maps of utilities, telecommunication or transportation infrastructure, to support in-depth analysis. The external shapes of buildings are made from CityMapper data, and the internal shapes are made from 3D scanning data, design data and BIM data.
Case Study 1: Mobility as a Service, Jeonju [24]
One of the test projects for the Urban Digital Twin concept in South Korea was in the city of Jeonju, a tourist area in North Jeolla Province in the south-west of the country. Many historic buildings, such as those in the traditional Hanok village, attract visitors, and the city is known for its excellent regional cuisine. An Urban Digital Twin was created as part of MaaS (Mobility as a Service) to attract tourists and to help them travel around the city using electric bicycles, electric kick boards and other transportation (Fig. 49). All4Land was selected to conduct two projects in Jeonju, first mapping an area 4 square kilometres (sq. km) in size, including BIM for 20 buildings, followed by a 205 sq. km project. The City of Jeonju and LX, the government cadastral organization, are in the process of acquiring BIM data for all government-owned buildings in the larger area. Using CityMapper, aerial data capture for the larger project took All4Land only 2 days, with an additional 2–3 months for data processing and 3D building modelling. The LOD3 outdoor data captured with the CityMapper were merged with indoor BIM data in the test area to create an LOD4 model of the real world.
Case Study 2: Virtual Seoul [25]
During the COVID-19 pandemic, the Seoul Convention Bureau decided to go ahead and develop an Urban Digital Twin to promote Seoul as a virtual professional destination, offering a 3D MICE event platform with a package of 9 venues,
Fig. 49 CityMapper LiDAR point cloud. (Image from Leica Geosystems)
Fig. 50 Virtual Seoul. (Image from Seoul Convention Bureau)
including Coex, Nodeul Island on the Hangang river, Seoul City Hall, and Seoul Tourism Plaza (Fig. 50). The UDT platform offers a Content Management System to accommodate the hybrid trend of hosting e-conferences, live discussions, and meet-and-greets through live-streaming and video-on-demand. The Seoul Convention Bureau is now one of the pioneers in this new field of "virtual destination marketing".
References
1. "World Urbanization Prospects", Population Division of the United Nations Department of Economic and Social Affairs (UN DESA), 2018. https://population.un.org/wup/Publications/Files/WUP2018-Report.pdf
2. "The World's Cities in 2018: Data Booklet", Population Division of the United Nations Department of Economic and Social Affairs (UN DESA), 2018. https://digitallibrary.un.org/record/3799524/files/the_worlds_cities_in_2018_data_booklet.pdf
3. Tao, F., Sui, F., Liu, A., Qi, Q., Zhang, M., Song, B., Guo, Z., Lu, S. C.-Y., & Nee, A. Y. C. (2018). Digital twin-driven product design framework. International Journal of Production Research, 1–19. https://doi.org/10.1080/00207543.2018.1443229
4. "Smart Cities and Smart Spaces", ABI Research, January 2021 quarterly report. https://www.abiresearch.com/market-research/product/7778780-smart-cities-and-smart-spaces-quarterly-up/
5. Van den Besselaar, P., & Beckers, D. (2005). The life and death of the great Amsterdam digital city. Springer. Third International Digital Cities Workshop, Amsterdam, The Netherlands, September 18–19, 2003, Revised Selected Papers. https://link.springer.com/book/10.1007/b107136
6. Swabey, P. (2012, February 23). IBM, Cisco and the business of smart cities. Information Age. https://www.information-age.com/ibm-cisco-and-the-business-of-smart-cities-2087993/
7. Report, Frost & Sullivan. (2020). https://www.frost.com/news/press-releases/smart-cities-to-create-business-opportunities-worth-2-46-trillion-by-2025-says-frost-sullivan/
8. Smolan, R., & Erwitt, J. (2012). The human face of big data. Against All Odds Productions. https://books.google.it/books/about/The_Human_Face_of_Big_Data.html?id=UTuxMQEACAAJ&redir_esc=y
9. "Worldwide Global DataSphere IoT Device and Data Forecast, 2020–2024", International Data Corporation – IDC, July 2020. https://www.idc.com/getdoc.jsp?containerId=US46718220
10. "Analytical Report 6: Open Data in Cities 2", European Data Portal, 2020. https://data.europa.eu/it/highlights/open-data-european-cities
11. Evans, D. (2011, April). The internet of things – How the next evolution of the internet is changing everything, report. CISCO. http://www.cisco.com/web/about/ac79/docs/innov/IoT_IBSG_0411FINAL.pdf
12. Nordrum, A. (2016, August). Popular Internet of Things forecast of 50 billion devices by 2020 is outdated warning: All projections for the Internet of Things are subject to change, article in IEEE Spectrum. https://spectrum.ieee.org/popular-internet-of-things-forecast-of-50-billion-devices-by-2020-is-outdated
13. "Urban Planning and Digital Twins", ABI Research, July 20, 2021. https://www.abiresearch.com/market-research/product/7779044-urban-planning-and-digital-twins/?utm_source=media&utm_medium=email
14. Marigold, W. (2021, January 25). Be smart to stay safe: Harnessing technology for safer and more efficient buildings. HDR Inc. https://www.hdrinc.com/insights/be-smart-stay-safe-harnessing-technology-safer-and-more-efficient-buildings
15. Virtual Singapore, National Research Foundation, Singapore (NRF), 2021. https://www.nrf.gov.sg/programmes/virtual-singapore
16. "Urban Digital Twin", Living-In.EU, 2021. https://living-in.eu/groups/solutions/urban-digital-twin
17. "Destination Earth", European Commission, 2021. https://digital-strategy.ec.europa.eu/en/policies/destination-earth
18. "Cityzenith: energy management in buildings", Cityzenith, 2021. https://cityzenith.com/
19. "Virtual EPB", Oak Ridge National Laboratory, Oak Ridge TN, USA, 2017–2019. https://evenstar.ornl.gov/autobem/virtual_epb/
20. Brooks, P. (2018, April 23). 3D GIS city model guides development near Boston Common and beyond. Esri. https://www.esri.com/about/newsroom/blog/3d-gis-boston-digital-twin/
21. National Digital Twin Programme. (2021). https://www.cdbb.cam.ac.uk/what-we-do/national-digital-twin-programme
22. Weir-McCall, D. (2020, September 15). 51World creates digital twin of the entire city of Shanghai. Unreal Engine. https://www.unrealengine.com/en-US/spotlights/51world-creates-digital-twin-of-the-entire-city-of-shanghai
23. "Digital Twin Helps Build Sewage Treatment Plant", Industrial Automation – India, November 30, 2018. https://www.industrialautomationindia.in/articleitm/7050/Digital-Twin-Helps-Build-Sewage-Treatment-Plant/articles
24. Duffy, L. (2021). South Korean smart cities combine 3D digital models with GIS and BIM. Leica Geosystems. https://leica-geosystems.com/case-studies/reality-capture/south-korean-smart-cities-combine-3d-digital-models-and-gis-and-bim
25. "Virtual Seoul", Seoul Convention Bureau, 2021. https://www.miceseoul.com/vsp
26. Myers, J., Lee, G. M., Larios, V., Essaaidi, M., & Drobot, A. (2021, May). The global observatory for urban intelligence: Unravelling the complexities of city ecosystems. IEEE IoT Newsletter. https://iot.ieee.org/newsletter/may-2021
27. Sharif, M. M. (2020, February). The future of our planet depends on getting our cities right. UN-Habitat. https://www.un.org/development/desa/undesavoice/more-from-undesa/2020/02
28. "Take Action for the Sustainable Development Goals", United Nations, 2021. https://www.un.org/sustainabledevelopment/sustainable-development-goals/
29. "Closing the Data Gap: How Cities Are Delivering Better Results for Residents", What Works Cities – WWC, July 2021. https://results4america.org/wp-content/uploads/2021/06/Deloitte-WWC-Data-Gap-Report_vFinal-063021.pdf
Thriving Smart Cities
Joel Myers is an IT engineer, entrepreneur, and leading international technologist, specialising in the creation and development of innovative technology solutions in Cultural Heritage, Tourism, and Smart Cities. His company, HoozAround Corp. (USA), owns and manages a digital platform called IoP (the "Internet of People") that provides socio-economic recovery for cities through a micro-currency called Hoozies™. It has been successfully piloted by the Mexican State Government of Jalisco, reaching the 8.2 million people living and working there and its thousands of small businesses, to support the local economy during and after COVID-19 and to restimulate growth. The next municipality to launch the IoP platform is New York City in 2021. One of the end-products of the IoP platform is unique real-time urban intelligence that flows in from cities on the population's demographics, ongoing business transactions, mobility, health, and social community interactions, based on the use of the Hoozies™ micro-currency and AI/ML. Real-time reporting and analytics then offer city leaders and stakeholders an in-depth understanding of the complex and intricate behaviours that make a city function. Municipalities may closely monitor the effects of their operations and policies and predict issues in order to avoid them. As Chair of the IEEE IoT Initiative for Smart Cities, Joel Myers has focused his working group on redefining the digital transformation of urban environments from a truly "People-Centric" focal point, exploring the goals of the smart city industry movement alongside the need for humanity to remain connected as physical people. This "People-Centric" approach is leading to a far more multi-disciplinary collaboration between the technology industry of IEEE IoT and smart city leaders and stakeholders from urban planning; infrastructure and engineering; finance and economics; community engagement; cultural heritage and tourism; sociology; law and governance; and so on. This new approach was reflected in the past 2 years of smart city sessions, speakers, and panellists at the IEEE World Forum on IoT. Through his leadership of the IEEE IoT Global Cities Alliance (GCA), IEEE has developed a close, ongoing collaboration and networking between the tech industry and smart cities, on a worldwide and regional scale, in order to work together in understanding urban needs and priorities and in sharing and developing projects, results, best practices, and standards to support interoperability. One of the GCA's core initiatives is the Observatory of Urban Intelligence: a cloud platform of observable urban data, gathered by crowd-sourcing IEEE's global network of universities and student branches, building unique models and correlations for understanding cities and the effectiveness of smart city solutions. This project is being developed jointly with the ITU-T under the United Nations' "United 4 Sustainable Cities" initiative and the United Cities & Local Governments organization.
The second core initiative of the IEEE IoT GCA is promoting existing best practices and standards globally to city leaders, and developing new standards based on priority issues communicated by the participating cities themselves. This initiative is run under the IEEE SA Industry Connections programme "Smart Cities Standards and Best Practices", where IEEE, other standards organisations such as ITU-T, and smart cities work on a common agenda. The work carried out by Joel Myers has been covered in international newspapers and journals such as the BBC, New York Times, Hong Times, the Hindu Times, Wired, and Forbes Magazine.
Victor Larios followed a higher education degree program at the ITESO University in Mexico (B.Sc. in Electronics Engineering), graduating in 1996. In 1997 he received an M.Sc. from the Université de Technologie de Compiègne in France and then went on to obtain a Ph.D. in Computer Engineering in 2001. Since 2004, he has been a Full Professor in the Information Systems Department at the University of Guadalajara in Mexico. In April 2014, Dr. Victor M. Larios founded and became director of the "Smart Cities Innovation Center (SCIC)" at the University of Guadalajara, where he leads a group of researchers in Smart Cities and Information Technologies. The SCIC is a think tank that helps government, industry, and other academic partners join efforts to improve quality of life and social wellbeing within an urban environment, using technology as the core driver of transformation. His primary research interests are Smart Cities, IoT Distributed Systems, Networking, Multiagent Systems, and Data Visualization using Virtual Reality. Dr. Larios has published more than 70 papers in international scientific journals and conferences and has published a book on "Serious Games". Dr. Larios has ongoing collaborations in projects with the high-technology industry and government, using design thinking and agile methodologies to accelerate technology transfer in living labs. As an entrepreneur, he is the founder and CIO of the consulting company IDI Smart Cities, which collaborates with Advion Solutions LTD. in Finland. Its main activities are market research, promoting scholarships to develop local talent, and supporting international projects in Latin American countries and the European Union. One of its key efforts is to introduce the Circular Economic Model as a sustainability component for mega-cities. As a volunteer, Dr. Larios provides technical leadership on the Guadalajara Smart City project for the government and for international NGOs such as the IEEE. Victor M. Larios is a Senior Member of IEEE with 29 years of membership. Since 2013, he has led the "Guadalajara Core City" in the IEEE Smart Cities Initiative. Dr. Victor M. Larios is also a guest editor at the IEEE IoT Magazine and was general co-chair of the 2020 IEEE International Smart Cities Conference (ISC2).
Since July 2019 he has been working together with Joel Myers, Chair of the IEEE IoT Initiative for Smart Cities, to develop the "Internet of People" (IoP), a people-centric approach to connecting people within cities to build opportunities and growth on a business and social level. In March 2020 the first IoP pilot project was launched by the Mexican State Government of Jalisco to support and incentivise local economies and communities in recovering from the COVID-19 pandemic, through the IoP cloud/mobile platform and a pioneering digital micro-currency called Hoozies™.
Oleg Missikoff After more than two decades of scientific and academic activities, in 2021 he co-founded the Earth 3.0 Foundation, whose mission is to support the digital transformation journey in Africa. His main interests are sustainable development, the digital twin paradigm, and systems innovation, both for local communities and enterprises.
Digital Twins for Nuclear Power Plants and Facilities David J. Kropaczek, Vittorio Badalassi, Prashant K. Jain, Pradeep Ramuhalli, and W. David Pointer
Abstract The nuclear digital twin (DT) is the virtual representation of a nuclear energy system across its lifecycle. The nuclear DT uses real-time information and other data sources to improve the processes of design, licensing, construction, security, O&M, decommissioning, and waste disposal. By leveraging the knowledge base and experience from the past 40 years of LWR operation, the nuclear DT is helping to accelerate the development and deployment of advanced nuclear technology in the areas of passive safety, new fuel forms, instrumentation, and reactor control. For the currently operating nuclear fleet, DTs are reducing operational risks, increasing plant availability, increasing energy capability, and reducing electricity production costs. For advanced fission and fusion reactors, DTs are being used to achieve passive safety and built-in security-by-design. Rapidly deployable small modular reactor (SMR) and microreactor designs compatible with modular construction techniques and advanced manufacturing will be the new normal, reducing the need for large capital expenditures and compressing construction schedules. In addition, lower operational and maintenance costs will be realized by reducing the complexity of operations, staffing needs, and maintenance-related activities.
Keywords Decommissioning · Digital Twin · Dual Coolant Lead-Lithium (DCLL) · Fluoride salt-cooled high-temperature reactor (FHR) · Fusion Energy Reactor Models Integrator (FERMI) · High-Temperature Gas-cooled Reactor (HTGR) · Light-Water Reactor (LWR) · Liquid Metal cooled nuclear Reactor (LMR) · Nuclear Power Plant (NPP) Lifecycle · Operation & Maintenance (O&M) · Quality Assurance (QA) · Small Modular Reactor (SMR) · Special Nuclear Material (SNM) · Virtual Environment for Reactor Applications (VERA)
D. J. Kropaczek (*) · V. Badalassi · P. K. Jain · P. Ramuhalli · W. D. Pointer Nuclear Energy and Fuel Cycle Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_31
1 Introduction

Nuclear energy, conceived in the mid-twentieth century as an economical, safe, clean, and nearly limitless means of energy production, culminated in the deployment of hundreds of commercial nuclear plants worldwide. The first viable commercial designs were based predominantly on light-water reactor (LWR) technology that today produces 10% of the world's electricity. Following a rapid expansion in the 1970s and 1980s, several factors slowed the wide-scale adoption of nuclear energy, including concerns over nuclear waste, nuclear safety, and regulatory licensing. These factors have increased the costs of licensing and constructing new nuclear power reactors and of operation and maintenance (O&M) for existing nuclear facilities. Despite the hurdles, nuclear energy has maintained a stellar track record of safety, reliability, and availability for the currently operating nuclear fleet. Many plants seek lifetime extensions ranging from 60 to 80 years and beyond. As part of this progression, technology upgrades have improved plant and nuclear reactor core performance and addressed emerging nuclear security threats. In addition, a new generation of nuclear reactors based on both LWR and non-LWR technologies is being developed and deployed. These advanced reactors offer passive safety, robust fuel performance, and reduced capital and O&M costs. Amid the climate crisis, advanced nuclear energy is a critical option for large-scale, reliable, greenhouse gas-free power for electricity generation, facility process heat, and future hydrogen production as part of a green economy.

Digital twin (DT) technology has a pivotal role in the future of nuclear energy. Advanced reactors leverage the knowledge base and experience from the past 40 years of LWR operation while adopting new features that enable new missions and lower costs. Passive safety and built-in security-by-design features in contemporary designs reduce the complexity of operations, staffing needs, and maintenance-related costs of advanced reactor technologies. Advances in nuclear fuel forms such as tri-structural isotropic (TRISO) particle fuel and accident-tolerant fuel cladding will further reduce risk through the design philosophy of defense-in-depth. Rapidly deployable small modular reactor (SMR) and microreactor designs compatible with modular construction techniques and advanced manufacturing will be the new normal, reducing the need for large capital expenditures, reducing O&M costs, and compressing construction schedules.

Since their inception, nuclear energy systems have relied on DT technologies under continuous development and deployment as part of the nuclear lifecycle. The cornerstone of DTs, computational physics models, has been used to design all operating reactors. For decades, virtual reactor simulators integrating digital reactor models with analog control room hardware have played a critical role in operator training and control room design. As with many complex technologies, such efforts have often evolved independently, without sharing expertise across them. Siloed activities that would benefit from shared DT experience include plant siting, construction, design and licensing, operations, maintenance, security, and nuclear fuel storage. Looking ahead to a future era of widespread advanced nuclear deployment in the face of global climate change, DTs can fulfill a vital role in the integrated design to
ensure the safety and cost-effectiveness needed to address the historical shortcomings of nuclear energy.
2 Nuclear Power Plant Lifecycle

The lifecycle of a nuclear plant integrates safety, licensing, security, operations, and maintenance with business management decisions to maximize the return on investment over the operational lifetime of the plant. Several key differences exist for nuclear power plants (NPPs) relative to other industrial assets. First, NPPs use fissile material (²³⁵U, ²³⁹Pu), supplied either through direct fuel loading or through breeding from fertile material, to produce energy in the form of heat. This heat is typically converted to electricity by a conventional steam supply system. The nuclear fission process produces radioactive materials as a byproduct. Radioactive materials can have beneficial uses in applications such as cancer therapy, but they are generally harmful to health. Radioactive material production and use are strictly regulated to ensure nuclear safety. Second, given the production of radioactive materials, NPPs must maintain the highest possible levels of protection for the staff and public. Worker radiation dose, especially during maintenance outages, must be held at a minimum level through careful design, planning, and monitoring. To protect the public, NPPs have multiple levels of active and passive safety systems, including concrete containment, to prevent the release of radioactive material to the environment. Finally, nuclear plants carry an ongoing liability once electricity production ceases, requiring the collection of funds over the plant's lifetime to meet a significant future obligation for spent fuel management and site decommissioning.
2.1 Safety and Risk Profile

The safety and risk profile of an NPP is an essential component of cost, as it directly affects the capital investment and the long-term maintenance of the plant. The main objective of nuclear safety is the protection of workers, the public, and the environment from undue radiation hazards through proper operating conditions and the prevention or mitigation of accidents and their consequences. Nuclear safety comprises fuel management and the design, construction, operation, and decommissioning of nuclear installations such as NPPs, fuel production facilities, and waste storage facilities. Ensuring nuclear safety requires that suitably qualified staff be available and that a strong safety culture be well established in the workforce. The organization must execute continuous process improvement to address operational and safety issues. The work of nuclear regulators covers all these aspects. According to probabilistic risk assessment studies, three categories of events are primarily responsible for the risks associated with LWRs:
1. Station blackout
2. So-called transient without scram
3. Loss of cooling due to pump failure or coolant leakage from the primary system to the containment

A station blackout is caused by the failure of the electrical line that provides power to instrumentation, controls, or electrically powered equipment. The commonly implemented emergency defense is a secondary electrical system, typically a combination of diesel generators powerful enough to drive the pumps and a battery supply sufficient to run the instruments. In a transient without scram, the assumed event is an insertion of positive reactivity, such as an undesired withdrawal of the shim rods; in this case, the protective safety system response is the rapid, automatic insertion of the safety rods. Loss of cooling results from a mechanical pump failure or a piping breach, and the emergency response is the activation of an emergency core cooling system. In all such emergency responses, proper operator action and proper functioning of the appropriate backup system are paramount.

Other reactor designs exhibit different types of risk. For example, the pool-type liquid-metal reactor (LMR) has a relatively low risk of loss of coolant because it operates near atmospheric pressure; a double-walled tank full of molten sodium contains the entire system. High-temperature gas-cooled reactors (HTGRs) have relatively low risks associated with station blackout: passive cooling for heat removal is possible due to their lower power densities and higher surface-to-volume designs. However, other risks must be considered and managed. For example, molten sodium leaking into a steam system could react exothermically with water; an intermediate heat transfer loop between the primary coolant and the steam system can address such leaks. In HTGRs, air ingress can disrupt natural convection in the helium coolant. Hence, HTGR designs typically limit the use of piping that creates opportunities for air ingress; instead, they rely on cross-vessel connections between the primary and secondary systems.
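To make the role of probabilistic risk assessment concrete, the minimal Python sketch below combines assumed initiating-event frequencies with assumed mitigation failure probabilities to estimate a core damage frequency. All values and the simple rare-event logic are illustrative only and are not taken from any plant study.

# Minimal probabilistic risk assessment (PRA) sketch.
# Each initiating event has an assumed annual frequency; core damage occurs
# only if the corresponding mitigation systems also fail (illustrative values).
initiating_events = {
    # event: (frequency per reactor-year, probability that mitigation fails)
    "station_blackout":        (1.0e-2, 1.0e-3),  # diesel generators and batteries fail
    "transient_without_scram": (1.0e-1, 1.0e-5),  # safety rods fail to insert
    "loss_of_cooling":         (3.0e-2, 5.0e-4),  # emergency core cooling fails
}

def core_damage_frequency(events):
    """Sum the contributions of independent accident sequences (rare-event approximation)."""
    return sum(freq * p_fail for freq, p_fail in events.values())

for name, (freq, p_fail) in initiating_events.items():
    print(f"{name:<24s} contributes {freq * p_fail:.2e} per reactor-year")
print(f"total core damage frequency ~ {core_damage_frequency(initiating_events):.2e} per reactor-year")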
2.2 Safety Management

Safety is an essential component of nuclear lifecycle management. The operator must ensure that the facility can operate within the established safety limits and must have plans and procedures to ensure the durability of plant systems and structures over decades of operation. Safety management refers to the processes required to satisfy all operational and safety limits, prevent or mitigate anomalous plant events, and maintain a strong safety culture to protect workers and the public. A reactor is managed safely through preventive measures encapsulated in design and operating rules. In an NPP that follows the defense-in-depth model, all safety systems must be functionally independent, inherently redundant, and diverse in design. Staff follow high quality assurance standards that require strict adherence to procedures and maintenance processes for all nuclear reactor operations.
Mitigating measures, also referred to as safety systems, are systems and structures that prevent an accident that does occur from proceeding to a catastrophic outcome. Several key safety system components include:
1. The shutdown control system for quickly putting the reactor into a subcritical state
2. The emergency core cooling system that provides sufficient cooling of the core and fuel region within the vessel upon loss of reactor coolant
3. The containment structure that prevents radioactive materials from being released into the atmosphere when other systems have substantial failures

Emergency systems provide power to critical components to remove nuclear decay heat in the event of a disruption of the normal electricity supply. Finally, an extreme mitigating measure is the evacuation of personnel to avoid exposure to high radiation in a reactor installation.

An essential part of a safety system is strict adherence to design requirements. The reactor must have a negative power-reactivity coefficient, which means that the power naturally decreases in response to increases in fuel temperature that threaten safety. The safety rods must be insertable under all circumstances, and the uncontrolled movement of any single regulating rod should not rapidly add substantial reactivity. Furthermore, the structural materials used in the reactor must retain good physical properties over their expected service lives. Design and construction must follow stringent quality assurance rules according to the standards set by major engineering societies and accepted by regulatory bodies.

No human activity involving complex technology is free from harm, and measures taken in nuclear safety cannot reduce the risks to zero. However, regulations and safety systems have protected the public and reduced risks to a minimum level deemed acceptable. Whether the nuclear industry has achieved this de minimis risk value is a subject of vigorous controversy. Nevertheless, independent regulatory agencies such as the US Nuclear Regulatory Commission (NRC), the International Atomic Energy Agency (IAEA), and similar worldwide agencies have judged nuclear power safe, and they will continue to do so into the future.
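The self-limiting behavior provided by a negative power-reactivity coefficient can be illustrated with a minimal point-kinetics sketch in Python. The parameter values are generic illustrative numbers, not data from this chapter or any specific reactor, and the model lumps all delayed-neutron precursors into a single group.

# Minimal sketch: one-group point kinetics with a negative fuel-temperature
# (Doppler) reactivity coefficient. All values are illustrative assumptions.
beta = 0.0065      # delayed-neutron fraction
gen_time = 2.0e-5  # prompt-neutron generation time [s]
lam = 0.08         # effective precursor decay constant [1/s]
alpha_T = -2.5e-5  # fuel-temperature reactivity coefficient [dk/k per K], negative
heat_cap = 50.0    # lumped fuel heat capacity [MJ/K]
cool = 0.1         # heat-removal rate constant [1/s]
P0 = 3000.0        # nominal thermal power [MW]

def respond(rho_insert=0.002, dt=1.0e-3, t_end=60.0):
    """Power response to a step reactivity insertion with Doppler feedback."""
    n = 1.0                              # relative power
    c = beta / (gen_time * lam)          # precursor concentration at steady state
    dT = 0.0                             # fuel temperature rise [K]
    for _ in range(int(t_end / dt)):
        rho = rho_insert + alpha_T * dT  # net reactivity shrinks as the fuel heats up
        dn = ((rho - beta) / gen_time) * n + lam * c
        dc = (beta / gen_time) * n - lam * c
        ddT = P0 * (n - 1.0) / heat_cap - cool * dT
        n, c, dT = n + dn * dt, c + dc * dt, dT + ddT * dt
    return n, dT

n_end, dT_end = respond()
# With alpha_T < 0 the excursion is self-limiting: the fuel heats until the
# negative feedback cancels the inserted reactivity instead of diverging.
print(f"power settles near {n_end:.2f} of nominal; fuel temperature rise ~ {dT_end:.0f} K")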
2.3 Aging Management

Aging management refers to the actions taken to maintain the plant in a state of high reliability, with consistent performance of all safety systems, structures, and components necessary for plant safety and operations. Inspections, investigations into the mechanisms of material degradation, and proactive actions to mitigate the effects of aging are all critical aspects of aging management. Nuclear plants are affected by the traditional mechanisms of component degradation, such as thermal cycling; in nuclear plants, however, the analysis of component aging must also account for the acceleration caused by high radiation fields. Examples include embrittlement, material creep, and iodine-induced fuel clad stress corrosion cracking.
Safety and aging management are critical components of successful asset management of the NPP over its lifetime. Asset management is integral to sustaining the long-term availability and reliability of the nuclear plant. The decision to build an NPP considers the energy demand forecast over at least six decades and the need for baseload power vs. load-follow operation. The projected period and amount of electricity generation determine the revenue required to cover the capital costs from conceptual design through decommissioning. The choice of technology depends on the energy plan and on financial factors such as overnight capital costs, O&M costs, and fuel cycle costs. Plant operation requires a guaranteed fuel supply and plans for spent fuel storage and final disposal. The current operating fleet of LWRs, which represents the most mature NPP technology, provides a baseline for calculating the financial risk for new NPPs. Although non-LWR advanced reactors have not yet seen wide-scale commercial deployment, they offer substantial advantages in passive safety and modular construction, and their simplified designs will reduce overnight and O&M costs. Advanced reactors with higher operating temperatures and associated energy conversion efficiencies are well suited for dual-use applications such as industrial processes requiring process heat and hydrogen production. Future NPP deployments should consider these factors as part of their risk analysis.
3 Defining the Digital Twin for Nuclear Energy Systems

The nuclear digital twin is the virtual representation of a nuclear energy system across its lifecycle. The copious data available for an NPP provide a wealth of information to enable DT technology. The DT uses real-time information and other data sources to improve the processes of design, licensing, construction, security, O&M, decommissioning, and waste disposal. DT technology significantly reduces the costs associated with nuclear power deployment by informing critical decisions regarding asset management. DT technology can be used in the following phases of the nuclear lifecycle:
• The decision to build an NPP, the establishment of requirements, and the choice of the type of plant
• Site selection
• Design, licensing, construction, and commissioning
• Operation, including periodic safety reassessment up to the end of the lifetime
• Decommissioning and dismantling
• Spent fuel management and long-term waste storage

Data-informed decisions are made at each stage of the nuclear lifecycle and rely on rigorously controlled documentation maintained throughout the installation's life. Significant data include:
• Original design data and calculations
• Reactor startup physics testing
• Operational data (sensors and signal history)
• Commissioning test results
• Component inspection history
Figure 1 ([56], p. 4) displays the embodiment of a DT system for an NPP comprising the physical systems, structures, and components and the DT counterparts of the virtual space. DT systems provide actions and recommendations based on the differences between actual data from the physical space and virtual data from the virtual representation of the system. The physical space includes all nuclear assets, computing systems, and instrumentation and control (I&C) infrastructure. In contrast, the virtual space consists of those aspects of DT technology familiar to those in the energy industry, including multiphysics models, machine learning and artificial intelligence, and data analytics.

Fig. 1 Digital twin conceptual design for a nuclear power plant [57]

The following capabilities may characterize the DT:
• Virtual model and simulator for plant systems and subsystems, including the reactor core and fuel: models may range from simplified block diagrams to high-fidelity 3D representations derived from as-built data
• Integration of the virtual plant model into the workflow of plant design, construction, operations, decommissioning, and waste disposal processes
• Data acquisition systems to incorporate a wide range of sensors and measurement systems across the plant to provide data on the site security boundary, containment, nuclear steam supply system (thermocouples, pressure, flow), reactor core (neutron/gamma flux), component lifetime, maintenance schedule, and worker dose
• Data management framework for receiving and mapping sensor data onto the virtual simulator and updating model parameters to reflect the real-time state of the plant
• Recalibration of the virtual plant simulator based on measured data to improve predicted plant parameters of interest
• Virtual simulation to monitor reactor systems, structures, and components, including the fuel and core, to assure licensing technical limits
• Virtual simulation to make future projections regarding reactor behavior under various scenarios
• Virtual simulation in conjunction with machine learning as part of the reactor control system (human or autonomous)
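The recalibration capability listed above can be sketched as a simple parameter-update loop: the virtual model's prediction is compared with a sensor reading, and a model parameter is nudged until the residual shrinks. The toy heat-balance model, the fouling parameter, and all values below are invented for illustration and do not describe any actual DT framework named in this chapter.

# Minimal sketch: recalibrating a virtual-model parameter from streaming sensor data.
def predict_outlet_temp(inlet_temp, power_mw, fouling):
    """Toy plant model: outlet temperature rise scales with power, degraded by fouling."""
    return inlet_temp + 0.01 * power_mw * (1.0 - fouling)

def recalibrate(fouling, inlet_temp, power_mw, measured_outlet, gain=0.5):
    """Damped Newton update that shrinks the model-vs-measurement residual."""
    residual = measured_outlet - predict_outlet_temp(inlet_temp, power_mw, fouling)
    sensitivity = -0.01 * power_mw          # d(prediction)/d(fouling) for this toy model
    return fouling + gain * residual / sensitivity

true_fouling, fouling_est = 0.12, 0.0       # "real plant" value vs. model's starting guess
for _ in range(20):
    inlet, power = 290.0, 3000.0            # degC, MW (illustrative operating point)
    measured = predict_outlet_temp(inlet, power, true_fouling)   # stand-in for a sensor reading
    fouling_est = recalibrate(fouling_est, inlet, power, measured)
print(f"estimated fouling factor after calibration: {fouling_est:.3f} (true 0.120)")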
The goal of the virtual simulator at the heart of the DT is to perform high-fidelity predictive simulation within an integrated, usable, extensible software system. Such virtual reactor models achieve their accuracy through physics-based modeling of critical phenomena, high geometrical resolution, and multiphysics simulation of the reactor core and safety systems. For nuclear reactor simulation, the quantities of interest include safety parameters such as local power and fuel temperature, operational parameters such as power response to load change, and component behavior important to lifetime analysis. Physics-based modeling across length scales reduces reliance on correlations based solely on experimental data. Integrated and coupled multiphysics modeling includes all phenomena of interest to reactor physics. The virtual model also utilizes real-time sensor data to perform model calibration. The resultant adapted models provide real-time, high-resolution, measured distributions for the reactor core and fuel that serve as the basis of reactor core monitoring and control systems.

The Virtual Environment for Reactor Applications (VERA), developed by the Consortium for Advanced Simulation of LWRs (CASL) [32], represents the cutting edge in nuclear reactor modeling and simulation. VERA can solve a variety of reactor performance challenges by modeling multiphysics phenomena. Within a single environment, VERA combines the relevant physics of nuclear reactors—including neutron and gamma radiation transport, thermal-hydraulics, fuel performance, and chemistry—to model complex reactor behavior (Fig. 2). VERA integrates physics components based on science-based models, state-of-the-art numerical methods, and modern computational science. It is verified and validated using operating reactors, single-effects experiments, and integral-effects tests. VERA's high-fidelity, high-resolution coupled solutions accurately represent the reactor's behavior and feedback mechanisms, and they are being used to advance understanding beyond existing industry knowledge.

Fig. 2 VERA code suite with multiphysics integration [32]

Numerous analyses have been performed in design, operations, licensing, and component lifetime analysis using existing VERA capabilities developed through CASL. These applications and their capabilities represent future DT technology for the current and future nuclear operating fleet. The industry applications of VERA [44] for Westinghouse pressurized water reactors (PWRs) include:
1. AP1000® control rod ejection accident
2. Evaluation of a steam line break event
3. Critical heat flux margin improvement
4. Computational fluid dynamics (CFD)–based thermal-hydraulic applications
5. Fuel rod performance analysis, including crud deposition

Software component models must be well validated against single- and integral-effects tests that facilitate the quantification of uncertainties in model parameters and input data essential to nuclear regulatory licensing. VERA is used as a validation tool for previous operating cycles and as a reference solution for making highly accurate blind predictions of future reactor operations. VERA was used to validate the recent nuclear reactor startups for Watts Bar 2, which achieved initial criticality in May 2016, and the Sanmen and Haiyang Nuclear Power Stations in China. Sanmen and Haiyang are the first implementations of the advanced AP1000® reactor developed by Westinghouse (4 units); they achieved initial criticality from June 2018 through January 2019. Table 1 [32] shows the plants, operating cycles, reactor types, and fuel types considered in VERA validation. This list represents nearly the full spectrum of PWRs and operating fuel designs within the US nuclear fleet, as well as advanced LWR designs like the NuScale SMR. The list represents reactors of different sizes, power densities, cycle energy productions, fuel products, burnable absorbers, and loading pattern design strategies. Each plant on the list has unique requirements based on energy production, load-follow or coast down, maintenance and fueling outage schedule, and fuel product transitions, which may impose additional constraints on thermal operating margins.
Table 1 VERA fleet validation

No. | Plant | Cycles | Reactor and fuel type
1 | AP1000® | 1–5 | Westinghouse Gen III+ 17 × 17 XL
2 | Byron 1 | 17–21 | Westinghouse 4-loop 17 × 17
3 | Callaway | 1–12 | Westinghouse 4-loop 17 × 17
4 | Catawba 1 | 1–9 | Westinghouse 4-loop 17 × 17
5 | Catawba 2 | 8–22 | Westinghouse 4-loop 17 × 17
6 | Davis-Besse | 12–15 | Babcock & Wilcox 15 × 15
7 | Farley | 23–27 | Westinghouse 3-loop 17 × 17
8 | Haiyang | 1 | Westinghouse Gen III+ 17 × 17 XL
9 | Krsko | 1–3, 24–28 | Westinghouse 2-loop 16 × 16
10 | NuScale | 1–8 | Small modular reactor
11 | Oconee 3 | 25–30 | Babcock & Wilcox 15 × 15
12 | Palo Verde 2 | 1–16 | Combustion Engineering System 80 16 × 16
13 | Sanmen | 1 | Westinghouse Gen III+ 17 × 17 XL
14 | Seabrook | 1–5 | Westinghouse 4-loop 17 × 17
15 | Shearon Harris | Generic | Westinghouse 3-loop 17 × 17
16 | South Texas 2 | 1–8 | Westinghouse 4-loop 17 × 17 XL
17 | Three Mile Island | 1–10 | Babcock & Wilcox 15 × 15
18 | V.C. Summer | 17–24 | Westinghouse 3-loop 17 × 17
19 | Vogtle 1 | 9–15 | Westinghouse 4-loop 17 × 17
20 | Watts Bar 1 | 1–18 | Westinghouse 4-loop 17 × 17
21 | Watts Bar 2 | 1–2 | Westinghouse 4-loop 17 × 17

Kropaczek [32]
In addition, the nuclear plant licensee must maintain DT software for nuclear applications under a formal quality assurance (QA) program such as NQA-1 [4]. These programs define the QA requirements for the software lifecycle, addressing functional needs, software testing and control, independent review, configuration management, and error reporting and disposition. Code and solution verification, performed as an integral part of the QA process, ensures that the physics models are accurate.
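Solution verification of the kind required by such QA programs is commonly demonstrated with a grid-convergence study. The short Python sketch below computes an observed order of accuracy by Richardson extrapolation from three successively refined solutions of a toy problem; the problem and numbers are illustrative assumptions, not VERA QA artifacts.

import math

def solve(n):
    """Toy 'code': trapezoid-rule integral of sin(x) on [0, pi] with n intervals."""
    h = math.pi / n
    return h * (0.5 * math.sin(0.0) + 0.5 * math.sin(math.pi)
                + sum(math.sin(i * h) for i in range(1, n)))

r = 2                                             # constant grid refinement factor
f_coarse, f_medium, f_fine = solve(20), solve(40), solve(80)

# Observed order of accuracy from Richardson extrapolation:
# p = ln[(f_coarse - f_medium) / (f_medium - f_fine)] / ln(r)
p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
f_extrapolated = f_fine + (f_fine - f_medium) / (r**p - 1)

print(f"observed order of accuracy ~ {p:.2f} (trapezoid rule is formally 2nd order)")
print(f"extrapolated solution ~ {f_extrapolated:.6f} (exact integral is 2)")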
4 Digital Twins Supporting Design, Licensing, and Construction

DTs have a crucial role to play in addressing the unique requirements of nuclear energy systems, including opportunities in the following areas:
• Design, licensing, siting, construction, commissioning, and decommissioning of NPPs
• Design, operation, storage, and disposal of the nuclear reactor core and fuel according to refueling cycles, typically ranging from 1 to 2 years (each cycle requires a unique reactor core and fuel design to meet energy production targets and licensing limits)
• Online simulation using a wide range of sensor and measurement systems for plant condition monitoring and forward-state prediction for power maneuvering and predictive maintenance
• Operator training simulators based on virtual plant models to ensure knowledge of, and response to, anticipated operational and accident scenarios
• Nuclear materials tracking for nuclear physical security to ensure nonproliferation, and criticality control to eliminate unsafe nuclear fuel handling and storage configurations
• 3D radiation dose mapping as it relates to operation, refueling, and maintenance
• Cyber-physical security to address unauthorized access to plant systems
• Scoping studies, optimization, and cost analysis of new innovative reactors
4.1 Design and Licensing of a Nuclear Power Plant

The design and licensing of an NPP are based on a comprehensive safety evaluation to protect the public, workers, and the environment. Virtual simulation of the integrated plant systems, reactor core, and fuel is a cornerstone of the plant safety case, which relies on a risk-informed analysis of hypothetical accident scenarios to mitigate radiological dose consequences. This process includes site selection (geology, hydrology, and seismic risk) related to potential environmental impacts. NPP design is an iterative process that depends on the site characteristics, the plant safety case, component manufacturability, and component lifetime analysis. These factors depend on the anticipated operating history and projected radiation dose of the reactor materials. Definition of the complex I&C systems is an essential part of the design and will determine the control aspects of reactor operation for normal functioning, incident, and accident management. DTs with virtual simulation can provide detailed cradle-to-grave analysis of all plant components, ranging from verification of construction with inspections, tests, studies, and acceptance criteria, through lifetime irradiation analysis, to final decommissioning analysis. The decommissioning plan is established at the outset and maintained throughout the nuclear plant's lifecycle. It includes dismantling and waste disposal activities, focusing on protecting the public, the workers, and the environment from radioactive release and exposure.
4.2 The Decision to Build a Nuclear Power Plant and Determination of Plant Type

The decision to build an NPP and the determination of plant type include evaluating several figures of merit—siting, economics, intended use, and the grid, for example. Most of these figures of merit can be assessed with modeling and simulation, that is, with DTs. DTs have been used to investigate nuclear reactors in hybrid integrated energy systems (IESs) to reduce reliance on fossil fuels and carbon dioxide emissions. Hybrid systems combine nuclear reactors with other energy producers and consumers: intermittent renewable power sources (solar and wind) and industrial systems for which heat and electricity are critical. IESs are "cooperatively-controlled systems that dynamically apportion thermal and electrical energy to provide responsive generation to the power grid" [6, p. 1]; these systems create financially and technically synergistic partnerships among various energy producers and consumers. IESs designed for specific markets and applications require different subsystems and integration options. Companies assess IES deployment scenarios against technical and economic performance. Three general categories are considered for IES architectures, as identified in the US Department of Energy IES Program 2020 Roadmap [6]:
• Tightly coupled IESs: thermally and electrically coupled and co-controlled systems
• Thermally coupled IESs: subsystems connected only via thermal energy networks to manage thermal demands, with companion grid connections for electrical needs
• Loosely coupled electricity-only IESs: subsystems coupled only via electrical energy networks and co-controlled to manage electricity demands

The use of DTs is essential in each of the three IES categories. An example is the recently performed feasibility study for a nuclear IES to meet the steam and electricity needs of an existing chemical production facility in Kingsport, Tennessee [21]. This study explored reactor technology options, evaluation, and optimization, focusing on meeting the facility's operational and reliability requirements. This work is part of an ongoing effort for good environmental stewardship that meets customers' demands by shifting to more environmentally sustainable solutions to fulfill energy needs. For the established Kingsport site, the nuclear reactor is the primary aspect of the IES that will integrate with its network of steam and electricity. In the analysis, the nuclear reactor would connect to the existing 600 PSIG steam line, leveraging the remainder of the steam network as it currently exists, along with as much of the current electricity network as possible. Therefore, the primary modeling activity for this effort is to evaluate nuclear reactor technologies to be incorporated into the IES. The criteria used to downselect nuclear reactor options for the Kingsport site include:
1. Reactor capacity and operability
2. Steam temperature and pressure
3. Technology readiness
4. Siting allowance
5. Dynamic system response of the IES with candidate NPP technologies

DT-based modeling has proved fundamental in determining the dynamic system response of the IES for candidate future nuclear technologies, including fluoride salt-cooled high-temperature reactors (FHRs), HTGRs, LMRs, and advanced light-water SMRs. Oak Ridge National Laboratory developed models of the Kingsport plant
using the Transient Simulation Framework of Reconfigurable Modules (TRANSFORM) [20] to assess each candidate nuclear technology. The site model drew on 6 years of historical operational data for the Kingsport plant. Figures of merit included net electrical cogeneration and the required process heating. Figure 3 displays the TRANSFORM model of the site's 1500 PSIG and 600 PSIG steam systems.

Fig. 3 The steam system model used for the nuclear technology simulations [21]

The model shown in Fig. 3 was used to develop nuclear steam models for each reactor technology. The DT offers several component process blocks. The mechanical energy extracted from the two turbines is summed and compared to the electrical demand from the demands block using the netElectric block. The power from the turbines can be adjusted to account for efficiency and the nonelectrical use of steam by tuning the gains on the netElectric block. In this study, 100% of the 1500 PSIG steam and 40% of the 600 PSIG steam are used for electrical production (shown as gains on the inputs to the netElectric block). A natural gas heater, NGheater, was added to supply additional thermal energy to meet thermal demand.

Figure 4 shows the resulting electric demand and supply for derivative cases considering each of the four nuclear technologies investigated. The blue lines in the figure indicate electrical demand. Green lines indicate the mechanical power produced by the intermediate-pressure turbine (turbineIP), while red lines indicate the high-pressure turbine (turbineHP). The black line indicates the additional heat required to maintain the 600 PSIG system's temperature. Note that the controllers ensure that steam supply and demand match as closely as possible. The net electricity generation is largest for the FHR and negative for the LWR, with the HTGR and LMR showing comparable performance. Negative net electricity generation requires the purchase of additional electricity from the grid or other onsite generation.
Fig. 4 Electric demand and supply curves for the four candidate nuclear technologies: (a) FHR, (b) HTGR, (c) LMR, (d) LWR [21]

Table 2 Summary of candidate nuclear technology simulations compared to the current Eastman simulation

Technology | Average natural gas makeup heating (MW) | Maximum absolute steam supply ramp rate (W/s) | Average net electricity generation (MW)
Existing facilities | N/A | 518 | −20.9
FHR | 13.9 | 1287 | 87.3
HTGR | 42.9 | 921 | 36.0
LMR | 113.0 | 702 | 5.4
LWR | 165.0 | 290 | −60.7

Greenwood et al. [21]
High-temperature reactor technologies can meet or exceed the electrical demand of the Kingsport site, although some fossil fuel heating is required to maintain steam conditions. The fossil heating requirement varies inversely with the temperature of the nuclear steam; consequently, the LWR application requires a significant amount of fossil heat. Table 2 presents the study's critical parameters, including the additional natural gas requirement, the maximum steam supply ramp rate, and the average net electricity generation for each technology evaluated. Of the options considered, the FHR is the most attractive for the Kingsport site, with the lowest average gas makeup heating and the highest average net electricity generation. Each six-year simulation takes fifteen seconds to run on a standard desktop computer.
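The apportionment logic described above, with turbine outputs summed against the electrical demand and a natural gas heater covering any thermal shortfall, can be sketched as a simple energy balance in Python. The fractions, efficiency, and demand values are assumptions for illustration and do not reproduce the TRANSFORM model.

def site_energy_balance(hp_steam_mw, ip_steam_mw, electric_demand_mw, thermal_demand_mw,
                        hp_to_electric=1.0, ip_to_electric=0.4, turbine_eff=0.85):
    """Toy steam/electricity apportionment for a nuclear integrated energy system.

    hp_steam_mw / ip_steam_mw: thermal power delivered to the high- and
    intermediate-pressure headers; the *_to_electric gains mimic the role of the
    netElectric block described above (values are illustrative assumptions).
    """
    electric_supply = turbine_eff * (hp_to_electric * hp_steam_mw
                                     + ip_to_electric * ip_steam_mw)
    net_electric = electric_supply - electric_demand_mw
    # Steam not sent to the turbines serves process heat; a gas heater covers any gap.
    process_heat = (1.0 - hp_to_electric) * hp_steam_mw + (1.0 - ip_to_electric) * ip_steam_mw
    ng_makeup_heat = max(0.0, thermal_demand_mw - process_heat)
    return net_electric, ng_makeup_heat

# Example: a hypothetical 300 MWt reactor split evenly between the two steam headers
net_mw, ng_mw = site_energy_balance(hp_steam_mw=150.0, ip_steam_mw=150.0,
                                    electric_demand_mw=90.0, thermal_demand_mw=120.0)
print(f"net electricity: {net_mw:+.1f} MW, natural gas makeup heat: {ng_mw:.1f} MW")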
4.3 Integration of Design, Construction, and Commissioning

The cost of recent first-of-a-kind (FOAK) nuclear reactor deployment projects such as the European pressurized water reactor and the AP1000® has been skyrocketing because of schedule slippage and construction times now exceeding 15 years. Civil works, site preparation, installation, and indirect costs, including engineering oversight and owner's expenses, dominate the overnight costs. Financing costs are a function of schedule and discount rates. Schedule delays drive cost: for every year of unplanned delay in construction, the levelized cost of electricity rises by approximately 8–10%. DTs can mitigate such problems and have proven to contribute to fast design and construction times for several nuclear reactors.

Figure 5 shows how Hitachi-GE Nuclear Energy (HGNE) construction-related technologies evolved during the construction history of boiling water reactors in Japan [28]. Continuous expansion of DT technology has resulted in associated decreases in construction times. The world record for the quickest build of a large fission reactor—from first concrete to first fuel load—is 38 months, set by the HGNE Advanced Boiling Water Reactor (ABWR): Kashiwazaki-Kariwa Unit 6 and its sister units, Kashiwazaki-Kariwa Unit 7, Hamaoka Unit 5, and Shika 2, achieved this record between 1996 and 2006. A DT called the Plant Integrated Computer-Aided Engineering (CAE) System ([29, 47], pp. 252–254) facilitated this achievement by delivering a design that was more than 90% completed before construction began, an uncommon occurrence in such large projects. The system also enhanced coordination among designers, construction workers, and suppliers. As shown in Fig. 6, the Plant Integrated CAE System resulted from decades of experience in designing, licensing, building, and operating nuclear reactors in Japan and is constantly refined.

Fig. 5 Evolution of nuclear plant construction technologies [28]
Fig. 6 Plant integrated CAE system [47]
Rapid improvement in computer performance has led to the growth of the HGNE system: currently, hundreds of computer systems are integrated throughout HGNE, connecting the design office, the factory, the construction sites, sub-vendor offices, and customer offices. The 3D plant design work, including piping layout, utilizes upstream design information such as the plot plan and system design data. After careful review, engineers add the fabrication and installation information (e.g., shop and field weld data) to the 3D plant design data to perform fabrication design and to produce fabrication drawings, procurement data, and input data for computer-controlled fabrication machines. The 3D design data and the fabrication data are then made available for construction planning activities such as designing temporary facilities, planning the construction schedule, planning the detailed construction sequence, and reviewing the moving range of construction cranes.

Computer-aided design (CAD) systems have been available since the 1960s and have become a cornerstone of engineering design in nearly every field. Designers use the Plant Integrated CAE system, built on the foundation of the 3D-CAD system shown in Fig. 7, to optimize equipment arrangement within the structure and limitations of each building's layout. Factors critical to the design include safety, operability, maintainability, and constructability. The 3D database stores the detailed information created using the design system for manufacturing and fabrication, including material specifications, tolerances, welding procedures, and inspection requirements. This information is used later for material and parts procurement and for pipe fabrication drawings in the downstream design process. The 3D-CAD database approach improved cooperation between engineering, manufacturing, and construction specialists by facilitating information sharing through the groupware system. It also enhanced data organization through the document management system.
Fig. 7 Plant 3D-CAD system architecture [47]
The walk-through function enables engineers to visually check the plant layout by creating a virtual plant model that allows first-person navigation. This graphical representation aids the engineer in reviewing plant operability and maintainability. For example, any interferences or restrictions in the space required to install or replace a piece of equipment, or in the room needed to operate a piece of equipment efficiently, can be reviewed and redesigned if needed.
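At their simplest, the interference reviews described above are geometric clash checks between component envelopes. The Python sketch below tests axis-aligned bounding boxes for overlap, including an assumed maintenance clearance margin; the component names and dimensions are invented and are not taken from the HGNE system.

from itertools import combinations

# Each component is an axis-aligned bounding box:
# (name, (xmin, ymin, zmin), (xmax, ymax, zmax)), dimensions in meters (illustrative).
components = [
    ("heat_exchanger_HX1", (0.00, 0.0, 0.0), (2.00, 1.2, 2.5)),
    ("pipe_run_12",        (2.05, 0.3, 1.0), (6.00, 0.6, 1.3)),   # within 10 cm of HX1
    ("cable_tray_3",       (0.00, 3.0, 2.8), (6.00, 3.3, 3.0)),
]

CLEARANCE = 0.10  # assumed maintenance clearance required around every component [m]

def clashes(box_a, box_b, margin=CLEARANCE):
    """True if the two boxes overlap after each is grown by the clearance margin."""
    (_, amin, amax), (_, bmin, bmax) = box_a, box_b
    return all(amin[i] - margin <= bmax[i] + margin and
               bmin[i] - margin <= amax[i] + margin
               for i in range(3))

found = False
for box_a, box_b in combinations(components, 2):
    if clashes(box_a, box_b):
        found = True
        print(f"clearance clash: {box_a[0]} <-> {box_b[0]}")
if not found:
    print("no clearance clashes found")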
4.4 Application to Construction Planning and Management

In the construction of the FOAK ABWRs, HGNE's construction management system helped construction engineers in the office and supervisors in the field create all the interrelated plans necessary to complete a large project, including the detailed schedule, the carry-in plan, the installation sequence, the temporary facility plan, the yard plan, the crane and lifting device plan, and so on. Precise, accurate site schedules can minimize construction time and maximize efficiency, allowing for just-in-time delivery of drawings and products. However, the plans must also maintain the flexibility required to deal with unexpected events. More generally, a construction management system helps personnel control schedules, resources, documents, and other necessary information. The primary purposes of the construction management system are:
1. Effective use of the product data created within the design management system in installation and inspection work
2. Efficient control of product and parts delivery
3. Rapid response to any sudden changes
4. Real-time support for construction management
5. Quick feedback to the design and manufacturing specialists on construction status and changes

The construction management system uses product data such as equipment lists, valve lists, pipe spool lists, welding numbers from pipe fabrication drawings, pipe support numbers, inspection requirements, schedules, and so on. The system creates a work instruction sheet for each job. When a work team finishes the job, the results are collected and stored. Managers follow these processes in real time, allowing them to monitor the ongoing construction status and make adjustments if needed.
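A work instruction sheet of the kind described above is essentially a record that ties product data to schedule and inspection status. The minimal Python data model below is an illustrative sketch; the field names and example values are invented and do not describe HGNE's actual system.

from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class WorkInstruction:
    """One job on the construction schedule, linked to design/fabrication product data."""
    job_id: str
    description: str
    pipe_spools: List[str]               # spool numbers from the fabrication drawings
    weld_numbers: List[str]              # field welds to be completed and inspected
    planned_start: date
    planned_finish: date
    inspection_required: bool = True
    actual_finish: Optional[date] = None
    inspection_passed: Optional[bool] = None

    def is_late(self, today: date) -> bool:
        """Late if unfinished past the planned finish, or finished after it."""
        finish = self.actual_finish or today
        return finish > self.planned_finish

# Example usage with invented data
job = WorkInstruction(
    job_id="RB-117", description="Install residual heat removal pipe run",
    pipe_spools=["SP-4401", "SP-4402"], weld_numbers=["FW-881", "FW-882"],
    planned_start=date(2024, 5, 6), planned_finish=date(2024, 5, 17),
)
job.actual_finish, job.inspection_passed = date(2024, 5, 20), True
print(f"{job.job_id} late: {job.is_late(date(2024, 6, 1))}, inspection passed: {job.inspection_passed}")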
4.5 DT Applications to Advanced Non-LWR Reactor Design

Kairos Power is developing and deploying DTs to design an FHR that uses TRISO fuel in pebble form with a low-pressure fluoride salt coolant. The Kairos approach to plant design, shown in Fig. 8, is based on rapid prototyping of critical plant systems and subsystems [55, p. 297]. Through rapid design iteration and prototype scaling, Kairos transitions quickly from conceptual design to detailed design to reduce uncertainty in the final plant systems configuration. The Kairos Power approach to rapid design prototyping shows a clear path to commercial reactor deployment based on systems testing, a test reactor, and demonstration units. A DT that integrates virtual simulation with the available test data throughout the lifetime of design prototypes provides strong evidence to support the safety case and can significantly accelerate regulatory review. In addition, continued data collection once the plant starts operating will assure future regulatory compliance. DTs leveraged as part of new nuclear technology can simplify and accelerate licensing, resulting in reduced costs and schedules for future advanced reactor deployments.

Additive manufacturing is a natural fit for rapid prototyping and design iteration of nuclear components, and it has strong potential to help reduce costs for new nuclear builds.
Fig. 8 The Kairos Power approach to rapid design prototyping [56]
The Transformational Challenge Reactor (TCR) program [25, 48] pioneered the 3D printing of parts and in situ monitoring of part quality during the manufacturing process. Nuclear components must adhere to strict quality standards of manufacturing control; manufacturing built upon DT technology produces pieces that are "born qualified." The DT platform integrates design, modeling, in situ manufacturing and ex situ testing data streams, and data from post-manufacture inspection and the necessary component tests. The TCR program integrated additive manufacturing technologies with materials and computational sciences within a DT framework (Fig. 9). The framework coupled data analytics with real-time data collection during the 3D printing process, enabling in situ monitoring and active part correction. Data are collected, structured, and analyzed at each stage of the manufacturing workflow to ensure that 3D-printed components comply with specifications. Environmental variables are adjusted just in time using machine learning during manufacturing. The result is an additive manufacturing process for nuclear components that produces qualified parts on a compressed schedule and at reduced cost.
Fig. 9 Additive manufacturing for nuclear components showing the interfaces between nuclear engineering, manufacturing science, materials science, and computational science
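The in-situ monitoring and correction loop can be illustrated with a small sketch: a rolling-statistics check on a simulated melt-pool sensor signal that triggers a parameter correction when the signal drifts out of band. The sensor, thresholds, and correction rule are illustrative assumptions, not details of the TCR data platform.

from collections import deque
from statistics import mean, stdev

def monitor_layer(signal, window=20, z_limit=3.0, laser_power=280.0):
    """Flag melt-pool intensity outliers layer by layer and nudge the laser power.

    signal: per-layer melt-pool photodiode readings (arbitrary units, simulated).
    Returns the adjusted laser power and the indices of flagged layers.
    """
    history = deque(maxlen=window)
    flagged = []
    for layer, value in enumerate(signal):
        if len(history) >= window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > z_limit * sigma:
                flagged.append(layer)
                # Simple corrective action: step the power toward nominal by 1%
                laser_power *= 0.99 if value > mu else 1.01
        history.append(value)
    return laser_power, flagged

# Simulated signal: stable around 100 with a hot spike at layer 60
readings = [100 + 0.5 * ((i * 7) % 5 - 2) for i in range(120)]
readings[60] = 130.0
power, bad_layers = monitor_layer(readings)
print(f"flagged layers: {bad_layers}, adjusted laser power: {power:.1f} W")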
5 Digital Twins Supporting Operations and Maintenance

Today's operating nuclear fleet has benefited from digital physics-based models and data analytics applied to operational data with evolving degrees of sophistication throughout NPP lifecycles. These tools have understandably focused on the reactor core and fuel. Their evolution has substantially informed increases in rated power and reductions in outage time, effectively adding the equivalent of many additional power plants to electric grids around the globe. Building on these early successes, DT research, development, and deployment activities have accelerated in recent years to improve safety and aging management decision-making and to reduce overall costs. This work encompasses a wide range of activities relevant to the improvement of reactor maintenance and operations, including refueling management, online plant condition monitoring, diagnostics and prognosis for risk-based maintenance, operator training, dose and radiation field mapping, nuclear materials tracking and criticality control, cyber-physical security, and regulatory approvals. Significant effort in optimizing crew deployment and reducing onsite staff requirements has resulted in substantial cost savings for nuclear plant operations. Event databases capture the wealth of operational experience to inform decision-making with an eye towards risk-based maintenance. Furthermore, the introduction of new sensor and instrumentation technology enables real-time resolution of plant and component status and reduces dependence on post-operation destructive evaluations.
5.1 Design and Operation of the Nuclear Reactor Core and Fuel

The core and fuel design determines fuel utilization, plant energy production capability, and options for flexible operations. Nuclear reactor fuel planning usually requires a decade or more to establish the long-term uranium purchase and enrichment services contracts. Fuel and core designs focus on detailed 3D spatial configurations of the nuclear fuel and burnable absorbers, optimizing energy production while satisfying licensing limits on reactivity, thermal operating margins, and fuel burnup. The type of reactor and the grid supply requirement determine the fuel loading strategy. Reactors may follow a fuel cycle strategy such as those used in the current operating LWRs (1–2 years between refueling outages), or they may implement continuous online refueling such as that used in the advanced pebble bed reactor. Numerous safety and operational constraints exist on the fuel design and loading that impact plant economics. For example, the accumulation of oxide layers on fuel rod surfaces, commonly called crud, is an operational constraint on fuel management exacerbated by high local powers and long cycle lengths.
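Core loading pattern design is, at heart, a combinatorial optimization: arrange fuel assemblies of different reactivity so that energy production is maximized while local power peaking stays within its limit. The toy random-search sketch below illustrates the idea with an invented one-dimensional "core", a made-up power model, and an assumed peaking limit; it does not represent how VERA or any production core design tool works.

import random

random.seed(2)

# Toy problem: place 8 fuel assemblies (relative reactivities) into 8 core positions.
assemblies = [1.30, 1.25, 1.20, 1.10, 1.00, 0.95, 0.90, 0.85]
PEAKING_LIMIT = 1.23   # assumed limit on local relative power (illustrative)

def evaluate(pattern):
    """Made-up power model: each position's power is its reactivity boosted by its neighbors."""
    powers = []
    for i, k in enumerate(pattern):
        neighbors = [pattern[j] for j in (i - 1, i + 1) if 0 <= j < len(pattern)]
        powers.append(k * (1.0 + 0.15 * sum(neighbors) / len(neighbors)))
    avg = sum(powers) / len(powers)
    return sum(powers), max(p / avg for p in powers)   # (proxy cycle energy, peaking factor)

best = None
for _ in range(2000):          # crude random search; real designers use heuristic optimization
    candidate = assemblies[:]
    random.shuffle(candidate)
    energy, peak = evaluate(candidate)
    if peak <= PEAKING_LIMIT and (best is None or energy > best[0]):
        best = (energy, peak, candidate)

if best is None:
    print("no pattern met the peaking limit within the search budget")
else:
    energy, peak, pattern = best
    print(f"best feasible pattern: {pattern}")
    print(f"proxy cycle energy {energy:.3f}, peaking factor {peak:.3f} (limit {PEAKING_LIMIT})")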
Fig. 10 VERA Seabrook Cycle 5 distribution of maximum corrosion thickness [32]
Figure 10 displays the VERA-predicted crud distribution in a 4-loop Westinghouse PWR. The quarter-core view displays the detailed radial and axial crud thicknesses on a rod-by-rod basis. Crud growth on fuel rod surfaces can result in crud-induced localized corrosion (CILC) and crud-induced power shift (CIPS) from the precipitation of boron, a strong neutron absorber, within the crud layer. CILC can result in fuel failure, and CIPS presents operational challenges for the reactor that may lead to derating and plant shutdown. Although CILC cannot be monitored directly during reactor operation, CIPS can be monitored locally through in-core instrumentation. The ex-core detector system can monitor CIPS globally by sensing gross axial power anomalies. This DT for crud prediction can be used in optimized reactor fuel design to track fuel behavior and predict crud consequences during reactor operations.
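Monitoring CIPS globally from ex-core signals amounts to watching the measured axial power offset drift away from its predicted value. The sketch below computes the axial offset from top and bottom detector signals and flags points where the deviation from a simple predicted trend exceeds a threshold; the threshold, the predicted trend, and the simulated readings are illustrative assumptions, not plant data or a VERA algorithm.

def axial_offset(top_signal, bottom_signal):
    """Axial offset AO = (P_top - P_bottom) / (P_top + P_bottom) from ex-core detector currents."""
    return (top_signal - bottom_signal) / (top_signal + bottom_signal)

def flag_cips(measured_pairs, predicted_ao, threshold=0.03):
    """Return burnup points where measured AO deviates from the model by more than the threshold."""
    flagged = []
    for burnup, (top, bottom) in measured_pairs:
        deviation = axial_offset(top, bottom) - predicted_ao(burnup)
        if abs(deviation) > threshold:
            flagged.append((burnup, round(deviation, 3)))
    return flagged

# Simulated cycle: predicted AO drifts slightly negative with burnup (illustrative model)
predicted = lambda bu: -0.02 - 0.0005 * bu           # bu in GWd/MTU
measurements = [                                     # (burnup, (top, bottom) detector currents)
    (2,  (49.0, 51.0)),
    (6,  (48.4, 51.6)),
    (10, (46.0, 54.0)),   # power shifted toward the bottom: a CIPS-like signature
    (14, (45.0, 55.0)),
]
print("CIPS-like deviations:", flag_cips(measurements, predicted))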
5.2 Online Plant Condition Monitoring, Diagnostics, and Prognosis for Risk-Based Maintenance

Condition and performance monitoring has historically relied upon engineering analysis based on past and current operating fuel cycle data. Understanding and troubleshooting anomalous reactor behavior depends on subject matter experts with experience in nuclear operations and engineering. The analysis of aging and obsolescence has necessitated continuous upgrades in I&C, sensors, simulation, and data storage systems as part of plant modernization programs. High-capacity Wi-Fi and fiber networks hosting many devices enable additional wireless and wired sensors. Devices including vibration and acoustic sensors, digital gauge readers, and thermal cameras can now monitor component performance and failure modes. Nuclear plant process computers have increased data storage and processing capacity to accommodate the wealth of available data.
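One way such vibration data feeds condition monitoring is through simple spectral features. The NumPy sketch below, using simulated signals and an invented baseline, extracts the dominant vibration frequency of a pump and flags a shift away from the expected running-speed tone; it illustrates the idea rather than any method prescribed in this chapter.

import numpy as np

def dominant_frequency(signal, sample_rate_hz):
    """Return the frequency (Hz) of the largest non-DC peak in the signal's spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    return freqs[1:][np.argmax(spectrum[1:])]   # skip the DC bin

# Simulated pump vibration: 25 Hz running-speed tone plus noise; a fault adds a 120 Hz tone.
fs, t = 1000, np.arange(0, 2.0, 1.0 / 1000)
rng = np.random.default_rng(0)
healthy = np.sin(2 * np.pi * 25 * t) + 0.1 * rng.standard_normal(t.size)
faulty = healthy + 1.5 * np.sin(2 * np.pi * 120 * t)

BASELINE_HZ, TOLERANCE_HZ = 25.0, 2.0            # assumed running-speed signature and band
for name, sig in (("healthy", healthy), ("faulty", faulty)):
    f = dominant_frequency(sig, fs)
    status = "OK" if abs(f - BASELINE_HZ) <= TOLERANCE_HZ else "anomaly: dominant tone shifted"
    print(f"{name}: dominant frequency {f:.1f} Hz -> {status}")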
Preventive maintenance is crucial for nuclear plant equipment reliability and is the focus of many recent efforts for the current LWR fleet. Digital work packages, procedures, and condition-monitoring databases are now widely used and are an essential part of plant-based maintenance programs. Configuration risk management software supports error checking concerning component availability, maintenance rules, and out-of-service equipment. The Electric Power Research Institute (EPRI) has been developing the Preventive Maintenance Basis Database, which enables failure modes and effects analysis for various nuclear plant components [14]. Analytical tools integrate probabilistic risk assessment for reliability and availability in planned and unplanned maintenance. Cost analysis based on condition- and value-based maintenance focuses on the costs of preventive and corrective care relative to reliability and the adverse consequences of equipment failure.

Data are critical to developing DTs that adapt to changes in the system state. Data access remains a challenge despite advances in sensing and the kinds of measurements performed. Utilizing data from more than one facility can allow for DT model reuse: models developed for a specific system or component at one facility may be reused with minimal updates by a different facility. This type of model reuse in DTs that include machine learning typically requires the model to be retrained or calibrated with data from the various facilities. Current industry practices have focused on consolidating fleet-level data from a single owner/operator, thus allowing the owner/operator to leverage insights from similar systems. For integrating data across multiple owners/operators, data anonymization is a potential solution. This model, leveraged from other generation spaces, pools data from plants across the grid with similar reactor types and operating characteristics. These data sets allow insights into normal and off-normal conditions, including operational experience for similar equipment under conditions not typically seen in a single facility.

Several nuclear utilities, including Duke Energy, the Tennessee Valley Authority (TVA), Public Service Electric and Gas, Arizona Public Service, Xcel Energy, Exelon, and Électricité de France (EDF) Energy, have established monitoring and diagnostic centers to support their condition-based maintenance programs [55]. Such condition monitoring uses machine learning trained on baseline historical equipment performance and failure data. Condition monitoring includes automated classification of maintenance tasks via Advanced Pattern Recognition (APR) tools that rely on manual data mining of historical incident reports and subject matter experts to identify maintenance rules. APR tools, trained using historical data for baseline equipment performance, use real-time data to signal identified equipment anomalies. These signals are confirmed for legitimacy and used to retrain the APR models over time. Operators may optimize the maintenance schedule for a given component using the current APR model.

GE-Hitachi is developing DTs for their BWRX-300 SMR to reduce capital, maintenance, and operations costs [56, pp. 265–274]. The DTs cover parts, systems, operational, health, and decision-making processes. These nuclear DTs leverage AI technologies from aircraft and power turbine businesses that incorporate uncertainty and risk based on data and continuous learning. These twins are being used to
optimize plant system health, schedule maintenance outages, and assure regulatory compliance.

Plant systems, including the reactor core, must be monitored to optimize the reactor's operational performance while assuring compliance with licensed technical specifications. Reliable real-time information about plant system status is necessary during power maneuvers, particularly if the plant load-follows as part of an IES. Accurate forward-state predictions of reactor core behavior in response to control actions are also critical. In this context, the virtual reactor simulator parameters of interest are adapted to measured data from the plant system to obtain the most accurate representation of the reactor state, including reactivity margin, thermal margin, and fuel burnup. Following standard Internet of Things (IoT) approaches, direct data integration strategies tie the detailed models of the reactor core, systems, and structures to real-time information from advanced sensors, instrumentation, and data analytics. Integrating advanced system modeling with dynamic component reliability modeling enhances plant operational maneuvering and the maintenance schedule. Incorporating these data streams with an accurate digital representation of the present state and potential future states enables optimization of nuclear asset performance through plant operational maneuvering, aggressive maintenance planning, and reduced staffing requirements not otherwise achievable.

Figure 11 displays the process flow for DT-based diagnostics and prognostics integration with decision-making as part of overall nuclear plant operations. Integrated decision-making with a DT uses advanced diagnostics and real-time modeling and simulation to continuously update the plant's risk profile for safety and aging management. DTs that simulate the behavior of all components within the nuclear steam supply system (NSSS) in high-resolution detail provide a new opportunity for expanded use throughout the current and future nuclear operating fleet. For example, the propagation of micro-cracks modeled using advanced modeling supplemented with measured plant data (e.g., vibrational frequencies) can better inform the component repair or replacement risk profile.
Fig. 11 Process flow for DT-based diagnostics and integration of prognostics with decision making
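The APR-style monitoring described in this subsection can be illustrated with a minimal sketch: a statistical baseline is learned from historical healthy-operation data, and incoming samples are flagged when their residuals exceed an alert band. The channel names, synthetic data, and four-sigma limit are illustrative assumptions and do not represent any vendor's APR product.

# Minimal sketch of APR-style condition monitoring: learn a baseline from
# historical healthy-operation data, flag real-time samples whose residuals
# exceed an assumed alert band. All names and limits are illustrative.
import numpy as np

class BaselineMonitor:
    def __init__(self, alert_sigma=4.0):
        self.alert_sigma = alert_sigma
        self.mean = None
        self.std = None

    def fit(self, baseline):
        # baseline: (n_samples, n_channels) array of healthy operating data
        self.mean = baseline.mean(axis=0)
        self.std = baseline.std(axis=0) + 1e-12   # guard against zero variance

    def score(self, sample):
        # Per-channel residuals in units of baseline standard deviations
        return np.abs(sample - self.mean) / self.std

    def is_anomalous(self, sample):
        return bool(np.any(self.score(sample) > self.alert_sigma))

# Synthetic pump data: [bearing temperature (C), vibration (mm/s), flow (kg/s)]
rng = np.random.default_rng(0)
healthy = rng.normal([55.0, 2.0, 420.0], [1.5, 0.2, 5.0], size=(5000, 3))
monitor = BaselineMonitor()
monitor.fit(healthy)
print(monitor.is_anomalous(np.array([56.0, 2.1, 418.0])))   # False: normal
print(monitor.is_anomalous(np.array([63.0, 3.4, 410.0])))   # True: drifted

In practice, flagged anomalies confirmed by engineering review would be used to retrain or recalibrate the baseline, matching the retraining loop described above.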
Advancements in monitoring and diagnostics capabilities, including engineered sensor materials, radiation hardening, wireless technologies, and artificial intelligence, have produced rapid growth in data available to provide a deeper understanding of plant dynamics and performance related to operations and areas such as plant maintenance. DTs integrate the measurement of specific plant parameters, or deviations observed through normal processes such as reactor maintenance, with data analytics and higher-resolution multiphysics simulation to improve predictions and reduce uncertainties relative to more conventional design, safety, and maintenance planning analyses. In this respect, DTs contrast with lumped-parameter and reduced-order models limited by incomplete knowledge. DT tools can improve plant performance, extend plant operations, and reduce production costs via more focused, proactive maintenance planning.

For example, when combined with data analytics and time-dependent reliability analysis within a DT framework, nuclear materials component damage analysis offers a significant opportunity to reduce maintenance costs for NPPs. It can also reduce unplanned outages, increasing reactor availability [22]. The framework in [22] focused on a nuclear hybrid energy system that integrated modeling of the system, subsystems, and components with a dynamic component reliability model. Modeling of the NPP for a postulated, stochastic electricity demand within a load-follow strategy simulated the system, subsystem, and individual component behavior (i.e., valve positions, flow rates, temperatures, pressures) under various operational scenarios, with the resultant plant reliability models being broadly applicable. Weibull distributions, characterized by characteristic lifetime (scale) and shape parameters, were used to model component failure events under different failure modes. Optimum maintenance intervals were then calculated based on the failure rate and the net present value change in maintenance cost.
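The maintenance-interval calculation can be sketched with a simple age-replacement model under a Weibull failure distribution, in the spirit of the framework summarized above. The shape and scale parameters and the preventive and corrective cost figures are illustrative assumptions, not values from [22].

# Minimal sketch: optimum preventive replacement interval for a component with
# Weibull-distributed failures. All parameter values are illustrative.
import numpy as np

BETA, ETA = 2.5, 40_000.0                           # Weibull shape, scale [h]
C_PREVENTIVE, C_CORRECTIVE = 50_000.0, 400_000.0    # assumed costs [$]

def reliability(t):
    return np.exp(-(t / ETA) ** BETA)

def cost_rate(interval, n=2000):
    # Expected cost per operating hour for preventive replacement at `interval`
    # (age-replacement policy): [Cp*R(T) + Cf*(1 - R(T))] / integral_0^T R(t) dt
    t = np.linspace(0.0, interval, n)
    r = reliability(t)
    expected_uptime = np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(t))
    expected_cost = C_PREVENTIVE * r[-1] + C_CORRECTIVE * (1.0 - r[-1])
    return expected_cost / expected_uptime

candidates = np.linspace(2_000.0, 60_000.0, 200)
best = min(candidates, key=cost_rate)
print(f"optimum replacement interval ~ {best:,.0f} h at ~ {cost_rate(best):.2f} $/h")

Within a DT framework, the Weibull parameters themselves would be updated from condition-monitoring data rather than fixed, so the recommended interval shifts as the component's observed health evolves.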
5.3 Operator Training Simulators Based on Virtual Plant Models

The US Code of Federal Regulations (10 CFR 55.46) defines requirements for NPP training simulators. The nuclear plant simulator replicates the reactor control room and includes all operator input and output interfaces for:
• Reactor control
• Monitoring of the reactor state
• Alarm systems

Plant-referenced virtual simulators that duplicate the physical behavior of the reactor core and safety systems during anticipated operational occurrences and accident events are central to the training simulator. The use of highly accurate forward-state prediction virtual simulators for operator training based on the actual fuel and core configuration has become the norm over the past decade. This approach has
increased operator knowledge of reactor behavior and mitigation during anticipated operational occurrences and hypothetical accident scenarios. Nuclear plants have recently introduced glass panel simulators (GPSs) that have a completely digital interface and use the same forward-state virtual simulators as the plant-referenced simulator. A GPS is used to supplement operator training and to evaluate plant changes and their impact on human factors and plant response.

The fidelity of plant-referenced simulators has evolved over the years to represent the best-estimate model of the nuclear plant that reflects its current state. Training simulators include cycle-specific models for the nuclear fuel design and core loading patterns that allow for an accurate core response to operational control maneuvers and off-nominal events. The simulator model comprises the as-built fuel product, enrichment and burnable poison design, and exposed fuel carryover and shuffle configuration for the core. The as-built model is vital for introducing new fuel products such as those considered for fuel designed with new materials for accident tolerance, high enrichment, and extended burnup. As-built models are required for accurate predictive behavior of the reactor core during startup power ascension testing and surveillance during core depletion.

The use of high-fidelity 3D virtual simulators for operator training is a forerunner to using DTs for autonomous control and remote handling. Such AI-based systems would integrate plant condition monitoring and forward-state prediction with the spectrum of potential operational scenarios. For advanced reactor designs based on passive safety, AI-based control and remote fuel handling are attractive options for limiting operator dose and responding rapidly to load demand.
5.4 In-Core Radiation Dose Mapping

NPP management requires mapping 3D radiation fields and continuous dose monitoring for worker safety, whether the reactor core is operating at full power or in a shutdown subcritical condition. Source range detectors and safety alarms must detect unsafe fuel configurations such as misloaded fuel during maintenance. For example, in collaboration with TVA, VERA modeled the coupled core and ex-core detector response during the refueling outage shuffle. This subcritical source-driven problem includes modeling activated secondary source rods important to the source signal. Placement of secondary sources is essential to the ex-core detector's ability to identify potential core misloadings during the fuel reload and shuffle sequence. As shown in Fig. 12, the VERA subcritical thermal neutron flux quantifies the regions of influence for the two secondary sources. An initial comparison demonstrated excellent agreement between predictions and the measured startup range detector data.

Minimizing radiation dose is governed by the principle of maintaining radiation dose as low as reasonably achievable (ALARA) and is essential to NPP management. ALARA is vital during periods of high human activity, such as refueling outages. During such activities, worker dose is calculated based on the radiation sources
Fig. 12 VERA reactor core subcritical flux distribution showing locations of secondary neutron sources [32]
within the NPP. Dose rates can be incorporated into integrated DT technologies when detailed irradiation histories are required to determine various radioactive source terms within the plant.
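A deliberately simplified illustration of dose-rate mapping for job planning is sketched below: dose along an assumed work path is accumulated from hypothetical point sources using an unshielded inverse-square approximation. A plant DT would instead use transport-calculated source terms with shielding and buildup factors, but the sketch shows how a dose map feeds ALARA work planning.

# Minimal sketch of an unshielded point-kernel dose-rate map for job planning.
# Source positions and strengths are hypothetical.
import numpy as np

# Each assumed source: position (x, y, z) [m] and dose rate at 1 m [mSv/h]
sources = [
    {"pos": np.array([0.0, 0.0, 1.0]), "dose_rate_1m": 12.0},
    {"pos": np.array([4.0, 2.0, 1.0]), "dose_rate_1m": 3.5},
]

def dose_rate(point):
    # Total dose rate [mSv/h] at `point`; inverse-square law, no shielding.
    total = 0.0
    for s in sources:
        r2 = float(np.sum((point - s["pos"]) ** 2))
        total += s["dose_rate_1m"] / max(r2, 0.25)   # floor at (0.5 m)^2
    return total

# Accumulated dose for an assumed work path, five minutes at each waypoint.
path = [np.array([6.0, 0.0, 1.5]),
        np.array([3.0, 1.0, 1.5]),
        np.array([1.5, 0.5, 1.5])]
stay_h = 5.0 / 60.0
print(f"planned job dose ~ {sum(dose_rate(p) for p in path) * stay_h:.2f} mSv")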
5.5 Nuclear Materials Tracking and Criticality Control

The physical security of Special Nuclear Materials (SNM) such as uranium and plutonium is essential to satisfying nonproliferation requirements for reporting SNM inventories over the reactor's lifetime. In addition, spent fuel storage pools and interim dry cask storage, including fuel handling procedures for refueling outages, require specific geometric configurations to maintain a subcritical state and manage decay heat loads. Numerous sensors and alarms are in place to ensure that subcritical conditions are maintained. DT technologies that integrate the lifecycle of nuclear fuel and other irradiated materials (e.g., control blades), from initial loading, through the irradiation history, and on to spent fuel storage, can streamline and error-proof the reporting process.
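As a minimal illustration of the tracking idea, the sketch below records the lifecycle state of a single fuel assembly and rejects transitions that violate an assumed set of allowed moves. The states, fields, and identifiers are hypothetical and do not represent any actual safeguards or plant inventory system.

# Minimal sketch of lifecycle tracking for a fuel assembly, of the kind a DT
# could use to error-proof SNM inventory reporting. States and transitions
# are illustrative assumptions.
from dataclasses import dataclass, field

ALLOWED = {
    "fresh": {"in_core"},
    "in_core": {"spent_fuel_pool"},
    "spent_fuel_pool": {"in_core", "dry_cask"},   # reinsert or move to storage
    "dry_cask": set(),
}

@dataclass
class FuelAssembly:
    assembly_id: str
    initial_u235_kg: float
    state: str = "fresh"
    history: list = field(default_factory=list)

    def move(self, new_state, cycle):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.assembly_id}: illegal move "
                             f"{self.state} -> {new_state}")
        self.history.append((cycle, self.state, new_state))
        self.state = new_state

asm = FuelAssembly("F42-A017", initial_u235_kg=18.6)   # hypothetical assembly
asm.move("in_core", cycle="C25")
asm.move("spent_fuel_pool", cycle="C25-outage")
print(asm.state, asm.history)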
5.6 Cyber-Physical Security

The ability of nuclear plant systems to recognize and respond to physical threats and external data breaches is a primary requirement. Physical security safeguards nuclear facilities and materials against malicious acts such as plant sabotage and nuclear materials theft. Physical protection relies on defense-in-depth, including physical barriers and controls, intrusion detection and assessment, and armed response. Transportation of spent nuclear fuel and other irradiated material also requires physical security. Safe transport uses regulator-certified casks, advanced schedule planning, and coordination with response forces in the event of hijacked shipments.

Cybersecurity programs safeguard the digital and analog systems used in the plant's monitoring, operation, control, and safety protection. Cybersecurity protects digital computers, networks, and communications, particularly those associated with emergency preparedness, with multiple layers of security and constant threat monitoring. Cybersecurity includes isolating from external networks the digital assets critical to the plant's safety and security. Plants have strict controls on computers and portable media, and they implement regular scanning for malware. In addition, those working with digital systems undergo extensive training, security screening, and monitoring for suspicious behavior. All aspects of cybersecurity, from protecting physical and digital assets to tracking nuclear materials, may benefit from DT technologies.

Cybersecurity is also an emerging area of concern for DTs themselves. Cybersecurity issues in the nuclear power arena range from ensuring the integrity of measurement data to securing digital I&C systems against unsafe and unauthorized control actions. From the perspective of DTs in nuclear power, these issues manifest as assuring that the data used in developing or updating DTs are correct and ensuring that the computational results from the DT are accurate. Solutions such as data diodes may provide some security, although they are likely to be only a part of the cybersecurity solution when using DTs. Research on assurance, uncertainty quantification, redundancy of computation, and verification and validation (including data validation) addresses some of these emerging needs.
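One small ingredient of the data-integrity concern raised above can be illustrated by authenticating sensor records with a keyed hash (HMAC) before they are ingested into a DT. Key management, transport security, and replay protection are deliberately omitted; this is a toy example, not a complete cybersecurity solution, and the sensor name and key are hypothetical.

# Minimal sketch: detect tampering of sensor records before DT ingestion.
import hmac, hashlib, json

SECRET_KEY = b"demo-key-not-for-production"   # assumed shared key

def sign_record(record):
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": record, "hmac": tag}

def verify_record(message):
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["hmac"])

msg = sign_record({"sensor": "RCS-T-101", "t": "2023-04-01T12:00:00Z", "value": 565.3})
print(verify_record(msg))               # True: record is intact
msg["payload"]["value"] = 400.0
print(verify_record(msg))               # False: tampering is detected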
6 Digital Twins Supporting Decommissioning and Dismantling

Considerable decommissioning and dismantling (D&D) work is underway worldwide and will increase in the coming years as NPPs are retired. According to IAEA projections, between 12 and 25% of the 2021 nuclear electrical generating capacity will retire by 2030. Effective management of decommissioning is vital to the sustainability of nuclear power. So far, a total of 189 nuclear power reactors are in the process of decommissioning globally, with 17 of them fully decommissioned. In
addition, 130 fuel cycle facilities have been decommissioned, along with about 440 research reactors.

D&D includes characterization of the site and its vicinity for radiation hazards, decontamination, dismantling of plant and building structures, and site clean-up for reuse for some other purpose. Decommissioning may take from several years to several decades, especially in the case of deferred dismantling. Digital technologies such as 3D modeling and simulation, visualization, virtual reality, artificial intelligence, and machine learning can facilitate D&D projects by enabling experts to improve planning and implementation. Digital technologies can also support decommissioning tasks that are difficult or dangerous for human workers and ensure safe and effective project execution. DTs will ultimately shorten the decommissioning period.

DTs of nuclear facilities recreate a facility's technology and structures and support effective design, operation, and maintenance. Properly maintained and updated DTs preserve as-built records, detailing how a nuclear facility was constructed and maintained during its lifetime, to support planning and implementation of decommissioning. Digitization enhances safety by enabling analysis of different dismantling alternatives and minimizing worker dose. Digital information models can also significantly increase work quality and productivity due to improved planning, simulations of deployment for different types of equipment, and the possibility of promptly changing project parameters.

Italy's Decommissioning and Radioactive Waste Management Company (SOGIN) has used 3D models and simulations to facilitate the dismantling of different power reactors and has provided IT-supported management of generated waste streams. The Slovak Nuclear and Decommissioning Company (JAVYS), an IAEA Collaborating Centre on nuclear facility decommissioning and radioactive waste management, has also used 3D modeling and simulations to support the dismantling of the Bohunice V1 plant in western Slovakia. JAVYS used these techniques to determine and prove the best strategy for removing primary circuit components that needed to be dismantled and to design tools for retrieval, transport, and cutting activities. Moreover, the simulations were an effective tool for supporting communication with the public.
6.1 The PLEIADES Project

The European Union's Horizon 2020 program launched the PLEIADES project, a platform based on emerging and interoperable applications for enhanced decommissioning processes. Past practice used facility-specific, custom-made technologies optimized for given tasks. PLEIADES brings together 14 partners with a wide range of expertise from industry, government, and academia to demonstrate a digitally enhanced approach to D&D [41]. The partners include Catenda, Commissariat à l'Énergie Atomique (CEA), Cyclife DS, EDF, Empresa Nacional de Residuos Radiactivos, SA (Enresa), Institute for Energy Technology (IFE), Institut de Radioprotection et de Sûreté Nucléaire, Institut für Umwelttechnologien und
Strahlenschutz GmbH (iUS), Karlsruhe Institute of Technology (KIT), LGI Consulting, Light & Shadows, Tractebel, VTT Technical Research Centre of Finland Ltd., and AquilaCosting.

The PLEIADES project proposes to improve the D&D process, enabling more efficient and coordinated actions by integrating state-of-the-art digital concepts into a coherent ecosystem. The PLEIADES platform will concatenate a set of mature digital modules typically implemented in D&D projects. By gathering these technologies onto the same platform to optimize the D&D process, from scenario studies and worker dose assessment to waste and cost estimation, PLEIADES addresses the main technical issues related to a D&D project. The integrated and tested modules are primarily solutions developed in Europe, with solid potential for applicability in non-European markets. A principal aim is to obtain feedback from authentic use cases, yielding quantitative results on the modules and the integrated processes performed in actual conditions. The project will produce a prototype platform in which the digital solutions provided by the PLEIADES partners are interconnected through a building information modeling (BIM) interface. Demonstrations will include analyses based on scenario simulation and comparison of jobs in terms of feasibility, waste produced, radiation exposure, cost, and duration.

The core technical concept of PLEIADES is a modular software ecosystem based on the interconnection of front-line support tools through a decommissioning ontology building upon open BIM, as shown in Fig. 13. The PLEIADES modules cover all aspects of decommissioning, and some are listed below (software names are in parentheses):
• A robotic platform for 3D scans and imaging (3DscanPF, KIT)
• Dismantling info modeling system for storing all facility data (DIM, EDF)
• BIM platform used in construction (Bimsync, Catenda)
• Semantic wiki-based nuclear info system (iUS, IMS)
Fig. 13 PLEIADES modular software ecosystem
Fig. 14 PLEIADES demonstrations on three partners' facilities
• Radiological characterization tool (RadPIM, IFE)
• Detailed job planning tool with a radiological model library (Vrdose, IFE)
• Decision-support tool combined with 3D simulation (DEMplus, CYCLIFE DS)
• Client-server based costing tool (Aquila costing, WAI)
• Virtual reality (VR) dismantling simulation with collision and radiological modeling (iDROP, CEA)
• Low-level waste activity assessment tool (LLWAA-DECOM, Tractebel)
• Augmented reality training platform with advanced tracking capabilities (ALVAR, VTT)

D&D project managers will utilize the PLEIADES platform based on real-life experiences in existing facilities to demonstrate the applicability and quantify efficiency and needs for future developments, as shown in Fig. 14. Ultimately, the project aims to prove a new, innovative methodology for optimizing D&D strategies from a safety, cost, and efficiency perspective. The project partners will jointly provide a tool prototype that encapsulates data and software industry best practices for D&D.
7 Digital Twins Supporting Fusion Reactor Design

7.1 Introduction

Currently, no fusion reactor produces net electricity, and a commercially viable fusion plant is at least 20 years away. However, recent technological achievements have sparked new interest in fusion reactors, attracting billions of dollars in funding for innovative fusion reactor builds. As the long project delays of the International Thermonuclear Experimental Reactor (ITER) show, fusion reactors face challenges similar to those of fission reactors in achieving timely design and construction. However, fusion regulatory licensing should be more straightforward due to the limited radioactive sources. To date, no specialized DT exists that couples all the different physics, system transients, and control processes relevant to these systems. For the design and construction of ITER, software validated in the aerospace and automotive industries has been used, including the CATIA and 3DEXPERIENCE platforms from Dassault, along with specialized software for nuclear calculations [27]. It is desirable to construct a fusion-specific simulation tool that can predict and assess the performance of the many recently proposed fusion reactor concepts while accelerating and improving the design cycle. Sophisticated, integrated computer simulation tools are essential to address the complexity of the physical processes and the highly interconnected nature of systems and components in fusion design.

The structures surrounding the fusion plasma that form a plasma chamber serve a vital role in fusion energy systems. These structures include the first wall, the blanket, and the divertor, and they provide tritium fuel self-sufficiency, radiation shielding of the vacuum vessel, efficient power extraction, and plasma cleaning. The physical environment around the plasma is unique and complex. The extreme combination of physics involves neutron radiation, heat transfer, high electromagnetic fields, unsteady fluid flow, production and transport of tritium, and structural deformation. Modeling and simulating all these factors is fundamental to developing robust first-wall cooling and tritium breeding, for example, Dual Coolant Lead Lithium (DCLL) or molten salt Liquid Immersion breeding (LIB) blankets. The potential to create integrated simulations of the entire blanket system, including the first wall and its interactions with the plasma, is promising. Computational design optimization can improve performance and safe operation and reduce cost. Furthermore, it can streamline efforts with fewer, more targeted experiments to test the materials and the entire system or its components. With digital technology, fusion reactor designers and engineers may also identify optimal operations and maintenance regimes.
7.2 Fusion Reactor Design Workflow

The international fusion community has recognized the need for integrated fusion reactor simulations. Research groups in Germany [17, pp. 378–405], the United Kingdom [11, pp. 26–38], and China [35, pp. 1153–1156] are developing integrated simulation environments for fusion reactors. The proposed DEMO-class fusion reactors are the next-generation demonstration reactors that build on the lessons learned from ITER. Coleman and McIntosh state that with their integrated simulation environment BLUEPRINT, "the typical activities required to generate a DEMO design point can be sped up by four orders of magnitude—from months to minutes—paving the way for a rigorous and broad exploration of the design space." However, the conceptual design phase, which develops the design point into the details of the components surrounding the plasma, can take up to three years and typically must be iterated two to three times to reach a final baseline. For the European Union DEMO reactor, the contemporary fusion design and evaluation process shows that the iteration of multiple designs for the design baseline definition could take up to 9 years, as shown in Fig. 15 [15]. Most of each iteration (about 3 years) is consumed by the complex multiscale analysis of different components, as detailed in the red box in Fig. 16. It is also time-consuming to coordinate the work of large multidisciplinary groups of researchers and designers in different countries with specialties in specific reactor analyses. Analysts require significant effort to establish accurate models, especially when coupling multiphysics models and results from other codes across multiple disciplines and groups. Such work is performed manually, with the geometry and the results being ported over to whichever software the various analysts wish to use. The geometry, boundary conditions, and other data are often
Fig. 15 EU-DEMO design strategy leading up to the engineering design phase in the late 2020s [15]
Fig. 16 Representation of the EU-DEMO reactor design point definition and evaluation procedure [11]
shared in an ad hoc fashion (by email or through cloud services) with no organized, comprehensive shared repository and no configuration or lifecycle management.
7.3 Fusion Energy Reactor Models Integrator (FERMI)

The Fusion Energy Reactor Models Integrator (FERMI) [5] simulation framework is an example of an integrated simulation environment under development for the coupled simulation of plasma, plasma-material interaction, first wall, and blanket. FERMI aims to increase modeling and simulation speed, reducing design cycles from years to months and allowing the investigation of multiple design points to reach the optimal design that meets the design objectives: ultimately, a commercial fusion reactor. An efficient design cycle will bring significant benefits, such as reduced costs, shorter schedules, fewer technical inconsistencies, and facilitated learning and innovation. As shown in Fig. 17, the open-source coupling library preCICE [7, pp. 250–258] couples single-physics solvers in a multiphysics simulation environment. FERMI encompasses single-physics codes for neutron and gamma transport, thermal fluid flow, magnetohydrodynamics (MHD), and structural mechanics. FERMI also includes tools for visualizing results [2] and meshing [3, 42]. FERMI is the first simulation framework simulating the nuclear fusion first wall and blankets in a fully coupled multiphysics manner. It follows the precedent established by the VERA software suite for the simulation of LWRs [32]. Specifically, the codes Monte Carlo N-Particle (MCNP) [19] and ADVANTG [39] are used for neutron and gamma transport to calculate the tritium breeding
Fig. 17 FERMI workflow
ratio, the direct energy deposition in the structural components and liquid blanket, the material neutron damage, and the shielding requirements. OpenFOAM [54] and HIMAG [23] perform CFD and MHD simulations of first-wall heat transfer and blanket cooling. MCNP couples to the thermal-hydraulic calculations for the cooling flow in the blanket. The DIABLO code [16] simulates the behavior of structural components and couples to the thermal-hydraulic calculations. HIMAG replaces OpenFOAM for dual-coolant lead–lithium (DCLL) blankets [50, pp. 44–54]. The FERMI multiphysics environment has demonstrated the simulation of the Affordable Robust Compact (ARC) fusion reactor first wall and blanket [52, pp. 378–405] and of a generic DCLL blanket [50, pp. 44–54].

7.3.1 Integrated Simulation of the ARC Reactor Blanket

The Massachusetts Institute of Technology and Commonwealth Fusion Systems are developing the Affordable, Robust, Compact (ARC) fusion reactor. ARC is a 500 MWt tokamak reactor with the following design innovations:
1. A major radius of 3.3 m and high-temperature rare-earth barium copper oxide superconducting toroidal field coils with an on-axis magnetic field of 9.2 T
2. Fusion reactions based on deuterium-tritium fuel, which generate 14.06 MeV neutrons
3. Heat generation and tritium breeding through gamma and neutron loading within a liquid immersion blanket
4. A single replaceable vacuum vessel
Figure 18 shows a CAD model of the ARC reactor with a tetrahedral mesh. Plasma divertors are positioned at the bottom and top of the vacuum vessel to remove impurities and control heat load within the plasma. Flowing fluorine lithium beryllium eutectic (FLiBe) molten salt comprising the blanket provides tritium breeding and heat removal. FERMI simulated the integrated components to assess the feasibility of the ARC design with respect to material temperatures, tritium breeding, corrosion, structural integrity, and shielding, thus facilitating future design iterations.

In general, there are four essential neutronics metrics for fusion reactors, including the ARC fusion reactor: (1) tritium breeding ratio, (2) energy deposition, (3) high-energy neutron fluence, and (4) shutdown dose rate. The MCNP neutronics calculations determine tritium breeding, energy deposition, and shielding requirements. This energy deposition is mapped onto the grids for thermal-hydraulics and structural mechanics through preCICE. Coupled simulations are performed between OpenFOAM and DIABLO through preCICE to resolve fluid-structure interactions and conjugate heat transfer between the fluid parts (molten salt cooling channel and blanket) and the solid components (e.g., first wall, vacuum vessel). ParaView is used to view the multiphysics simulation results for the neutronics, thermal-hydraulics, and structural mechanics.

Figure 19a shows the flow of FLiBe through the cooling channel and into the blanket. The cooling channel is significantly smaller in dimension than the blanket, so the flow velocities are higher (reaching a maximum of 20 m/s) with high turbulence, thereby causing effective heat removal from the first wall and inner vacuum vessel components. The flow in the blanket is very mildly turbulent except
Fig. 18 CAD model of the ARC reactor with tetrahedral mesh
Fig. 19 (a) Velocity magnitude in different parts and (b) turbulent viscosity normalized by fluid kinematic viscosity from CFD simulations of the ARC reactor blanket and cooling channel
Fig. 20 (a) Heat deposition (W/m3) and (b) temperature (K) of FLiBe from CFD simulations of the ARC reactor blanket and cooling channel
downstream of the port and near the outlet. The normalized turbulent viscosity (Fig. 19b) shows a lack of turbulence in the blanket, which reduces the heat transfer, as evidenced by the high temperatures near the upper divertor. These are regions of potential design improvement to be investigated in the ARC design. Figure 20a shows the heat deposited (in W/m3) in the FLiBe from the neutronics simulation, interpolated onto the CFD mesh. The temperature field of the FLiBe shown in
Fig. 20b indicates significant heating in the channel and near the upper divertor leg in the blanket.

7.3.2 Dual-Coolant Lead–Lithium Blanket Integrated Simulation

The dual-coolant lead-lithium (DCLL) blanket is currently the strongest candidate for the proposed US Compact Pilot Plant (CPP) [8] and is a critical test case for the FERMI software. FERMI integrates all features intrinsic to the DCLL blanket, including supply ducts, inlet and outlet manifolds, multiple poloidal channels, and the U-turn zone at the top of the blanket (Fig. 21). The flexibility and ability of FERMI to perform rapid design iteration for the fusion blanket within a standard, integrated code suite speak to the power of DT technology for fusion reactor design. FERMI simulations were performed using realistic design parameters for the DCLL blanket (B-field = 5 T, poloidal flow velocity = 10 cm/s, Ha ~ 10^4, Re ~ 10^4, and Gr ~ 10^12). The calculated velocity field and the electric potential (Fig. 22) are close to the independent analysis performed by UCLA-UCAS [9]: there is a difference in the central channel of 11% (maximum velocity of 1.8 m/s in HIMAG vs 1.6 m/s in UCLA-UCAS), while the MHD pressure drop of the entire blanket is 2.55 MPa vs. 2.33 MPa (a 7% difference). These discrepancies are small for this class of computations, taking into account that the UCLA-UCAS simulation used a much more refined grid (320 million grid cells vs the 10 million in FERMI).
Fig. 21 DCLL blanket with volumetric heating distribution
Fig. 22 Comparison for PbLi MHD flows in poloidal channels of DCLL blanket between the MHD-UCAS code (left) and FERMI (right)
The FERMI simulations show how DTs contribute to fusion reactor engineering, informing design choices by leveraging recent rapid advancements in computational hardware and software engineering. The FERMI simulation suite will expedite the design cycle of fusion reactors by addressing the fusion community's present and future needs for the ongoing ARC and CPP activities. It can lead directly into product development through collaboration with members of the manufacturing sector. FERMI will deliver an innovative, integrated fusion reactor simulation environment with high-resolution representation on current and future computational platforms, creating distinct technological innovation and leading toward an industrial revolution in nuclear fusion power.
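The partitioned coupling pattern that preCICE orchestrates in FERMI can be illustrated with a deliberately tiny, library-free sketch in which two mock single-physics "solvers" exchange interface data (heat flux and wall temperature) and iterate to a consistent state. The zero-dimensional physics, parameter values, and function names are illustrative assumptions and are not the FERMI or preCICE API.

# Library-free toy of partitioned conjugate-heat-transfer coupling: a "solid"
# solver and a "fluid" solver exchange interface data until they agree.
# All numbers are illustrative assumptions, not FERMI models.

T_COOLANT = 900.0    # bulk coolant temperature [K], assumed
T_HOT     = 1400.0   # structure-side driving temperature [K], assumed
H_CONV    = 2.0e4    # convective heat transfer coefficient [W/m^2/K], assumed
R_COND    = 4.0e-4   # conductive resistance of the wall [m^2*K/W], assumed

def solid_solver(t_wall):
    # Interface heat flux [W/m^2] for a given wall temperature
    return (T_HOT - t_wall) / R_COND

def fluid_solver(q_wall):
    # Wall temperature [K] for a given heat flux removed by the coolant
    return T_COOLANT + q_wall / H_CONV

t_wall, relax = 1000.0, 0.5            # initial guess, under-relaxation factor
for it in range(100):
    q = solid_solver(t_wall)           # solid -> fluid: heat flux
    t_new = fluid_solver(q)            # fluid -> solid: wall temperature
    if abs(t_new - t_wall) < 1e-6:
        break
    t_wall += relax * (t_new - t_wall)
print(f"converged after {it} iterations: "
      f"T_wall ~ {t_wall:.1f} K, q ~ {q / 1e6:.2f} MW/m^2")

In FERMI, the analogous hand-off takes place between full-resolution MCNP, OpenFOAM or HIMAG, and DIABLO solutions, with preCICE handling the data mapping between solver meshes.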
8 Digital Twin Enabling Technologies

The offline benefits of DTs for nuclear energy technology are well established for all phases of the nuclear energy facility lifecycle. However, the extension of these capabilities to real-time operations support is less well developed. To represent the system's state correctly, DTs require information about the system. Advanced sensor technologies provide the mechanism for inferring the necessary information in operating advanced nuclear power plants. The measurement needs depend on the DT system representation. They may include process variables (e.g., process fluid temperature, pressure, neutron flux, flow, level), system condition-relevant variables (e.g., vibration, bearing/winding temperatures, current/voltage, wall thickness, material condition), and personnel safety/ALARA-relevant information from area radiation monitors (ARMs) or other related sensors. NPP measurements in the reactor, primary loop, or balance-of-plant may be categorized accordingly. However, there will always be some forms of data that must be collected or entered manually.
Still, technology advances allow routine operating information (such as pressure, temperature, flow, and rotational speed) or configuration information (item numbers, product numbers, or serial numbers connected to radiofrequency identification tags) to be logged automatically without human involvement. Although relevant data acquisition and communication systems have been available for years, recent interest in advanced reactor concepts (SMRs or microreactors and non-light-water-cooled reactors) is driving advances in sensors and instrumentation. Such advances address measurement needs in harsh environments with high temperatures, corrosive coolants, inaccessible or limited-access spaces for deploying conventional sensors, and high-radiation-dose/fast-neutron-flux regions. Sensors that can survive in these conditions are being studied, along with advances in wireless technologies (ultra-wideband [UWB] and 5G) for robust data communication. These advances in sensing result from a combination of new sensing modalities (such as surface acoustic wave [SAW] sensors) [38, pp. 20907–20915; 51, pp. 1391–1393], rapid fabrication (including additive manufacturing) [38, pp. 20907–20915], radiation-hardened electronics [45], and low-power electronics and power management systems [24, pp. 476–489]. These efforts have driven down the cost of new sensors. In combination with low-cost edge computing platforms that allow for fast data acquisition, instrumentation that may once have required custom-designed components can now be acquired off-the-shelf and configured to meet most data needs. These same advances are also behind industrial IoT devices, enabling low-cost continuous monitoring and surveillance of assets and other parameters.

Sensor and instrumentation technologies are now available for monitoring parameters relevant to constructing and using operational reactor DTs. Parameters include quantities such as:
1. Machine condition for rotating machinery [53]
2. Position for actuators such as valves [1, pp. 311–322]
3. Current and voltage [36, pp. 299–304]
4. Structural health [46, pp. 357–382]
5. Noise such as that due to cavitation, which can be an indication of impeller damage in pumps [18, pp. 2012–2020]
6. Water level, chemistry, and temperature in reactor-relevant conditions [30]
7. Pressure [38, pp. 20907–20915]

Where available, these sensors can lower the cost barriers to the deployment of large-scale monitoring systems for a variety of measurements and can enable many of the applications discussed elsewhere in this document. However, it is also worth noting that many of these sensors and instruments are still in the research and development phase, with work ongoing to verify their capabilities and qualify them for use in nuclear environments. The deployment of these technologies in steam cycle or emergency response components may be more straightforward. Still, these technologies also require radiation-hardened electronics when deployed for in-core, in-vessel, and in-containment monitoring and measurement. A related aspect of advanced sensors is power harvesting to address the power need for specific sensors and data acquisition
hardware for making the measurements. Current research focuses on harvesting the heat, vibration, and radiation within the measurement environment using thermoelectric, pyroelectric, and piezoelectric materials [26, pp. 172–184; 58].

The widespread availability of industrial digital wireless communication systems and industry consensus standards has led to the general availability of sensors and instrumentation for monitoring purposes. However, available technologies are generally not interoperable, resulting in significant investment needs to address the issue. Recent advances include distributed antenna systems [34] compatible with multiple frequencies, IoT devices operating in unassigned portions of the frequency spectrum, and 5G systems with a broad frequency range that reduce interference and increase data transfer speeds. UWB communications have also proven resistant to interference and have been demonstrated in challenging communication environments [37, pp. 31–42; 40, pp. 191–200]. Software-defined radio (SDR) and software-defined network (SDN) technologies [31, pp. 14–76] are now enabling cognitive communications that allow the communication system to adapt to local electromagnetic interference (EMI) conditions on the fly. They also enable ad hoc mesh networks to be set up and modified to increase communication reliability and enable low-error communications.

DTs in nuclear energy may use the same sensors that control normal operations and actuate a safe shutdown. However, the specific suite of sensors used to develop and deploy a DT will depend on the objectives of the DT. For example, a DT of the reactor core may use data from ex-core measurements of the neutron and gamma flux at multiple axial and circumferential locations to calibrate and represent the state of the system (the reactor core). Available flux data may be sufficient for simulators or other simple DT applications. However, such measurements only provide an overview of the core conditions. Higher fidelity twins that are useful in optimizing plant-level operations require additional data (radiation, temperature, and flow) from in-core locations. Such detailed twins need data on the core inlet and outlet temperatures and flow rates, vessel pressures, axial temperature and flux variations at multiple locations within the core, and flux and temperature at the surface. The sensor requirements for these two sets of twins are significantly different. The in-core sensors may require measurement systems (sensors and instrumentation) beyond those used in current operations. In-core measurements are also likely to require radiation-hardened, high-temperature sensors that are effective and highly reliable over the long term.

As part of the defense-in-depth approach to nuclear safety, multiple sensors provide a diverse, duplicative set of measurements that assures reliability and robustness. Integrating this diverse set of measurements is essential for the DT to track the system's state. Direct data integration is challenging when considering a large and varied group of inputs. In recent years, data analytics technology in the form of practical machine learning has begun to be used to address this issue. Algorithmic advances that learn from large volumes of data using machine learning methods contribute significantly to this technology. There are several categories of data analysis methods, including deterministic and stochastic techniques, linear and nonlinear algorithms, and statistical and
machine learning methods. The lines distinguishing these categories are not always clear, and in most cases the distinctions are chiefly of academic interest. Research into direct data integration has, in recent years, emphasized machine learning to derive parameters from data sets. The resulting algorithms include neural networks, decision trees and tree-search methods, and related regression techniques (Gaussian processes). Learning algorithms include the popular deep learning algorithms that allow neural network structures to extend to multiple layers and capture the complex relationships inherent in the data. Data analytics techniques developed recently have proven effective in the following areas:
1. Model calibration [49, p. 2107]
2. Diagnostics to indicate the present state of the system [37, pp. 31–42]
3. Prognostics to predict the future state of the reactor [40, pp. 191–200]
4. Quantification of uncertainty associated with parameter estimates [31, pp. 14–76]

Relevant techniques can be deterministic or stochastic, and they can leverage the physics of the underlying process via simulation models or lower-fidelity surrogate models. Methods may also be purely data-driven and may attempt to infer underlying relationships from data using statistical inferencing [49, p. 2107]. Alternatively, machine learning methods can predict a quantity of interest from data even when the underlying correlation is unknown [33, pp. 1313–1337]. Recent advances in control systems technology have led to more robust control methods and the use of machine learning to determine optimal control policies through methods such as reinforcement learning [43]. Another recent advance has been integrating risk assessment with system diagnostics and prognostics to drive risk-informed decision-making regarding control systems. This integration will lead towards fully or partially autonomous supervisory control systems [10, pp. 913–918; 13].

Ultimately, most of the methods for measurement, data integration, and analysis focus on combining data from an NPP (at the component, asset, or system level) with knowledge of the system (physics) to create a DT that can predict the state of the system from relevant inputs. If the goal is also to reflect the real-time state of the NPP (for operational decision making or control, for instance), then the model parameters are updated with data in near-real-time. From a technology perspective, real-time DT updates require accurate surrogate models based on a combination of machine learning, physics information, continual learning, federated learning, and transfer learning techniques. These learning methods must result in fast, robust DT models that efficiently scale from the component to the system level. They must be capable of running on machines that range from edge devices to high-performance computing (HPC) platforms. Modeling and simulation tools are often combined with sensor data to provide a near-real-time understanding of the DT [33, pp. 1313–1337] or with conventional or machine-learning optimization methods for design optimization [43]. In addition, there is a need for uncertainty quantification and assurance that the DTs so derived and updated are appropriate for the use cases.
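As a minimal illustration of the surrogate-model idea, the sketch below trains a Gaussian process on a handful of assumed high-fidelity simulation results and then queries it at a measured plant state, returning both a prediction and an uncertainty estimate. The input and output quantities, training data, and kernel settings are hypothetical; scikit-learn is used only as a convenient example implementation.

# Minimal sketch of a data-driven surrogate with uncertainty: a Gaussian
# process trained offline on simulation snapshots and queried in near-real
# time. All quantities and data are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Assumed training set: (relative power, inlet temperature [K]) -> thermal margin [K]
X_train = np.array([[0.50, 560.0], [0.75, 562.0], [0.90, 565.0],
                    [1.00, 565.0], [1.00, 568.0], [0.60, 570.0]])
y_train = np.array([38.0, 29.0, 21.0, 17.0, 14.0, 30.0])

kernel = RBF(length_scale=[0.2, 5.0]) + WhiteKernel(noise_level=0.5)
surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
surrogate.fit(X_train, y_train)

# Near-real-time query at the current measured plant state
state = np.array([[0.95, 566.0]])
margin, sigma = surrogate.predict(state, return_std=True)
print(f"predicted thermal margin ~ {margin[0]:.1f} K +/- {2 * sigma[0]:.1f} K (2 sigma)")

In a deployed DT, the surrogate would be periodically retrained or updated (for example, through continual or transfer learning) as new high-fidelity results and plant measurements become available, and its uncertainty estimate would indicate when the full model must be rerun.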
9 Digital Twins for Nuclear Reactors: A Vision for the Future

A DT for a nuclear plant is an interlinked digital information network that realistically and sufficiently captures, integrates, and presents the modeled and measured plant data and their relationships. A DT describes various structures, components, and systems and can use physics or data to make intelligent future projections during the multiple phases of a nuclear lifecycle. Nuclear reactors are highly complex systems that remain in existence, in some shape or form, for a long time, generally over 100 years. Their design and working knowledge must be preserved through their various lifecycle phases and passed on to multiple generations of management, workers, and regulators to support a variety of tasks and goals. Therefore, DTs for nuclear reactors must be data- and knowledge-centric, with long-term interoperability in mind, as opposed to the existing document-centric paradigm. Such a DT would become a centralized resource that can be maintained by all stakeholders, including designers, engineers, contractors, manufacturers, and building engineers, to form a reliable basis for decision-making during the reactor's entire lifecycle.
9.1 Adoption Pathways for the Existing Fleet of Nuclear Reactors

A low-risk pathway for developing and adopting DTs for the currently operating fleet of nuclear reactors would be to integrate the existing information streams in a familiar digital environment, such as the 3D CAD models and design drawings for various plant systems and structures. The design information interlaces the material, procurement, and maintenance information with the existing physics-based 1D, 2D, or 3D plant representation to create the best-estimate or safety-basis models. Integrating real-time measured datasets across the plant with best-estimate models ensures high model accuracy. Such an operating DT would evolve with time as it becomes enriched with information and interoperable with various existing plant systems. In this manner, the DT would gradually become the most complete and reliable source of information for an NPP.

DTs can exchange information between the plant operators and supporting engineers, thus improving the quality of the plant's engineering assessments and reducing the preparation time for any plant upgrades or retrofitting efforts. Moreover, they can be adopted to train a nuclear plant's workforce for routine or infrequent operating tasks (e.g., pre-job briefings or refueling operations) or complex repairs. With embedded procedures and simulated walk-downs, interactive digital platforms can help optimize work planning and reduce operating costs (Fig. 23). Additionally, DTs can reinforce operational safety by visually representing the real-time radiation environments, potential fire scenarios, safety hazards, or emergencies, reducing the
Fig. 23 A virtual reality digital twin model of the ORNL’s liquid-salt test loop facility, displaying embedded technical information with simulated walk-downs to optimize work planning
risk of injury or the threat of delivering a heavy radiation dose to personnel. Through a virtual reality interlayer, DTs can have a video-game-like look and feel and can be used to train plant workers and security personnel in emergency preparedness. In summary, DTs can manage the plant's regulatory requirements, physical configuration, safety, and performance characteristics.

A realistic-looking DT of a nuclear plant can be used to plan and train workers for routine maintenance and repairs. DTs can help execute tasks rapidly without delays and could reduce the duration that personnel need to be present in high-radiation areas. Space constraints or equipment clashes can be anticipated and addressed before work commences. By interacting with hardware-embedded sensors, a DT can continuously monitor the current condition of various plant equipment to enable automated listing, charting, and assessment of multiple performance parameters in a digital environment. Any additional data collected during routine physical inspections (e.g., photographs and notes) can also be integrated within a DT environment, leading to improved maintenance decisions and an overall reduction in equipment failures.

A fully functioning DT environment can integrate existing plant simulations (e.g., criticality models, safety margin projections, or fuel management) and various engineering evaluations (e.g., flow, pressure drop, or structural response) for the reactor core and other components. Such a physics-informed DT can interact with the plant's operational data to support a more efficient condition-based maintenance approach. An operating nuclear reactor DT can maintain detailed historical records for various facility parameters, such as radiation-induced degradation of various vessel components, modifications and upgrades to reactor components, and balance-of-plant items such as pumps, valves, or sensors. This type of retrospective information can support accurate forecasts of the remaining lifetimes of various parts and can be used to assess maintenance needs and frequency based on historical data.

A DT for an operating nuclear reactor would already contain most of the information needed to support its successful decommissioning. This information would include data on reactor components, their locations in the plant, materials, operational history, accumulated radiation waste and contamination across the plant and its categorization (low- to high-level), and other relevant engineering information. All this consolidated knowledge would be available through a single DT repository and would be
useful in planning and safely executing various demolition and decommissioning activities. An existing DT for an operating reactor could be further “evolved” as a multi-D simulation tool to guide disassembling various plant equipment and structures while simultaneously calculating radiation dose rate to workers and the amount of generated waste. All this information could be very useful in optimizing disassembly and demolition workflow while adhering to ALARA principles for radiation safety.
9.2 Vision for the Next Generation of Advanced Reactors

A regulation-aware collaborative DT for a new advanced reactor can accelerate the readiness of its design, expedite costly regulatory approvals, and significantly reduce construction, operation, and maintenance costs, but only if it is available for application in these lifecycle phases. Advanced reactors and ancillary balance-of-plant components use modern CAD and physics-based software toolkits to create their design. Therefore, most design, engineering, materials, performance, and other relevant information already exists in various digital forms. This plethora of digital information, which resides with early stakeholders such as designers and vendors and may not necessarily be interoperable, would have benefits across the lifecycle phases of an NPP if carried forward within a DT paradigm. Comprehensive and detailed information for an advanced reactor design could be shared and transferred through contractual mechanisms.

Developing a DT framework during the design process of an advanced reactor would be very promising and would provide the most value. Integrating the outcomes of many design and engineering activities (e.g., software tools, data, and models) into a unified framework will help consolidate the knowledge base. Such a commonly accessible resource could help reduce human errors, shorten the overall design completion period, and make it easier to upgrade the design if the need arises. A fully functioning DT would make it easier to produce documentation for any supporting calculations or to justify a specific decision using various automated tools. A well-designed DT would pave the path to success for the subsequent phases in this lifecycle.

Managing construction and operation costs will be an even more difficult challenge for first-of-a-kind (FOAK) advanced reactors, for which there is no experience in construction or operation. An all-digital, n-dimensional (3D spatial, with time, cost, and regulatory compliance) BIM-governed DT can be developed and adopted to support the cost-effective construction of advanced reactors (Fig. 24). An advanced reactor's physical and functional characteristics can be stored digitally in the BIM DT, overlaid with additional information layers that include cost, time, bid, contract documents, bills of materials, specifications, installation, and operation documents.
Fig. 24 A BIM process allows effective management of construction plans, sequencing, and coordination to minimize project execution risks and delays and to retain knowledge [12]
Once adopted, BIM-based DT technology for advanced reactors can help reduce the overall construction cost and on-site construction time through:
• Improved design coordination, site layout, and site utilization planning to facilitate material staging areas, temporary facilities, assembly areas, and efficient data management throughout the construction lifecycle
• Enhanced coordination between domain-specific consultants, fewer drawing errors and omissions, and improved communication of design intent and scope clarification
• Effective construction sequencing, scheduling, and options analysis to reduce dependencies during resource planning
• Early detection and resolution of infrastructure clashes, collisions, and interference, reducing requests for information, change orders, and rework
• Real-time regulatory compliance and verification of compliance against specifications for critical materials, along with tracked material numbers and inspection records
• Effective bidding, component tracking, and supply chain management for custom, off-the-shelf, and factory-fabricated procurements, with a construction simulation synced with real-time site progress and accurate cost engineering instead of cost estimates

A DT can be continuously maintained and evolved during the construction phase of an advanced reactor and used as an as-built model of the facility. The DT can then be handed over to and used by the operators during the next stage of NPP commissioning. Post-construction, the nuclear-oriented BIM technology to be developed will facilitate cost-effective O&M via a continuously updated, retrievable information management system throughout the reactor lifecycle, helping reduce total staffing levels during plant operation. For example, staff could be trained more efficiently on multiple reactor systems, requiring fewer specialized workers and resulting in a functional operational workflow with fewer support personnel. The BIM model will also help regulators ensure compliance by giving them access to digital operational archive logs for select subsystems as utility operators achieve cost-effective, sustainable operation.

A DT-driven collaborative data management process in which all parties contribute to and draw from a central model can serve an advanced reactor throughout its lifecycle, from inception through decommissioning and demolition. Given the
multi-decade life spans of advanced reactors, there will invariably be challenges in maintaining and continuously upgrading the DT software and models. However, once fully established, standardized, and adopted, DT technology can disrupt and revolutionize the way we design, build, operate, and regulate nuclear reactors worldwide.

Key Points
• From cradle to grave, an NPP will have a life span of more than 80 years. Over such a long life, an NPP will be led by at least three generations of the workforce and may undergo significant improvements in its underlying technologies.
• Therefore, a DT for a nuclear plant must be designed with a long-range vision and be adaptable to keep pace with the rapid progress in digital technologies.
• Because DTs have yet to gain acceptance and adoption in the nuclear power industry, there is an opportunity to set the stage for early adoption in every phase of the nuclear lifecycle, from conception through construction, operation, and eventual decommissioning.
10 Conclusion and Summary

The world's operating fleet of NPPs has continuously evolved since the first commercial NPP devoted to electric generation began operating at Shippingport, Pennsylvania, in 1957. Digital representations of NPPs and their components have played an essential role in the development, licensing, deployment, operation, and decommissioning of these critical assets almost from the beginning. Over the last decade, advancements in vital enabling technologies—sensors, computing, digital data storage, physics-based simulation methods, data analytics methods, and so on—have created opportunities for more expansive DTs. Robust DTs will drive further evolution of the plants operating today and of the advanced reactors of the future.

The lifecycle of a nuclear facility involves many decision points that must be addressed collectively by multiple stakeholders. An owner/operator must work with financiers to develop a business model that supports the construction of a new plant. The owner/operator must then work with a vendor, suppliers, and the engineering, procurement, and construction contractor to establish a design, develop a construction plan, and obtain many layers of regulatory approvals. The owner/operator must then work with regional power providers to develop an operating strategy that meets the power needs of end users, assures safety, and optimizes returns on investment. Finally, the owner/operator must work with regulatory authorities and contractors to establish a plan for D&D of the site while planning for the security and safety of the used nuclear fuel inventory.
At its core, a nuclear energy DT integrates a data lifecycle management strategy with model-based systems engineering approaches and adequate data streams, real-time or hierarchical, to sufficiently describe the NPP's conditions and components. A nuclear energy DT necessarily integrates many different digital representations of the individual aspects of its design and operation, including 3D CAD data, n-D physics-based models and simulations, material specifications and databases, operating history, sensor data, and maintenance and reliability data. The DT also evolves, synchronizing with the evolution of the plant itself and of the digital tools that describe its features and behaviors. The nuclear energy DT enables safety assurance, regulatory compliance, maintenance planning, and operations optimization for maximum return on investment in the asset.

Across the lifecycle of the nuclear facility, DTs have reached varying degrees of sophistication, robustness, portability, and validation. Current efforts focus on elevating these technologies across the lifecycle and integrating DT technologies developed in other application areas, such as BIM and IoT. In the decade ahead, DT enabling technology will need continued development to realize the potential of DTs for the optimization of nuclear energy applications. Critical DT enabling technologies include advanced sensors and data analytics methods to extract more useful information from the substantial data streams that sensors can provide. Integrating the plant operational and performance database with high-fidelity, model-based representations of the plant will allow for real-time projections of future performance.

The biggest beneficiaries of DT capability development in the coming decades will be advanced reactors. Advanced reactor deployments will leverage DT capabilities across the full scope of their nuclear facility lifecycles. Fundamental DTs are already a key feature of many advanced reactor designs under development in the industry, and DTs will continue to evolve as new capabilities are established. Ultimately, DT technologies provide a means to de-risk these deployments as a vital part of the worldwide expansion of energy generation that raises the quality of life of people across the globe in the face of the challenges brought by global climate change.
References

1. Agarwal, V., Buttles, J. W., Beaty, L. H., Naser, J., & Hallbert, B. P. (2017). Wireless online position monitoring of manual valve types for plant configuration management in nuclear power plants. IEEE Sensors Journal, 17(2), 311–322. https://doi.org/10.1109/JSEN.2016.2615131 2. Ahrens, J., Geveci, B., & Law, C. (2005). ParaView: An end-user tool for large data visualization. In Visualization Handbook. Elsevier. ISBN-13: 978-0123875822. 3. Ansys. (2015). ICEM CFD theory guide. Ansys. 4. ASME NQA-1. (2019, December). Quality assurance requirements for nuclear facility applications.
5. Badalassi, V., Sircar, A., Solberg, J. M., Bae,J. W., Borowiec, K., Huang, P., Smolentsev, S., & Peterson, E. “FERMI: Fusion Energy Reactor Models Integrator.” Fusion Science and Technology. Advance online publication. https://doi.org/10.1080/15361055.2022.2151818. 6. Bragg-Sitton, S. M., Rabiti, C., Boardman, R. D., O’Brien, J., Morton, T. J., Yoon, S. J., Yoo, J. S., Frick, K., Sabharwall, P., Harrison, T. J., Greenwood, M. S., & Vilim, R. B. (2020). Integrated energy systems: 2020 roadmap. Idaho National Laboratory. 7. Bungartz, H. J., Lindener, F., Gatzhammer, B., Mehl, M., Scheufle, K., Shukaev, A., & Uekermann, B. (2016). preCICE – A fully parallel library for multi-physics surface coupling. Computers and Fluids, 141, 250–258. 8. Buttery, R. J., et al. (2021). The advanced tokamak path to a compact net electric fusion pilot plant. Nuclear Fusion, 61, 046028. 9. Chen, L., Smolentsev, S., & Ni, M. J. (2020). Toward full simulations for a liquid metal blanket: MHD flow computations for a PbLi blanket prototype at ha∼ 10^4. Nuclear Fusion, 60(7), 076003. 10. Coble, J., Coles, G., Meyer, R., & Ramuhalli, P (2013). Incorporating equipment condition assessment in risk monitors for advanced small modular reactors. Chemical engineering transactions, 33, 913–918. The Italian Association of Chemical Engineering, Milano, Italy https:// doi.org/10.3303/CET1333153. 11. Coleman, M., & McIntosh, D. (2019). BLUEPRINT: A novel approach to fusion reactor design. Fusion Engineering and Design, 139, 26–38. 12. DELMIA’s Virtual Construction 4D Simulation solution for the Energy, Process and Utility Industry. Dassault Systems, July 2013. https://www.youtube.com/watch?v=XrtMJ5z1O0w. Accessed 04 Jan 2021. 13. Denning, R., Muhlheim, M., Cetiner, S., & Guler Yigitoglu, A. (2017). Operational performance risk assessment in support of a supervisory control system. Presented at the Conference: American Nuclear Society NPIC/ANS Meeting – San Francisco, Washington, United States of America – 6/11/2017 12:00:00 AM-6/15/2017 12:00:00 AM, United States. 14. EPRI Preventive Maintenance Basis Database (PMBD) v6.0. Electric Power Research Institute, June 2015. https://www.epri.com/research/products/000000003002005428. 15. Federici, G., et al. (2018, November). DEMO design activity in Europe: Progress and updates. Fusion Engineering and Design, 135, 729–741. 16. Ferencz, R. M., Parsons, D., Felker, F., Havstad, M., Castillo, V., & Pierce, E. (2005). DIABLO: Scalable, implicit multi-mechanics, for engineering simulation. https://doi.org/10.13140/ RG.2.2.10040.98563 17. Franza, F., Boccaccini, L. V., Fisher, U., Gade, P. V., & Heller, R. (2015). On the implementation of new technology modules for fusion reactor systems codes. Fusion Engineering and Design, 98–99, 1767–1770. 18. Frohly, J., Labouret, S., Bruneel, C., Looten-Baquet, I., & Torguet, R. (2000). Ultrasonic cavitation monitoring by acoustic noise power measurement. The Journal of the Acoustical Society of America, 108(5), 2012–2020. https://doi.org/10.1121/1.1312360 19. Goorley, J. T., James, M. R., & Booth, T. E. (2013). MCNP6 user’s manual, version 1.0. LA-CP-13-00634. Los Alamos National Laboratory. 20. Greenwood, M. S., Betzler, B. R., Qualls, A. L., Yoo, J., & Rabiti, C. (2019). Demonstration of the advanced dynamic system modeling tool TRANSFORM in a molten salt reactor application via a model of the molten salt demonstration reactor. Nuclear Technology, 478–504. 21. Greenwood, M. 
S., Yigitoglu, A., Rader, J., Tharp, W., Poore, M., Belles, R., Zhang, B., Cumberland, R., & Mulheim, M. (2020, May). Integrated energy system investigation for the Eastman chemical company, Kingsport, Tennessee, facility. ORNL/TM-2020/1522. 22. Guler Yigitoglu, A., Greenwood, M. S., & Harrison, T. (2018, September). Time-dependent reliability analysis of nuclear hybrid energy systems. In Probabilistic Safety Assessment and Management (PSAM 14). 23. HIMAG, HyPerComp Incompressible MHD Solver for Arbitrary Geometry, Hypercomp, Inc. Available at: https://www.hypercomp.net/HIMAG.html. Accessed: 04 Jan 2021.
24. Huang, C., & Chakrabartty, S. (2012). An asynchronous analog self-powered CMOS sensor- data- logger with a 13.56 MHz RF programming interface. IEEE Journal of Solid-State Circuits, 47(2), 476–489. https://doi.org/10.1109/JSSC.2011.2172159 25. Huning, A., Fair, R., Coates, A., Paquit, V., Scime, L., Russell, M., Kane, K., Bell, S., Lin, B., & Betzler, B. (2021, September). Digital platform informed certification of components derived from advanced manufacturing technologies. ORNL/TM-2021/2210. 26. Hunter, S. R., Lavrik, N. V., Datskos, P. G., & Clayton, D. (2014, November 1). Pyroelectric energy scavenging techniques for self-powered nuclear reactor wireless sensor networks. Nuclear Technology, 188(2), 172–184. https://doi.org/10.13182/NT13-136 27. ITER Case Study. (2021, April). Dassault Systemes. Available at https://www.3ds.com/sites/ default/files/2021-04/iter-case-study-en-april-web-2021.pdf. Accessed: 04 Jan 2021. 28. Kawahara, A. (2003, September 15–19). Advanced design and construction technology for ABWR. GENES4/ANP2003, Paper 2017, Kyoto, Japan. 29. Kawahata, J., Murayama, K., & Akagi, K. (2010). Advanced construction technologies and further evolution towards new build NPP projects. Proc. Intl. Conf. on Opportunities and Challenges for Water Cooled Reactors in the 21st Century, IAEA-CN-164-3S04, International Atomic Energy Agency. 30. Korsah, K. et al. Assessment of sensor technologies for advanced reactors, Oak Ridge National Laboratory (ORNL). ORNL/TM-2016/337; Other: RC0423000; NERC036 United States 10.2172/1345781 Other: RC0423000; NERC036 ORNL English, 2016. Available at: https:// www.osti.gov/servlets/purl/1345781. Access: 04 Jan 2021. 31. Kreutz, D., Ramos, F. M. V., Veríssimo, P. E., Rothenberg, C. E., Azodolmolky, S., & Uhlig, S. (2015). Software-defined networking: A comprehensive survey. Proceedings of the IEEE, 103(1), 14–76. https://doi.org/10.1109/JPROC.2014.2371999 32. Kropaczek, D. J. (2020, September). CASL phase II summary report, ORNL/SPR-2020/1759. 33. Lim, K. Y. H., Zheng, P., & Chen, C.-H. (2020, August 1). A state-of-the-art survey of digital twin: Techniques, engineering product lifecycle management, and business innovation perspectives. Journal of Intelligent Manufacturing, 31(6), 1313–1337. https://doi.org/10.1007/ s10845-019-01512-w 34. Manjunatha, K. A., & Agarwal, V. (2019, May). Review of wireless communication technologies and techno-economic analysis. Idaho National Laboratory. 35. Mao, S., Ye, M. Y., Li, Y., Zhang, J., Zhan, X., Wang, Z., Xu, K., Liu, X., & Li, J. (2019). CFETR integration design platform: Overview and recent progress. Fusion Engineering and Design, 146, 1153–1156. https://doi.org/10.1016/j.fusengdes.2019.02.030 36. Moffat, B. G., Desmulliez, M. P. Y., Brown, K., Desai, C., Flynn, D., & Sutherland, A. (2008, September 1–4). A micro-fabricated current sensor for arc fault detection of aircraft wiring. In 2008 2nd electronics system-integration technology conference (pp. 299–304). https://doi. org/10.1109/ESTC.2008.4684365. 37. Molisch, A. F. (2009, 2009). Ultra-wideband communications: An overview. URSI Radio Science Bulletin, (329), 31–42. https://doi.org/10.23919/URSIRSB.2009.7909730 38. Morales-Rodríguez, M. E., Joshi, P. C., Humphries, J. R., Fuhr, P. L., & McIntyre, T. J. (2018). Fabrication of low-cost surface acoustic wave sensors using direct printing by aerosol inkjet. IEEE Access, 6, 20907–20915. https://doi.org/10.1109/ACCESS.2018.2824118 39. Mosher, S. W., Johnson, S. R., Bevill, A. M., Ibrahim, A. M., Daily, C. 
R., Evans, T. M., Wagner, J. C., Johnson, J. O., & Grove, R. E. (2015, August). ADVANTG―an automated variance reduction parameter generator. ORNL/TM-2013/416 Rev. 1. 40. Nekoogar, F., & Dowla, F. (2018, June 3). Design considerations for secure wireless sensor communication systems in harsh electromagnetic environments of nuclear reactor facilities. Nuclear Technology, 202(2-3), 191–200. https://doi.org/10.1080/00295450.2018.1452418 41. PLEIADES Smarter Plant Decommissioning. Available at: https://pleiades-platform.eu/. Accessed: 04 Jan 2021.
42. Quadros, R. (2019, October). CUBIT, Sandia’s geometry & meshing toolkit. SAND2019-12356C, Sandia National Laboratory. Access as https://www.osti.gov/servlets/ purl/1642808. Accessed: 04 Jan 2021. 43. Radaideh, M. I., et al. (2021, February 1). Physics-informed reinforcement learning optimization of nuclear assembly design. Nuclear Engineering and Design, 372, 110966. https://doi. org/10.1016/j.nucengdes.2020.110966 44. Ray, S., Kucukboyaci, V, Sung, Y., Kersting, P., & Brewster, R. (2018, September). Industry use of CASL tools. Oak Ridge National Laboratory, CASL-U-2019-1739-000. 45. Reed, F. K., Ezell, N. D. B., Ericson, M. N., Britton, J., & Charles, L. (2020, May). Radiation hardened electronics for reactor environments, ORNL/TM-2020/1776. Available at https:// www.osti.gov/biblio/1763473. Accessed: 04 Jan 2022. 46. Rodriguez, G., Casas, J. R., & Villalba, S. (2015). SHM by DOFS in civil engineering: A review. Structural Monitoring and Maintenance, 2(4), 357–382. https://doi.org/10.12989/ SMM.2015.2.4.357 47. Saito, T., Yamashita, J., Ishiwatari, Y., & Oka, Y. (2011). Advances in light water reactor technologies. Springer., ISBN 978-1-4419-7100-5. 48. Scime, L., Sprayberry, M., Collins, D., Singh, A., Joslin, C., Duncan, R., Simpson, J., List, F., III, Carver, K., Huning, A., Haley, J., & Paquit, V. (2021, September). Diagnostic and predictive capabilities of the TCR digital platform. ORNL/TM-2021/2179. 49. Sikirica, A., Čarija, Z., Lučin, I., Grbčić, L., & Kranjčević, L. (2020). Cavitation model calibration using machine learning assisted workflow. Mathematics, 8(12), 2107. [Online]. Available at: https://www.mdpi.com/2227-7390/8/12/2107 50. Smolentsev, S., Morley, N. B., Abdou, M. A., & Malang, S. (2015). Dual-coolant Lead–lithium (DCLL) blanket status and R&D needs. Fusion Engineering and Design, 100, 44–54. 51. Soluch, W. (2008). SAW synchronous multimode resonator with gold electrodes on quartz. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 55(6), 1391–1393. https://doi.org/10.1109/TUFFC.2008.803 52. Sorbom, B., Ball, J., Palmer, T., Mangiarotti, F., Sierchio, J., Bonoli, P., Kasten, C., Sutherland, D., Barnard, H., Haakonsen, C., Goh, J., Sung, C., & Whyte, D. (2015). ARC: A compact, high-field, fusion nuclear science facility and demonstration power plant with demountable magnets. Fusion Engineering and Design, 100, 378–405. 53. Valero, C., Egusquiza, E., Presas, A., Valentin, D., Egusquiza, M., & Bossio, M. (2017, April. 2021-09-18 2017). Condition monitoring of a prototype turbine. Description of the system and main results. Journal of Physics: Conference Series, DOI, 813(1). https://doi. org/10.1088/1742-6596/813/1/012041 54. Weller, H. G., Tabor, G., Jasak, H., & Fureby, C. (1998). A tensorial approach to computational continuum mechanics using object-oriented techniques. Computers in Physics, 12(6). 55. Yadav, V., Zhang, H., Chwasz, C., Gribok, A., Ritter, C., Lybeck, N., Hays, R., Trask, T., Jain, P., Badalassi, V., Ramuhalli, P., Eskins, D., Gascot, R., Ju, D., & Iyengar, R. (2021a, June). The state of technology of application of digital twins. TLR/RES-DE-REB-2021-01. 56. Yadav, V., Sanchez, E., Gribok, A., Chwasz, C., Hays, R., Zhang, H., Lybeck, N., Gascot, R., Eskins, D., Ju, D., & Iyengar, R. (2021b, March, March). Proceedings of the workshop on digital twin applications for advanced nuclear technologies, Dec. 1–4, 2020. RIL 2021-02, US Nuclear Regulatory Commission. Accessed as https://www.nrc.gov/docs/ML21083A132. pdf. 
Accessed: 04 Jan 2022. 57. Yadav, V., Agarwal, V., Gribok, A., Hays, R., Pluth, A., Ritter, C., Zhang, H., Jain, P., Ramuhalli, P., Eskins, D., Carlson, J., Gascot, R., Ulmer, C., & Iyengar, R. (2021c, December). The state of technology of application of digital twins. TLR/RES-DE-REB-2021-17. Accessed as https://www.nrc.gov/docs/ML2136/ML21361A261.pdf. Accessed: 05 Jan 2022. 58. Zhang, Y., Butt, D., & Agarwal, V. (2015). Nanostructured bulk thermoelectric generator for efficient power harvesting for self-powered sensor networks. INL/EXT-15-36260. Available as https://www.osti.gov/biblio/1260882. Accessed: 04 Jan 2021
Dr. David J. Kropaczek of Oak Ridge National Laboratory (ORNL) is the Director for Nuclear Industry Technology Innovation and Deputy National Technical Director for the DOE Nuclear Energy Advanced Modeling and Simulation program. Most recently he was the Director for the Consortium for Advanced Simulation of Light Water Reactors (CASL), a DOE Energy Innovation Hub. He joined ORNL in 2018 after having served as CASL Chief Scientist and Professor of Nuclear Engineering at North Carolina State University, where he continues as Adjunct Faculty. Dr. Kropaczek has over 30 years of experience in the field of nuclear energy with a focus on the development of computational methods in the areas of nuclear fuel cycle optimization, reactor physics, and thermal-hydraulics for both PWRs and BWRs. Previous experience includes positions in both R&D and leadership with Westinghouse, General Electric, and most recently, Studsvik Scandpower as CEO. He is a Fellow of the American Nuclear Society.

Dr. Vittorio Badalassi is a Distinguished R&D Staff Member in the Thermal Hydraulics group of the Reactor and Nuclear Systems Division at Oak Ridge National Laboratory. Vittorio received his PhD (2004) in chemical engineering from the University of California, Santa Barbara, and trained in Innovation and Entrepreneurship Business Management at the Imperial College London Executive Program. He is a Chartered Engineer with international experience and a proven record of success in R&D, industrial consulting, R&T management, and startup funding and operation. He is a recognized expert in modelling and simulation (CFD) and in nuclear thermal hydraulics, with further expertise in the aerospace and chemical sectors. He has worked in numerous technical positions for R&D institutions (Royal Society Industry Fellow at Imperial College/Rolls-Royce), in industry (Chief Nuclear Safety Engineer at the PALLAS reactor), and in startups (Founder and CTO of his company in the UK). He has sourced R&D funding, managed multi-cultural engineering teams in both industry and academia, and had direct exposure to senior executives in corporations. He is the author of highly cited papers and patents and the principal investigator of the ARPA-E GAMOW (2021) project "Fusion Energy Reactor Models Integrator" (FERMI).
Dr. Prashant K. Jain is a Group Leader, Supervisor, and Senior R&D Staff member for Thermal-Hydraulics research in the Nuclear Energy & Fuel Cycle Division at Oak Ridge National Laboratory (ORNL). Prashant received his MS (2006) and PhD (2010) in nuclear engineering from the University of Illinois, Urbana-Champaign, and his BTech (2004) in mechanical engineering from the Indian Institute of Technology, Bombay. He has over 15 years of experience in nuclear thermal design and safety analyses, computational fluid dynamics, single- and two-phase turbulent flows and heat transfer, advanced multi-physics modeling, analytical benchmarks, the lattice Boltzmann method, and parallel scientific software development. Prashant is a recipient of the American Nuclear Society (ANS) Mark Mills Award for his doctoral research on lattice Boltzmann methods and of UT-Battelle's 2019 Mission Support Award for his contributions towards the HFIR event causal analysis.
Pradeep Ramuhalli is a Distinguished R&D Staff Member and the group lead for the Modern Nuclear Instrumentation and Control (I&C) group at Oak Ridge National Laboratory (ORNL) in the Nuclear Energy and Fuel Cycle Division. Prior to joining ORNL, he was a Senior Research Scientist at Pacific Northwest National Laboratory (PNNL) and was previously a faculty member in the Department of Electrical and Computer Engineering at Michigan State University, East Lansing. He received the PhD degree in electrical engineering from Iowa State University, Ames, in 2002. Dr. Ramuhalli's research is in systems resilience and reliability and lies at the intersection of sensing, data science, and decision science. Currently, he is developing sensors and algorithms for continuous online monitoring and diagnosis of system health, and physics-informed machine learning algorithms for virtual sensing and prognostic assessment of system and component health. His research activities are enabling the development and application of digital twins and related digitalization technologies for power generation systems (conventional and advanced nuclear power, hydropower, and renewables), leading to life extension, operations and maintenance practice optimization, and economic operation of power plants, and increasing the cyberphysical resilience of complex systems. He has served (or is serving) as PI on projects funded by multiple agencies, and has co-edited a book on integrated vision and imaging techniques for industrial inspection. He has authored or co-authored 4 book chapters, over 175 technical publications in peer-reviewed journals and conferences (including over 35 peer-reviewed journal publications), over 90 technical research reports, and over 100 invited talks. Dr. Ramuhalli was the General Chair of the 2021 ANS Topical Meeting on Nuclear Plant Instrumentation, Control, and Human-Machine Interface Technologies (NPIC&HMIT 2021) and a General Co-Chair for the AI for Robust Engineering and Science (AIRES) 3 workshop. Dr. Ramuhalli is a senior member of IEEE and a member of ANS.
Dr. W. David “Dave” Pointer is the Section Head, Advanced Reactor Engineering and Development at Oak Ridge National Laboratory. Dr. Pointer obtained his PhD in Nuclear Engineering at the University of Tennessee in 2001. He has more than 20 years of experience in the design and safety of nuclear systems, including conventional light water reactors, advanced reactors using gas, liquid metal or molten salt coolants, and high-power accelerator systems. His work has focused on the advancement and qualification of modeling and simulation tools and the acceptance of the use of contemporary high-resolution methods running on high performance supercomputers in these applications. He leads a team of more than 60 scientists and engineers who are focused on R&D activities supporting accelerated deployment of advanced nuclear energy system technologies. He is a past chair of the American Nuclear Society’s Thermal Hydraulics Division and a past president of the North American Young Generation in Nuclear.
Digital Twin for Healthcare and Lifesciences Patrick Johnson, Steven Levine, Cécile Bonnard, Katja Schuerer, Nicolas Pécuchet, Nicolas Gazères, and Karl D’Souza
Abstract Health Digital Twins (HDT) can bring a decisive contribution to personalized, precise, successful medical treatments. They will rely on combining the latest fundamental knowledge from research with a patient’s exact history and unique physiology. First use cases of HDT have been developed in the last decade in specific domains, with the examples of Living Heart and Living Brain. They will extend to other domains, such as cell models, microbiota or patient cohorts, and support multi-discipline and multi-scale platforms. HDT will soon cover all medical disciplines and the whole patient journey, powering the next-generation medical practices, precision medicine and surgery, and allowing for improved patient autonomy – towards better health for all. Keywords Digital health · Medical practice · Patient data · Real world evidence · Healthcare innovation
P. Johnson (*) Dassault Systèmes, Research & Sciences, Vélizy-Villacoublay, France e-mail: [email protected] S. Levine Dassault Systèmes, Virtual Human Modeling, San Diego, CA, USA C. Bonnard Dassault Systèmes, Living Twin for Practitioner Technology, Vélizy-Villacoublay, France K. Schuerer Dassault Systèmes, Virtual Twin of Human, Biot, France N. Pécuchet · N. Gazères Dassault Systèmes, Virtual Twin of Human Technology, Vélizy-Villacoublay, France K. D’Souza Dassault Systèmes, Simulia Life Sciences & Healthcare Process Expert, Johnston, RI, USA © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_32
1 Introduction

"The right treatment, for the right person at the right time." This ideal is shared by the whole healthcare community. But with massive costs and limited access to cutting-edge technology, it's still more dream than reality. That is poised to change.

Modern healthcare has reached a crossroads. The era of consensus medicine has served us well, but has plateaued and actually become a barrier to a more powerful medical paradigm. Clinical evidence alone can only go so far, as it fails to capture the uniqueness of each patient or track the subtleties of chronic conditions. Moreover, it relies on diagnosis and treatment after disease has reached a progressed state, which is the most reliable, yet also the most costly, approach. In the twenty-first century, we can and should do better.

A new level of healthcare can be achieved if we create a digital medical system with a detailed view of each patient's anatomy, biology and relevant life exposure in the context of all we know about their condition. The system must connect fundamental knowledge gained in research with observational insights from bedside care, and close the distance between these approaches. A new medical paradigm will allow this. It implies a stronger use of virtual universes throughout the patient journey, combined with new therapeutic solutions enabled by bio-production. Digital twins will capture the uniqueness of each patient, transforming the massive ability to capture data into operational insights available to healthcare professionals. Sharing within a care team through digital platforms becomes a day-to-day experience. Connecting to fundamental knowledge from research and extensive data from clinical trials will enhance diagnosis and medical decisions. Healthcare Digital Twins will become the integrative reference of personal health information. Created with software that conforms to known scientific principles, the twin functions as a complete dynamic "living" model, calibrated with real-world information, enabling new understanding and approaches to health diagnosis, prognosis, treatments and anticipation of future conditions. Not only does the virtual twin serve as a 3D reconstruction of a human body and its systems, it also contains genetic code and other fundamental biomarkers. This creates a holistic, integrative representation of all facets of an individual's health that over time is tuned by medical history and environmental exposures. As our understanding of human biology, physiology, biomechanics and pharmacology improves, virtual twins will become more precise, predictable and usable. This advancement delivers on the promise of patient-centric digital health and transforms all aspects of the current healthcare system. The breakthroughs that manufacturing industries enjoyed thanks to virtual twins of traditional products can now be replicated in the healthcare sector. The question is no longer 'is it possible', but rather, 'when will it happen and who will deliver it?'
2 Creating a Virtual Twin of the Human Body The human body is the result of hundreds of thousands of years of continuous evolution. Despite a changing external environment, the basic functions of the body have not changed. The architecture of a human is composed of elements and
molecules organized in organelles, cells and tissues delivering physiological functions at a macro scale. Unlike man-made machines which are version controlled and offer a design blueprint (which is the general situation for the rest of the current book on digital twins), when there is a problem with the body, there is no functional diagram to consult to deduce cause-and-effect relationships. What we understand is largely the result of deconstructing complex systems into smaller, functional parts and running trial-and-error analysis of their behavior. This knowledge had allowed us to make incredible medical advances. However, we have reached the limits of traditional approaches. We now face unsustainable hurdles to continue to more fully understand – and ultimately optimize – the human body. How much do we understand about the human body? We know that DNA encodes our inherited genetics and adapts to meet ever-changing conditions, sending out proteins as messenger molecules to do the necessary work within our cells. But while we can now readily identify the body’s coding, we still don’t understand its complex language. The answers lie inside of us, and we now have the technology to decipher them. Health data is being collected at a rate unprecedented in all of human history, revealing clue after clue about the human body. But each clue is of limited value without context. If the global scientific community combines their research and understanding, can we gain new insights? Can we create a virtual representation of a human body to highlight gaps in our knowledge? Could we develop functioning models to interpret, correlate and aggregate new data and identify where to focus research? Most importantly, could we apply each individual’s attributes and history to adapt this virtual human into a digital surrogate to reconcile medical uncertainties and find the optimized approach for each person? Achieving precision medicine is the fundamental challenge for healthcare of the twenty-first century. This is the experiment Dassault Systèmes began with the Living Heart project.
3 The Reason to Believe: The Living Heart Project

3.1 A Virtual Medical Reference

In 2014, the Living Heart Project (LHP) was launched to test the hypothesis that cardiovascular experts around the world from academia, industry and clinical practice already had sufficient knowledge to create a fully functioning 3D model of the human heart but were not sufficiently connected to accomplish this. The experiment to properly combine their information was successful, and the group published the details of a complete virtual twin of a human heart, built from a basic understanding of heart tissue, structure and electrophysiology, and completely adaptable to mimic an individual person or a population. This virtual medical reference is now in use around the world, helping to unravel structural and hemodynamic heart disease. With each use, strengths and weaknesses of the model are identified and published, fueling new experts to join the project and pushing the learning and the technology further. The LHP demonstrated that by using the real-world experience of medical practitioners as the basis, computational biologists and biomedical engineers can
replicate the iterative process necessary to produce functional organs in the virtual world. The project delivered a reference model of a functioning beating heart that can reproduce any cardiovascular condition and safely test treatment options. Because it is virtual, it can be interrogated, shared and visualized to tell the complete story of its function and likely outcomes. Serving as an open reference platform, it centralizes technical details and knowledge that were once fragmented and isolated, making them actionable and sharable. This was more than a useful computational model: the virtual human twin was born. In an important signal to the industry, the US Food and Drug Administration (US FDA) has served as a key participant in the LHP since its outset. It is now using the technology in the ENRICHMENT project, where the Living Heart represents a cohort of human patients with mitral valve disease. This first-of-a-kind in silico clinical trial of a medical device using a digital review process proves we can safely unleash innovation by reducing dependency on the costly, time-consuming animal and human studies required for clinical trials. It can also create synthetic control arms, eliminating the need for patients to receive only a placebo. Dassault Systèmes' Medidata Rave platform supports this approach by applying AI to data accumulated in previous clinical trials for pharmaceuticals. Combined, the LHP and the platform set the stage not only to accelerate time to market, but to serve as a data source for precision medicine.
3.2 Enabling Clinical Decisions with 3D Experiences

Visualization is crucial in healthcare diagnostics. 2D imaging provides essential visual detail to guide clinical treatment, offering a quantitative analysis of system dynamics that allows clinicians to utilize their expertise to make decisions on how to treat a patient. By leveraging breakthrough 3D virtual twin technology, scientists have the power to test and simulate scenarios to predict real-world outcomes. Such a "3D experience" creates the ability to safely push the boundaries of innovation and learning, while minimizing undesirable impacts. By reconstructing a patient's heart in 3D, for example, doctors can virtually try out invasive measures that consider that specific patient's anatomy, such as testing the pressure differential across a stenosis to be replaced. It also makes it possible to deliver directly to a radiologist a full blood flow analysis of that person's entire coronary vascular system, showing the impact of each interventional option. These insights can not only help guide the right treatment, but also reveal information that could allow clinicians to lower re-hospitalization rates and sudden deaths.
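The pressure differential across a stenosis mentioned above is, in clinical practice, often estimated from Doppler velocity measurements with the simplified Bernoulli relation (ΔP ≈ 4v², with ΔP in mmHg and v in m/s). The sketch below illustrates only that textbook relation; it is not the blood-flow solver used in the Living Heart Project.

```python
def bernoulli_gradient(peak_velocity_m_s: float, proximal_velocity_m_s: float = 0.0) -> float:
    """Estimate the peak pressure drop (mmHg) across a stenosis from Doppler velocities.

    Uses the simplified Bernoulli relation dP = 4*(v2**2 - v1**2), with velocities in m/s;
    the proximal velocity is often neglected when it is small.
    """
    return 4.0 * (peak_velocity_m_s ** 2 - proximal_velocity_m_s ** 2)


# Example: a 4 m/s jet corresponds to roughly a 64 mmHg peak gradient.
print(bernoulli_gradient(4.0))  # 64.0
```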
Fig. 1 Parents of a child with a ventricular septal defect are helped to understand his upcoming surgery through a 3D model of his heart on a holographic 3D screen. (Image © Boston Children’s Hospital)
3.3 Helping Underserved Patient Populations

Eight million children each year are born with some kind of birth defect. Nearly half will die by age five. For economic and ethical reasons, the pharmaceutical and medical device industries are unable to properly serve them. Clinicians typically adapt adult treatments or experiment on their young patients with wide variations in success. Premier medical centers have the volume and resources to develop reliable treatments for children, but due to limited knowledge sharing, many young patients cannot be helped even though the knowledge is out there. The LHP is an example of how information can be shared to help treat patients more precisely (Fig. 1). While not yet commonplace, these virtual twins of the heart are saving lives in Europe and in the US, and are in development in Asia. For example, in one of the leading pediatric cardiovascular groups at Boston Children's Hospital, Dr. David Hoganson works closely with a Harvard bioengineering team led by Peter Hammer, and together they've shown that computer models of complex cardiovascular reconstructions can be more effective in determining the best treatment than expert opinions. Online communities can be established to share best practices, giving clinicians from around the world the opportunity to quickly learn from what the leaders in their field have experienced, helping them to address their most challenging cases and successfully plan protocols, test surgical options and techniques and predict outcomes for their patients. Using virtual twins to address these underserved populations signals the future of precision medicine, where each patient is a cohort of one. Heart disease and stroke cost the world more than $3B per day; imagine what could be saved if only a fraction were channeled into building online collaborative communities instead (Fig. 2).
Fig. 2 A 3D cardiovascular reconstruction. (Image © Boston Children’s Hospital)
4 Reaching a New Step with the Brain

The success of the Living Heart Project demonstrated the power of the virtual twin to combine and apply cross-disciplinary experience. The rapid pace of technical achievements confirmed the theory that establishing a collective understanding provides far greater benefit than relying on what any one individual can know. With that realization, the door opened to virtually model other organs and systems in the body, and the Living Brain project was launched. Having a virtual twin of the brain holds great possibilities in neurology, where the level of uncertainty is high and the impact of error can be catastrophic. Indeed, modeling and simulating the brain could provide critical guidance for mechanical or electrophysiological interventions, and one day at the level of brain function itself. Using high-quality imaging, 3D reconstructions can reliably reproduce the physical structure of the head, neck and brain. Rigid structures (skull, vertebrae) are the most reliable, but visualizing soft tissue (gray matter, white matter fibers (tractography), the spine) and fluids (sinuses, ventricles, cerebrospinal fluid) is also needed to get a complete understanding of the brain's structure. A model of a brain must encompass both the physical anatomy and the behavioral activity, which is defined by electric current flow. A virtual twin can create understanding of the brain during different experiences, recreating activity inside the skull such as during sport, automobile accidents, work-related injuries or epileptic seizures. It can also help us better understand brain development. For example, while we know that many people accumulate brain lesions throughout life, we don't know enough about their impact. As we become more able to track the physical health of the brain, we may better understand and predict behavioral changes or guide treatments when the cause is seemingly unknown. Critically, a virtual twin can report information in a standard way, often more reliably than the patient, who paradoxically becomes decreasingly reliable as getting information from and about them becomes increasingly important. Developing a better understanding of the brain's 3D structural layout could help identify early development of a
neurodegenerative disease. For example, if we measure the volume of cortical or subcortical brain regions over years, clinicians will have a reference point to understand their individual patient's connectome, a comprehensive map of neural connections in the brain. Knowing that these leading indicators can be discovered could encourage patients to seek care early and could help them and their medical team select the best care plan for their precise situation. Properly managed, virtual twins of the brain can become essential references for the industry to develop new and more precise diagnoses and treatments, increasingly using virtual patients as fidelity improves.
4.1 Testing the Living Brain: Epilepsy Research

In acute brain diseases such as epilepsy, personalized virtual twins are already helping to guide accurate diagnosis and treatment. At the earliest onset of seizure symptoms, the complex signals from the brain can be captured, diagnosed and tracked as care is administered. In the 30% of epilepsy patients who are drug-resistant, a localized part of the brain's tissue is defective and responsible for seizures. Precisely targeted brain surgery can be an effective therapeutic strategy. The patient's virtual twin can help to ensure the region to resect is well identified and precisely delineated, allowing the removal of damage while preserving critical functions of neighboring regions. This is the goal of one of the programs of the Living Brain project, EPINOV: a public-private partnership coordinated by Aix-Marseille University and composed of Dassault Systèmes, the Assistance Publique-Hôpitaux de Marseille, the Hospices Civils de Lyon and the French Institute of Health and Medical Research (Inserm). After learning of the success of the LHP, clinicians and renowned neurologists came to Dassault Systèmes to help accomplish their goal of improving epilepsy surgery management and prognosis by modeling each patient's brain in 3D. Since 2019, a clinical trial has been using virtual twins of the brain constructed from patient imaging and electrophysiological data, calibrated to the typical seizures of a patient. Bringing together the 3D anatomy of each patient's brain with its connectomic structure, and combining detailed electrophysiological models of brain regions (neural mass models) with high-dimensional data fitting algorithms, has the potential to raise the success rate of surgery for these patients. Further, data from the clinical trial will validate the added value of the virtual brain technology to surgical decisions or point toward improvements (Fig. 3).
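As a rough illustration of why the patient-specific connectome matters when delineating the epileptogenic zone, the toy model below propagates seizure activity over a weighted connectivity matrix with a simple threshold rule. It is purely schematic: the EPINOV work relies on detailed neural mass models fitted to each patient's data, which this sketch does not attempt to reproduce.

```python
import numpy as np


def propagate_seizure(connectome: np.ndarray, seed_nodes: list[int],
                      threshold: float = 0.3, steps: int = 10) -> np.ndarray:
    """Toy threshold model: a region is recruited when its weighted input from
    already-seizing regions exceeds `threshold`. Returns the step at which each
    region was recruited (-1 if never)."""
    n = connectome.shape[0]
    recruited_at = np.full(n, -1)
    active = np.zeros(n, dtype=bool)
    active[seed_nodes] = True
    recruited_at[seed_nodes] = 0
    for t in range(1, steps + 1):
        drive = connectome @ active.astype(float)   # weighted input from seizing regions
        newly = (drive > threshold) & ~active
        recruited_at[newly] = t
        active |= newly
    return recruited_at


# Example with a random 5-region connectome and a single seed region.
rng = np.random.default_rng(0)
C = rng.uniform(0, 0.4, size=(5, 5))
np.fill_diagonal(C, 0.0)
print(propagate_seizure(C, seed_nodes=[2]))
```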
5 Expanding to New Horizons With the successes and lessons learned from leveraging the virtual world to extend and improve the real world for the heart and the brain, advances in using twins to model other parts of the human body are also underway. While the idea of having
Fig. 3 Illustration from Living Brain project
virtual twins of all organs such as the lungs and kidney and body parts such as feet appear in scientific literature almost daily, here we will focus on three nascent uses for virtual human twins: skin, cells and the gut. Living Skin: From Biologics Testing to Healing Human testing is the last stop for any new medical innovation. Most testing begins at the bench or in animals, never seeing a human environment until late in the cycle. Both public opposition and ethical considerations, however, have limited animal testing to only situations when it is absolutely necessary. Animal testing is already banned for the majority of development and testing of cosmetics, which must be compatible with human skin. As the largest organ, skin provides many essential functions: from shielding against external aggressions by pathogens, chemicals and radiation to thermoregulation and serving as the alert system against harm. To serve these functions, skin contains many different cell types, organized into multiple layers and distinct structures that collaborate together. Moving away from live animal models, petri dishes of laboratory-grown in vitro skin samples are now the norm to test cosmetic products. However, in silico skin models are maturing and are poised to take over. While still early in development, these models offer many potential advantages, including eventually becoming part of a virtual twin. And it is not only consumer companies who are set to benefit from these advances: the pharmaceutical industry is taking note, since for non-oral treatments such as biologics, access to the body through the skin is critical. Dassault Systèmes develops solutions that provide a multiscale model of skin, based on its fundamental molecular properties, and that can accurately predict the penetration of chemicals through skin layers. Virtual models can reflect individual skin types and properties such as age, levels of hydration or damage due to exposure, and can test formulations of cosmetic creams and lotions. The models are also used to design predictable drug delivery systems for painless injections, secure wearables and eventually injectables that correct for an individual’s shape, size, body chemistry and mobility. These same delivery systems will
also monitor dosage, seamlessly feeding this information back to a person’s unique virtual twin for accurate clinical tracking. Even more complex models of skin are being used to understand the dynamic process of wound healing, a major burden on the healthcare system. An aging population and rising incidence of diabetes are among the major drivers behind a rise in wound care, which is expected to reach USD 16.5 billion by 2025, up from USD 10.3 billion in 2020. In particular, deep chronic wounds such as diabetic foot and venous ulcers are painful, slow-healing conditions and it is not always clear which available interventions and treatments are best for a given patient. Creating a virtual twin of an individual with a wound allows doctors to develop a personalized prognosis and test different treatment options. If the outcome doesn’t go as hoped, that information can also be fed back into the twin to study the body’s healing process.
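As a hedged illustration of the kind of estimate an in silico skin model might start from, the sketch below combines the widely cited Potts-Guy regression for the skin permeability coefficient with a steady-state flux calculation. The parameters are generic, and this is not Dassault Systèmes' multiscale skin model.

```python
def potts_guy_kp(log_kow: float, mol_weight: float) -> float:
    """Approximate skin permeability coefficient kp (cm/h) from the Potts-Guy
    regression: log10 kp ~= -2.7 + 0.71*logKow - 0.0061*MW (an empirical QSPR fit)."""
    return 10 ** (-2.7 + 0.71 * log_kow - 0.0061 * mol_weight)


def steady_state_flux(kp_cm_h: float, donor_conc_mg_cm3: float) -> float:
    """Steady-state flux (mg per cm^2 per hour) across skin for a dilute donor solution."""
    return kp_cm_h * donor_conc_mg_cm3


# Example: a caffeine-like solute (logKow ~ -0.07, MW ~ 194) applied at 10 mg/cm^3.
kp = potts_guy_kp(-0.07, 194.0)
print(kp, steady_state_flux(kp, 10.0))
```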
5.1 Living Cells: Stopping Diseases at Their Source

The myriad of cells in the body are ultimately responsible for its functions. Modeling can provide critical insights by reproducing cells that can in turn be studied for a variety of diseases. While we are not yet at the stage of being able to map all cells and personalize them in a virtual twin, we can model some cells that are known to directly impact patient health. Cancer cells, for example, can be modeled to reproduce key behaviors such as disease evolution and treatment efficacy, and virtualized through the concept of intracellular pathways, where a healthy cell is represented by an equilibrium of chained molecular reactions. Each pathway is controlled by cellular functions, such as growth and division. Pathways that are deregulated in cancer have been extensively described, perhaps most notably in the book 'The Biology of Cancer' by Massachusetts Institute of Technology professor Robert Weinberg. The 3DS Cancer Map was developed based on this research, offering a logical system view of all biological entities (genes, proteins) and interactions that are most important in human cancer. This allows the scientific community to visualize and learn from:
• oncogenes (entities responsible for cancer initiation and progression),
• tumor suppressors (entities that prevent cancer),
• major cancer hallmarks (subsystems that can become dysfunctional and lead to cancer).
Virtual cells can become part of the virtual twin of the body and be used to reproduce likely biochemical alterations and help target the right drug treatments. With a virtual twin, trial-and-error treatment of diseases can happen in the lab before experimenting on a patient. Cell automata can take into account heterogeneous cell populations and reproduce, for instance, the genetic diversity observed in cancer. At each cell division, genetic modifications may occur with a defined probability. For example, intra-tumor heterogeneity can be used to predict targeted cancer treatments depending on the level of HER2 gene amplification observed in different breast cancer patients, and treatment combinations can be tested through the virtual twin model of an individual (Fig. 4).
Fig. 4 Cancer cell with pathways
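A toy version of the cell-automaton idea described above, in which each division may add a HER2 gene copy with a fixed probability so that heterogeneous subclones emerge, could look like the following. The probabilities and starting population are invented for illustration; this is not the 3DS Cancer Map.

```python
import random


def grow_population(generations: int = 10, p_mutation: float = 0.01,
                    seed: int = 1) -> dict[int, int]:
    """Simulate clonal growth where every division can add one HER2 copy with
    probability p_mutation. Returns counts of cells per HER2 copy number."""
    random.seed(seed)
    cells = [2] * 10                       # start with 10 cells at the normal diploid copy number
    for _ in range(generations):
        daughters = []
        for copies in cells:
            for _ in range(2):             # each cell divides into two daughters
                daughters.append(copies + 1 if random.random() < p_mutation else copies)
        cells = daughters
    counts: dict[int, int] = {}
    for c in cells:
        counts[c] = counts.get(c, 0) + 1
    return counts


print(grow_population())  # e.g. {2: ..., 3: ...}: a mostly diploid population with emerging subclones
```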
Over the last 3 years, a similar model has been built in a collaboration between Dassault Systèmes and scientists at the French Institute of Health and Medical Research (Inserm) describing key pathways in cellular aging. Aging is a major risk factor for many chronic and neurodegenerative diseases. As the human life span increases, understanding the impact and the underlying mechanisms of aging on the onset and evolution of these diseases is important to act preventively and help people age in better condition. Cellular aging is a protection mechanism that prevents cells from becoming cancerous. But as aging cells induce their neighbors to prematurely age too, the body cannot keep up with cleaning them all. Cleaning up prematurely aged cells as a treatment is under investigation in atherosclerosis, and a virtual twin can be deployed to study the impact and test treatment options. An additional advantage in using virtual twins of cells is to advance the development of personalized drugs. Connecting cellular-level function with organ- or system-level behavior is the missing link in this pursuit. The Living Heart Project took this on, modeling drug interactions that could alter organ-level function. Models of the individual action potentials responsible for voltage changes across cell membranes were substituted for the phenomenological electrical model. By mapping the cells into the whole heart, researchers could predict when an irregular heart rate would occur and at what point circulation of blood through the body is compromised. This allows dosage sensitivity to be robustly mapped across a virtual patient population, providing data for AI models that predict safety based on chemical function, rather than waiting for clinical observation.
5.2 Living Microbiota: Virtual Gut Health

Many diseases, and in particular chronic ones, do not develop only due to genetic predisposition: environmental factors highly influence their course and severity. Eating and living habits, pollution, past diseases and medications taken shape the
community of microbes living in each person's gut. The gut microbiota community integrates all environmental influences, making it a foe when it resists treatment, or a friend when it helps defeat diseases and promotes healing. Because the microbiota is highly diverse from person to person, creating a virtual twin of it holds great promise in medicine, since both genetics and microbiota play a part in disease. Over the last 3 years, Dassault Systèmes and Inserm have collaborated to integrate existing knowledge of the interaction between the gut microbiota and the human body into a physiologically based kinetic (PBK) computational model describing the cycling of some key metabolites throughout several organs of the digestive system. Inserm scientists use this model in their quest to better understand the role of the microbiota in human health. This modeling enriches the virtual human twin by introducing a functional, physiological layer, completing the far more advanced 3D physics, fluidics and electrical parts. Personalizing a fully functional virtual twin requires substantial innovation in minimally invasive technologies able to measure critical metabolites and microbiota community composition over time, as well as leveraging AI and deep learning technologies to fit patient data. Achieving this could open an entirely new way to support personalized diagnoses and prognoses, predict responses to drugs and preventively apply nutritional interventions.
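A deliberately minimal PBK-style sketch of one microbial metabolite cycling from the gut lumen through the liver into systemic circulation is shown below. The three compartments and all rate constants are invented for illustration and are far simpler than the Inserm and Dassault Systèmes model.

```python
import numpy as np
from scipy.integrate import solve_ivp


def pbk_rhs(t, y, prod=1.0, k_abs=0.5, k_out=1.0, E=0.6, k_el=0.2):
    """Toy PBK model of one microbial metabolite.
    y = [gut lumen, liver, systemic blood] amounts (arbitrary units); rates per hour."""
    gut, liver, systemic = y
    d_gut = prod - k_abs * gut                        # microbial production minus uptake across the gut wall
    d_liver = k_abs * gut - k_out * liver             # delivery to the liver, then outflow
    d_sys = (1 - E) * k_out * liver - k_el * systemic # fraction E is metabolized; the rest reaches circulation
    return [d_gut, d_liver, d_sys]


sol = solve_ivp(pbk_rhs, t_span=(0, 24), y0=[0.0, 0.0, 0.0], t_eval=np.linspace(0, 24, 25))
print(sol.y[2])   # systemic exposure over 24 hours
```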
6 Virtual Twins in Action

6.1 Population & Disease Models

Over time, the collection of virtual twins will behave as a virtual clinic that represents the entire population. A virtual twin is descriptive of its real equivalent, but it can also be predictive. An effective way to model the predicted response to treatment is with pharmacokinetic/pharmacodynamic (PKPD) models. These models represent "what a body does to a drug" and "what a drug does to a body." When a drug is administered, a body will absorb, distribute, metabolize and eliminate it. The efficacy of each step is different for each person, depending, in particular, on the phenotypic expression of their genes. PKPD models are key to developing a new drug. However, they generally are not refined enough to be able to predict a specific patient's response, partially because the phenotypic expressions of genes are not available for each patient. To improve the predictive power of PKPD models, several approaches have been developed. One approach uses quantitative systems pharmacology models built on the underlying physiology that integrate diverse data from micro-scale to macro-scale, considering the body as a unique, complex, biological network. The mechanistic consideration of the processes underlying drug absorption, distribution, metabolism, excretion and action makes it possible to study the behavior of drugs in more realistic models, such as the combined effect of multiple drugs with polypharmacology. Quantitative systems pharmacology models can
also explicitly introduce variability terms (e.g., inter-patient variability) to model and predict a population's response to a treatment. Combining mechanistic and population-based approaches will generate virtual populations whose variability is equivalent to that of real populations. These virtual populations allow for the exploration of observed uncertainty, to explain the causes of inter-patient variability. This means that defining the similarity of a patient to a population makes it possible to predict how that patient will react to a drug. Virtual twins of patient populations are expected to soon become part of the evidence supporting preclinical and clinical evaluations of new treatments. Currently, randomization is the gold-standard method to generate clinical evidence. However, its feasibility is limited when it comes to rare diseases or to personalized medicine.
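The sketch below shows how inter-patient variability terms of the kind mentioned above can be layered onto a classic one-compartment oral PK model (the Bateman equation) to produce a virtual population. All parameter values are illustrative and do not come from any specific drug or platform.

```python
import numpy as np


def concentration(t, dose=100.0, F=0.9, ka=1.2, CL=5.0, V=40.0):
    """One-compartment oral absorption model (Bateman equation).
    t in hours, dose in mg, CL in L/h, V in L; returns concentration in mg/L."""
    ke = CL / V
    return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))


# Virtual population: sample clearance and absorption rate log-normally around typical values.
rng = np.random.default_rng(42)
t = np.linspace(0, 24, 49)
profiles = [concentration(t, CL=5.0 * rng.lognormal(0, 0.3), ka=1.2 * rng.lognormal(0, 0.2))
            for _ in range(200)]
cmax = [p.max() for p in profiles]
print(f"median Cmax = {np.median(cmax):.2f} mg/L, 95th percentile = {np.percentile(cmax, 95):.2f} mg/L")
```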
6.2 Cohort Models

A Synthetic Control Arm is a modeling method that uses historical claims and observational data to approximate the effect of randomization by comparing patient populations across different clinical trials. For any new experimental arm, a specific control arm is created using patient data from selected clinical trials. Synthetic Control Arms accelerate the approval of new drugs by combining virtual and clinical evidence, avoiding large randomized clinical trials in situations where innovation is most needed. By merging personalized models with models defined at the population level, along with machine learning, the virtual twin will help patients get the best available treatment for them.
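Schematically, the matching step behind a synthetic control arm can be pictured as selecting, for each patient in the experimental arm, the most similar historical patient, for example by nearest-neighbour propensity-score matching as sketched below. Real implementations involve far more rigorous covariate balancing, outcome adjustment, and regulatory review; the covariates and data here are entirely synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
# Hypothetical baseline covariates (age, biomarker) for a new experimental arm and
# for a pool of historical patients drawn from earlier trials.
experimental = rng.normal([62, 1.4], [8, 0.3], size=(50, 2))
historical = rng.normal([58, 1.1], [12, 0.5], size=(500, 2))

X = np.vstack([experimental, historical])
y = np.array([1] * len(experimental) + [0] * len(historical))    # 1 = experimental arm
ps = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]        # propensity scores

# For each experimental patient, pick the historical patient with the closest score.
exp_ps, hist_ps = ps[: len(experimental)], ps[len(experimental):]
matches = [int(np.argmin(np.abs(hist_ps - p))) for p in exp_ps]
synthetic_control = historical[matches]
print(synthetic_control.shape)   # (50, 2): one matched control per experimental patient
```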
6.3 Uniting Virtual + Real to Go Beyond Health Records A critical aspect of successfully creating a reliable digital healthcare system is having complete and consistent information, to support every step: from day-to-day monitoring to diagnosis to surgical planning. A complete representation of the condition and its supporting data must be available to the entire patient care team, in a form suitable for each team member. At the center of that team is the patient, their family or their surrogate. Point of care becomes dynamic and often location-free. The patient may be at home, with the clinician in their office or with the surgeon in the hospital. Regardless, the information should be always available. The virtual twin serves as a next-generation functional health representation, allowing for a common understanding, efficient sharing and data-driven care to reduce preventable errors. This is only possible if the medical systems communicate through the use of standards on a cloud-based platform, which helps reduce inconsistencies introduced by separate tools and reliably drive surgical robots for maximum precision (Fig. 5).
Fig. 5 Example of reconstruction. (Digital Orthopaedics)
7 Day-to-Day Healthy Life with Digital Twins
The Internet of Experiences (IoE) connects experiences worldwide, making them accessible everywhere and anytime. This will enable a shift to remote care and monitoring, leading to more proactive therapeutic solutions with personalized recommendations. For example, healthcare-related smart home devices are designed to track and manage health at home, allowing savings in healthcare expenditure. A home health network can include services that track vital signs, sleep quality, and other health parameters via wearables, sensors, and devices, as well as telehealth, which includes information services, education, and care delivery. Wearables will not only be used for continuous monitoring of health; they will also serve as treatment dispensers. The IoE will reshape the care delivery experience through ambulatory care, telehealth, wearable devices that monitor vital signs, at-home drug delivery devices reducing in-hospital treatments, and a wide panel of online services around prevention and behavior change. Citizens will increasingly be empowered to monitor and manage their own health, reaching a new level of autonomy and harmony in their relationship with their body.
7.1 Personal and Collective Data Intelligence
Security and privacy of health information are a top priority. Regulations on personal health data will be progressively harmonized worldwide. As the patient is positioned at the core of their own health journey, the right to access and control personal data becomes more crucial than ever. At the same time, healthcare stakeholders require increased sharing of health data to build collaborative intelligence and to expand their understanding of healthcare activities. Data is shifting from the
care of an individual to the care of a population and offers new opportunities for service and quality improvements. A data-enhanced platform of care enables siloed data sources to be integrated and contextualized within the health environment. Platforms, therefore, catalyze collaboration amongst diverse stakeholders and allow the setup of a human data patrimony in every country. Different approaches have been undertaken to collect patient data at the scale of a population. In Denmark, for instance, the entire country is a cohort scrutinized through the integration of health information sources from claims, electronic health records, or genomic analysis [10]. In the US, the largest-ever cohort, called "All of Us", has been launched to gather data about more than one million people, in order to explore the potential of precision medicine while taking into account individual differences in lifestyle, environment, and biology. Anonymization of data has been a key enabler for data sharing and will contribute to opening the data economy. There are no commonly accepted data sharing standards at this stage, although these will be required to build the needed trust at a societal level. A first meaningful step in this direction has been made in Europe with the European General Data Protection Regulation, which frames the definition of anonymization to reduce the risk of re-identification. Technical solutions, such as blockchain, exist but are not sufficient by themselves to build trust. New processes and institutional approaches are needed to allow the sharing of highly sensitive data. While models of data sharing have yet to be developed, collective data intelligence will become a cornerstone for continuous learning and an improving healthcare system.
7.2 Healthy Living and Quality of Life
In a fast-growing technology era, quality of life is the most important benefit citizens expect from healthcare technology breakthroughs. Health is a highly precious state of life, which enables individuals to fulfill themselves, unlimited by anything but their will and environment. Maintenance of health is a costly pursuit, as healthcare spending is projected to reach over US$10 trillion, nearly 10% of global GDP, by 2022 [2]. A swift upward trajectory in global health spending is particularly noticeable in low- and middle-income countries, where health spending is currently growing, on average, by 6% annually compared with 4% in high-income countries. A series of innovations have driven better health for people, including hygiene, infectious disease prevention, precision diagnostics, therapeutic devices, biological pharmaceutical compounds, and minimally invasive surgical procedures. However, chronic diseases have never been so common. Globally, the number of people living with diabetes has risen from 108 million in 1980 to 422 million in 2014 and is now rising even more rapidly in low- to middle-income countries. Cardiovascular diseases are the number one cause of death: 17.9 million people die annually from cardiovascular diseases, representing 31% of all deaths globally, and over three-quarters of these deaths occur in low- to middle-income countries. In high-income countries, nearly 50% of citizens suffer from chronic disease, and about one in two will be diagnosed with cancer during their lifetime. The current rise of non-communicable diseases (NCDs) is associated with lifestyle choices (tobacco use, unhealthy diet, obesity, physical inactivity, and harmful use of alcohol) and environmental factors, yet NCDs could be largely prevented by early detection and appropriate counseling and management.
To face this challenge, healthcare stakeholders (individuals, physicians, payers, policymakers, and health technology companies) must converge on digital platforms to connect, combine and share data, which will allow for global innovation of care that includes social and environmental determinants of health. Such platforms will allow stakeholders to capitalize on knowledge about health factors at both the individual and population levels. These data-based approaches will lead to a new human-centered view of healthcare that includes personalized prevention and support.
"Knowledge is the only good that multiplies when you share it," and sharing among patients, caregivers, payers, and regulators will not only provide information to support better decision-making and service, but will also expand global knowledge of health and life science, leading to sustainable and accelerated progress. By 2030, the life sciences industry will increasingly shift from reactive to proactive medicine, enabled by personalized health. This new era will encompass a holistic view of the citizen, where health will become a core value of daily life and cities. Digital platforms will play a key role in this transformation. Prevention is also among the top-ranking expectations, identified as a top-three priority by 56% of respondents. Preventive health plans are perceived as having the highest direct impact on people's health. Patients also expect greater autonomy through better information and the ability to dispense treatments at home (Fig. 2).
This builds a strong link between health and cities. More and more cities in the world are moving towards a new "city experience", where the interactions between citizens and city services are transformed. These cities will enter the platform era by leveraging data and technology to create more efficient living environments, improve sustainability, connect citizens to decisions by sharing information with the public, and improve the quality of government services. Achieving this goal requires harmonious development in all dimensions of the city experience: governance, education, housing, mobility, infrastructure, connectivity, innovation, energy, and healthcare, a core part of this holistic city experience. The quality, reliability, and completeness of healthcare infrastructure will be a fundamental factor for the global development of cities. As smart cities create a more valuable citizen experience, "cities of health" will become more and more attractive. In Virtual Singapore, intelligent 3D models were set up to improve the experience of residents, businesses, and government by capturing all aspects of the city. By connecting the dots across citizens, thinking about experiences, and connecting the virtual and real worlds, smart cities reveal sustainable urban solutions to maintain the health of their growing and aging populations.
A new approach to the design of cities will be required, along with a new mindset for operating them. Mobility and transportation will be planned to preserve the health of residents, social services will be sized based on neighborhood health indicators, and environmental exposure and air quality will be cross-referenced with patient health to generate new insights into emerging risk factors and to trigger personalized prevention recommendations. Emerging diseases will be monitored continuously to detect clusters of cases and their links with infectious agents or pollutants.
7.3 Continuous, Contextual, and Connected Journeys
The fragmentation of the patient journey among different physicians and professionals, split across disease areas and territories, leads to "stacking" many disconnected health services to provide care to a single person. With the advent of the experience economy, value is now centered on the patient. The health industry network, from pharma to healthcare delivery, is focused on delivering effective and direct outcomes for people's health. Platform approaches become necessary to resolve the complexity of this health journey and to provide stakeholders with a holistic model of care for citizen health. Health Digital Twins will also help improve:
Quality of life: living longer and in better health.
Prevention: being able to prevent disease with nutrition and activity recommendations.
7.4 New Business Model Calling for a New Platform
I will prevent disease whenever I can, for prevention is preferable to cure.
Health systems are shifting from curative medicine to preventive approaches by enrolling citizens and professionals in value-based economic models instead of volume-based funding. The value is now the patient experience. This new value-based approach requires reforming the entire model of regulation and evaluation, moving from a system where payments are made for activity to one where payments are tied to patient-centered value and quality. Europe is leading the adoption of this new model; Sweden and the United Kingdom are the only countries with high alignment between payments and value. Until now, the value of health products had to be demonstrated by means of clinical trials in a pre-specified patient population. This process is long and expensive and is challenged by a high risk of failure in real-life conditions. The current paradigm of clinical trials is expected to become more decentralized, more inclusive of diverse populations, and more able to adapt rapidly in real time during trials, so that the right population, even in the case of small cohorts, is quickly identified in order to deliver the highest benefit. Clinical trials are also expected to increase their validity in the real world. Real-world evidence is the clinical evidence regarding the usage and potential benefits or risks of a medical product derived from the analysis of real-world data [10]. Real-world data are collected from various sources including, but not limited to, clinical trials, prospective and/or retrospective observational studies, medical health
records, claims, and mobile and wearable devices. These data have the potential to complement clinical trials; increase knowledge for therapeutic innovations, pragmatic care, and prevention practices; lead to better designed and conducted clinical trials; and measure the real-world efficacy of a drug or a preventive intervention. Payers will become more capable of setting prices based on patient-level efficacy. As precision medicine is delivered on platforms of care, individual patient value can be assessed and used to support policymaker decisions and payer engagement, leading to a new value-based model of care that moves from product to outcomes and holistic care.
8 Supporting an Emerging Ecosystem of Virtual Twin Innovators
We are still at the beginning of the digitalization of healthcare, including the virtual twin. As the industry embraces this transformation and ushers in a new era of medical innovation, a cloud-based, common platform must be deployed as the technical underpinning. Incubators, such as the 3DEXPERIENCE Lab, allow disruptive startup companies to gain access to state-of-the-art platform capabilities to develop virtual twins. Though their work is in the early stages, these companies are already delivering improved diagnosis and treatment services, including outcome-optimized surgical planning. Some examples:
Bioserenity develops mobile sensors for high-risk patients to track detailed neurological and cardiovascular signals. These are uploaded to the cloud and tracked by the physician to immediately identify a condition, such as an epileptic seizure, wherever it occurs. The neurologist can use the data from the sensors to plan surgical intervention.
FeOps addresses the growing practice of replacing failed heart valves through non-invasive procedures. Though these procedures are less costly and far less invasive, success is contingent on achieving a precise pressure fit for the newly implanted valve. Accurate sizing, shape and location can be predicted by reconstructing an individual patient's heart in 3D and virtually testing surgical options, reporting the best approach to the surgeon (Fig. 6).
These two companies are delivering their innovations through a cloud-based Clinical Decision Support System (CDSS) supported on the 3DEXPERIENCE platform. Today, their focus is on treating each patient individually, but over time their learnings will be aggregated into a knowledge base of parametric models used to train clinicians and feed AI systems. Additionally, researchers throughout the world are now using virtual twins of humans to develop innovative solutions to a wide range of medical challenges.
Biomodex was founded in 2015 with the intent to develop 3D-printing-based solutions for complex training and case-specific rehearsal for physicians. Its purpose is to revolutionize pre-operative planning, resulting in safer medical procedures and improved patient outcomes (Fig. 7).
Fig. 6 Illustration from Feops
Fig. 7 Illustration from Biomodex
Smart CPR
For a person suffering a heart attack, time can be the difference between life and death. CPR administered correctly can save a life, but requires 30–40 min of demanding physical effort, sufficient to exhaust even a healthy person. A Smart CPR device that can provide advanced cardiac life support based on a patient's specific physiological condition is under development by a team at the Indian Institute of Technology. Virtual patients, today represented by the LHP (Living Heart Project), are being used for inverse electro-mechanical modeling of the cardiac resuscitation process. Fundamental insights into how external measurements correlate with cardiovascular tissue behavior will feed the development of the device, fine-tuning treatment based on the patient's physiology. This type of specific biomechanical study would be impossible without virtual modeling.
Cervical Spine Surgery
Heterotopic ossification is one of the most common major complications after artificial disc replacement, limiting the patient's range of motion. People who have undergone surgery to implant an artificial disc in their spine can face severe problems after a few years. Abnormal bone growth occurs due to severe loading on the artificial disc, often requiring expensive and painful revision surgery. At the Vellore Institute of Technology in Chennai, researchers are simulating bone remodeling and growth to develop a personalized artificial disc that will balance the load and heal predictably.
Personalized Psychiatric Treatments
Transcranial direct current stimulation (tDCS) is a noninvasive brain stimulation technique in which a low-intensity direct current is applied to the head for several minutes to modulate neural activity in specific regions of the brain. It is increasingly being used to treat neurological and psychiatric disorders such as depression and schizophrenia, and to support stroke recovery. The simplicity, affordability and portability of tDCS make it well suited for developing countries that suffer from a high prevalence of mental disease and low access to pharmacological therapy. However, efficacious personalized application of tDCS can be problematic, since the tDCS-induced current patterns in the brain show marked inter-subject variation due to underlying differences in cranial anatomy and brain tissue characteristics. To address this challenge, India's National Institute of Mental Health and Neuro Sciences (NIMHANS) has demonstrated that computational modeling techniques can integrate multimodal brain imaging with other clinical and biological metrics to improve the efficacy and personalization of neuromodulation. High-fidelity 3D head-brain models were created from MRI images of schizophrenia patients, and multiple predictive studies determined the optimal treatment protocol for each person. Ultimately, machine-learning approaches will be used to build a predictive framework based on 3D imaging and physics-based simulation, as well as on subject-specific genetic, clinical, biometric, behavioral and environmental information, to better guide the diagnosis and treatment of neurological and psychiatric disorders.
9 Conclusion
Improving human health relies on gaining a much deeper understanding of the human body. Nearly 50 years after the dawn of the digital revolution, we understand the power that results when we capture, grow and apply research and understanding from teams throughout the world. In the 1980s and 1990s, Dassault Systèmes helped pioneer the use of digital mock-ups to revolutionize how products were developed, by using 3D to represent entire complex systems, outside and inside. Each piece came together to simulate a complete engineering context in a virtual model, replacing physical mock-ups. This ushered in an era of collaborative innovation that revolutionized how non-organic products were designed and manufactured. We are on the cusp of the next revolution, where these same approaches are now applied to the organic world. By fusing real-world knowledge and
know-how with multidisciplinary, multiscale modeling and simulation technology, we can gain never-before-possible insights into the human body. Harnessing data from medical records and collating insights and intelligence from a range of disciplines (industry, research, practitioners) and even patients themselves can unlock the means to prevent and battle disease and to speed healing. This data must be stored on a secure, cloud-based common platform that can be accessed by anyone, anywhere. By leveraging that data, we can create virtual twins of the human body to visualize, test, understand and predict what cannot be seen, from the way drugs affect a disease to surgical outcomes, before a patient is treated. The virtual twin can be built to represent most patients, then personalized for each individual. In the same way that companies creating non-organic products have innovated by using the virtual world to improve the real world (V + R), we can now leverage the virtual world to improve the organic world. The potential to use V + R to create an entirely new approach to healthcare is unprecedented; we will benefit from next-generation medical practices, precision medicine and surgery, and informed predictions to individualize care. The time has come to act.
References
1. Biot, C., Johnson, P., Massart, S., & Pécuchet, N. (2019). Improving patient healthcare through virtual platforms. Global Innovation Index.
2. Deloitte. (2019). Global healthcare outlook: Shaping the future. Retrieved from https://www2.deloitte.com/global/en/pages/life-sciences-and-healthcare/articles/global-health-care-sector-outlook.html
3. de Vries, G. The patient equation: The precision medicine revolution in the age of COVID-19 and beyond. Wiley.
4. Hood, L., & Flores, M. A. (2012, September 5). Personal view on systems medicine and the emergence of proactive P4 medicine: Predictive, preventive, personalized and participatory. New Biotechnology, 29(6), 613–624.
5. Irving, G., et al. (2017, November 8). International variations in primary care physician consultation time: A systematic review of 67 countries. BMJ Open, 7(10), e017902.
6. Le Masson, P., Weil, B., Daloz, P., Johnson, P., & Massart, S. Shaping the unknown with virtual universes – The new fuel for innovation.
7. Mikk, K. A., Sleeper, H. A., & Topol, E. J. (2017). Patient data ownership. JAMA, 318, 1433–1434. https://jamanetwork.com/journals/jama/article-abstract/2673960
8. Murphy, S. V., & Atala, A. (2014). 3D bioprinting of tissues and organs. Nature Biotechnology, 32(8), 773–785.
9. Peirlinck, M., Costabal, F. S., Yao, J., Guccione, J. M., Tripathy, S., Wang, Y., Ozturk, D., Segars, P., Morrison, T. M., Levine, S., & Kuhl, E. (2021, June). Precision medicine in human heart modeling: Perspectives, challenges, and opportunities. Biomechanics and Modeling in Mechanobiology, 20(3), 803–831.
10. Sherman, R. E., et al. (2016, December). Real-world evidence—What is it and what can it tell us? The New England Journal of Medicine, 2293–2297. https://www.nejm.org/doi/full/10.1056/NEJMsb1609216
11. The Economist Intelligence Unit. (2016). Value-based healthcare: A global assessment. Retrieved from http://vbhcglobalassessment.eiu.com/
12. Wiener, N. (1948). Cybernetics: Or control and communication in the animal and the machine. MIT Press.
13. World Health Organization (WHO). (2018). Global health expenditure report. Retrieved from https://www.who.int/health_financing/topics/
Mr. Patrick Johnson is Senior Vice-President, Corporate Research & Sciences at Dassault Systèmes, defining scientific bases and inventing disruptive technologies for the Industry Renaissance. After several R&D positions (head of AI, CATIA Brand), he took over Research, set up key public/private alliances and launched the Lifesciences strategic diversification (BIOVIA, MEDIDATA Brands). He is a member of the French Academy of Technology and of INRIA's Scientific Board.
Steven Levine, PhD is the Sr. Director of Virtual Human Modeling at Dassault Systèmes. Dr. Levine has more than 30 years of experience in the development of computational science and engineering tools and is the Executive Director of the Living Heart Project. Prior to his current role, he led strategy for SIMULIA, the simulation brand within Dassault Systèmes. Dr. Levine was elected into the College of Fellows of the American Institute for Medical and Biological Engineering (AIMBE) in 2017 and holds a PhD in Materials Science from Rutgers University. He began his career in health tech at the San Diego-based startup Biosym, which went public as Accelrys in 2004 and was acquired by Dassault Systèmes in 2014.
Cécile Bonnard began working on virtual human modeling for life sciences in 2010 in a startup company. After the startup was acquired by Dassault Systèmes in 2014, she worked as a project manager on several life sciences applications as part of BIOVIA, one of Dassault Systèmes' brands addressing life sciences. In 2020, she joined Dassault Systèmes' Corporate Research department as an expert working on topics related to Virtual Twin of Human initiatives in healthcare. Cécile holds an engineering degree in agronomic sciences and completed her PhD in computer science in Montpellier in 2010.
Katja Schuerer has been working for more than 20 years at the intersection of computer science, software development and life sciences. A biochemist by training and always interested in computer science and mathematics, she started her career in the computer science department at the Pasteur Institute, bringing software tools to biologists and teaching programming. After several years, she joined the startup world, where she grew from a junior developer working on genomics and biological database problems into a project manager of software projects for the research & development departments of pharma companies and biotechs. She then joined Dassault Systèmes as a senior project manager and later transitioned to research, where she is responsible for a team developing next-generation technologies for healthcare & life sciences.
Nicolas Pécuchet, MD, PhD, is head of the research group Living Twins for Practitioners at Dassault Systèmes. The team focuses on translating virtual twins of humans into clinical benefit for patients, work that is nurtured by strong interactions and partnerships with research institutes and healthcare professionals. He previously served as a clinical oncologist at the European Georges Pompidou Hospital and was a principal investigator in clinical studies. As a founding member of the Molecular Tumor Board at Assistance Publique – Hôpitaux de Paris, he actively supports delivering precision medicine to the largest number of patients. His research on cancer genomics and liquid biopsy has been published in PLOS Medicine, Annals of Oncology, Journal of Clinical Oncology, Clinical Chemistry and Clinical Cancer Research. Dr Pécuchet is a graduate of Paris Descartes University and Paris Descartes Medical School and was awarded the Chancellery Prize of the University of Paris.
Nicolas Gazères started working on the neural dynamics of local cortical circuits in visual cortex, in close collaboration with sensory electrophysiologists. In 1999, he joined Dassault Systèmes and worked on software engineering, application servers, Product Lifecycle Management and biomedical data integration. He now leads a computational neurology team, passionate about bringing new interpretation tools to neurologists based on Virtual Brain technologies. Nicolas holds a Master’s degree in Applied Mathematics from Ecole Centrale Paris (1993) and a PhD in Computational Neuroscience from Université Pierre et Marie Curie (1999).
Karl D’Souza is a computational modeling and simulation professional with extensive experience in technology consulting, product management, and business development as they pertain to simulation-based solutions for science and technology companies. In his current role, he is responsible for managing the Design and Engineering portfolio of Dassault Systèmes’ solutions for medical devices and equipment. Prior to his current role, he helped grow Dassault Systèmes’ Virtual Human Modeling initiative with a special focus on cardiovascular and neurological applications and was a founding member of the Living Heart Project. Karl has a B. Tech in Materials Science and Engineering from IIT Bombay, an MS in Mechanical Engineering from SUNY Buffalo, and an MBA from Bryant University; and is based in Providence, RI, USA.
The Digital Twin in Human Activities: The Personal Digital Twin
Chih-Lin I and Zhiming Zheng
Abstract With the 5G era evolving continuously towards maturity, ICDT deep convergence is accelerating, and so is the transformation of our society. Simultaneously, advanced exploration of the 6G vision is underway, in which a new era of "Digital Twins, Pervasive Connectivity with Ubiquitous Intelligence" is emerging. We expect the physical, biological, and cyber worlds to be fused together via the new generation of intelligent connectivity. In such an era, everyone may have his or her own Personal Digital Twin (PDT) in the cyber world. Such a PDT may include the person's external appearance, as well as its internal organs and tissues. The PDT may be used to predict human health, behavior, and emotion in advance of developments warranting concern or requiring attention, and even simulate people's thoughts to realize spiritual immortality in some extreme sense. Our PDTs can be transmitted to any place in the world in real time through the pervasive network for various activities, be it attending a concert, enjoying the beaches and sunshine of the Maldives, or hugging family members and friends that are oceans apart. This chapter will examine the usage scenarios of PDTs, the challenges and the wide variety of technologies involved, encompassing PDT information acquisition, transmission, processing, and presentation. In particular, wireless body area networks (WBANs) and multi-source data fusion will be highlighted. PDTs will rely on the joint development of many technologies from multiple disciplines, such as brain-computer communications, molecular communications, synesthesia interconnection, AI, and intelligent interaction. We will present a variety of healthcare applications as part of a very early-stage list of accomplishments in our PDT platform development, as well as an outlook on PDT-related developments across the globe.
Chih-Lin I (*) Wireless Technologies, China Mobile Research Institute, Beijing, China e-mail: [email protected] Z. Zheng China Mobile Research Institute, Beijing, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_33
Keywords Biometrics · Cognition and control · Cyber-physical systems · Digital Twin · Digitization · Intelligent connectivity · Perception and sensing · Personal digital twin · Pervasive connectivity · Physical-cyber fusion · Wireless body networks · 5G · 6G
1 Introduction
With the 5G era maturing continuously, ICDT deep convergence is accelerating, and so is the transformation of our society. Simultaneously, global efforts to investigate the societal and technology drivers contributing to building the next generation of wireless communications networks [1] have begun in earnest. Advanced exploration of the 6G vision, shown in Fig. 1, is underway, in which a new era of "Digital Twins, Pervasive Connectivity with Ubiquitous Intelligence" [2] is emerging. We expect the physical, biological, and cyber worlds to be fused together via the new generation of intelligent connectivity. In such an era, everyone may have his or her own surrogate, the Personal Digital Twin (PDT), in the cyber world. Such a PDT may include the person's external appearance, as well as internal organs and tissues. The PDT may be used to predict human health, behavior and emotion in advance of developments warranting concern or requiring attention, and even simulate people's thoughts to realize spiritual immortality in some extreme sense. Our PDTs can be transmitted to any place in the world in real time through the pervasive network for various activities, be it attending a concert, enjoying the beaches and sunshine of the Maldives, or hugging family members and friends that are oceans apart.
Fig. 1 Scenarios beyond 2030 [2]
This chapter will examine the usage scenarios of PDTs, the challenges and the wide variety of technologies involved, encompassing PDT information acquisition, transmission, processing, and presentation. In particular, multi-source data fusion and heterogeneous networks featuring wireless body area networks (WBANs) will be highlighted. Two examples of our early-stage efforts in PDT platform development will be presented. Finally, an outlook on PDT-related developments across the globe will be provided.
2 The Basics of Personal Digital Twin (PDT)
In 2020, Elon Musk presented Neuralink's work on the Brain-Computer Interface (BCI). A coin-sized BCI chip was implanted into the brain of a pig, and the pig's electroencephalography (EEG) signals could then be wirelessly transmitted in real time to external receiving equipment. The EEG signals were further analyzed to predict the animal's motion intentions, and the movements of all four limbs were accurately matched with the predictions [3]. This demo made brain-computer communication, and the prospect of mind control, popular again, because it provided a viable means of collecting real-time brain activity information for remote interpretation and potentially suitable responses (Fig. 2).
Then in 2021, Tsinghua University announced the enrollment of its first student powered by artificial intelligence (AI), named "Hua Zhibing" [4]. Hua was generated on the basis of a machine learning model, Wudao 2.0, with 1.75 trillion parameters, then a world record. This technology makes her not only lifelike in appearance and in the way she speaks, but also able to learn and work. Researchers will also use deep learning techniques to sync up the types of data that can build her emotional intelligence. She is expected to understand people's emotional as well as intellectual needs (Fig. 3).
The development of Neuralink and Hua Zhibing represents promising technology progress needed for the ultimate PDT. Mapping a human into the virtual world as a PDT will require cutting-edge technologies including brain-computer interfaces, extreme AI, molecular communication, and more. We will examine three levels of future PDT capabilities, depicted in Fig. 4: the direct acquisition mapping of physical signs and behavioral data (acquisition), the interpretative mapping of the five senses and emotional thought analysis (perception and cognition), and the control of the impact on the physical world (intelligence and control).
Fig. 2 An implantable brain-computer interface with thousands of channels [5]
Fig. 3 Hua Zhibing, a virtual character enrolled in Tsinghua University
Fig. 4 Three levels of future PDT development
2.1 Capabilities of Future PDT Systems
2.1.1 Acquisition: Biometric Information Based Digital Twin
A wide variety of biometric information is to be acquired for the PDT system. Some of it is readily available; some requires further work to become practical. The first type of information is the 3D reconstruction of the static structure of the human body, covering both the internal organs and the external face and physique. External 3D reconstruction usually relies on 2D and 3D camera acquisition, commonly used in virtual streamers and other applications that have become popular in recent years. For the internal aspects, instead, CT and MRI techniques are used to achieve 3D reconstruction of the internal structure of organs, commonly applied in medical diagnosis, teaching, and other applications. The second type of information is dynamic motion behavior data. Typically, cameras, Inertial Measurement Units (IMUs), and wearable sensors are used to capture this type of data. It is then used for identification, health analysis, and big data analysis of human behavior through the gait cycle and other activities. The third type of information comes from the tracking and prediction of physiological indicators, including ECG, EEG, electrodermal activity, EMG, blood pressure, pulse, blood sugar, etc., which are usually obtained through physical and chemical means. In the future, all three types of information will be presented through the body area network (BAN) corresponding to the virtual human body, i.e., the PDT. Doctors will be able to interact with the PDT dynamically to understand patient details in real time.
2.1.2 Perception and Senses: Synaesthesia Twin
Future PDT capabilities may expand to cover all senses of the body, literally conveying the same feelings as a real human experiences. A new form of network will be needed to expand the transmission content from traditional pictures, text, voice and video to the color, sound, smell, taste, touch and even emotion that a human can perceive. PDTs could smell the delicious dishes sent by others via such networks. In such a scenario, we would feel no difference interacting with real or virtual people or things. This level of PDT may drastically improve our shopping, gaming, and multi-faceted daily life experience.
Fig. 5 Potential application scenarios of PDT
2.1.3 Cognition and Control: Transplantation Twin
Intelligent interaction is an important future application scenario for PDTs. Intelligent behavior reflected through dialogue and interaction usually requires the intelligent system to interact with users or the environment and to learn and build models in the process. Such PDTs will not only hold the real physiological information of the human body, but will also mimic the thoughts and feelings of a human through brain-computer communication and emotion analysis [5]. The elderly at home and children left behind by parents working far from home are prominent social concerns in some countries. Emotional companionship for them is very important. PDTs may help alleviate some of these social problems (Fig. 5).
2.2 Key Technologies of Acquisition and Sensing
An end-to-end PDT system contains four stages. The first is acquisition through various sensors, to collect and upload data about the human body to the aggregation node. The second is aggregation, to converge data from multiple data sources to form the PDT big data. The third is computation, to analyze and process PDT big
data to create a digital twin. The last component is interaction, to transfer information and instructions between the PDT and the real person. We will elaborate on the first segment, since it is the foundation of a PDT system. The acquisition of body domain data can be divided into two categories: ex vivo acquisition and in vivo acquisition. The sensor and nanorobot technologies behind these two kinds of acquisition have been progressing fast. Ex vivo acquisition devices usually refer to wearable devices that can measure body temperature, blood pressure, pulse and oxygen levels. They transform physical signals such as photoelectric, vibration, pressure and temperature signals into electrical signals for transmission. Ex vivo acquisition is relatively straightforward and has a higher safety level, but the types of data that can be collected are limited. In vivo acquisition instead relies mainly on in vivo sensors or nanorobots, with the equipment moving inside the body or implanted under the body surface, where the needed data are acquired directly. In vivo acquisition does not require carrying equipment around, as ex vivo acquisition does, but it is more difficult to implement, and there is still a long way to go regarding its health and safety implications.
2.2.1 In Vivo Acquisition
In recent years, robot technology has made great progress, and new robot technologies, represented by nanorobots and capsule robots, provide new opportunities for in vivo data collection. The research and development of nanorobots has attracted worldwide attention [6–12]. Minimally invasive surgery can be replaced by nanorobots, which have also shown potential applications in cancer cell detection and targeted drug delivery. Nanorobots can control the nanoscale structure of biomolecules and break the limitations of traditional approaches. Benefiting from autonomous movement and efficient target capture and separation, nanorobots equipped with various biological receptors can be used as biological sensors that detect and differentiate target molecules in body fluids (such as proteins, nucleic acids, cancer cells, etc.), improving the sensitivity and effectiveness of biopsies. Nanorobots can also help to achieve precise acquisition of human physiological information. Safe and effective communications are necessary for ensuring and implementing these in vivo applications. However, traditional wireless communications cannot meet all the requirements of the micro-communications required in in vivo settings. Complementary networks integrating in vivo molecular communications and wireless communications would be a worthwhile pursuit towards 2030. A capsule robot can be small enough to carry a camera and a wireless transmitter. Its size is generally within the range of 1–4 cm, and it can move freely in the digestive tract of the human body [13]. It can take photos to collect image information in places like the gastrointestinal tract and send the collected data from inside the body by wireless communications in real time. Due to the lossy nature of the in vivo medium, achieving high data rates with reliable performance will be a challenge, especially since the in vivo antenna performance is strongly affected by near-field coupling to
the lossy medium and signal levels will be limited by specified Specific Absorption Rate (SAR) levels. The authors of [14–16] showed that by using MIMO in vivo, a significant performance gain can be achieved, and at least twice the data rate can be supported with SAR-limited transmit power levels. A new approach for data acquisition should be considered for BANs, based on what capsule robots provide and coupled with new architectures such as MIMO in vivo (Fig. 6).
Fig. 6 MIMO In Vivo architecture of a WBAN
2.2.2 Ex Vivo Acquisition
The network system of the digital twin consists of medical sensor nodes or portable mobile devices on the body surface, aggregation nodes and remote-control nodes. Various medical sensors or portable mobile devices on the body can form a network in a distributed way. At present, technological advances in wireless communications, microelectromechanical systems, and integrated circuits have resulted in a good set of low-power, miniaturized, and even intelligent devices, enabling invasive/non-invasive micro and nano sensor nodes on or around the human body. These sensor nodes are able to collect a great variety of physiological information including electroencephalogram (EEG), electrocardiograph (ECG/EKG), muscle activity, respiration, body temperature, pulse, blood oxygen and blood pressure. The collected information is sent wirelessly to the remote-control node through the aggregation node. The remote-control node controls and manages the sensor network, issues detection tasks and collects the monitoring data. The aggregation node's energy consumption, processing capacity, storage capacity and communications capacity are at a much higher level than those of the individual sensors. It is the gateway of the wireless body area network (WBAN) and connects the WBAN with the Internet and other external networks.
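Returning to the MIMO in vivo point in Sect. 2.2.1: as a rough, generic illustration of why a 2×2 configuration can roughly double throughput under a fixed (SAR-limited) power budget, the sketch below compares the ergodic Shannon capacity of a 1×1 link and a 2×2 MIMO link over Rayleigh fading with the same total transmit power. This is not a reproduction of the in vivo channel models in [14–16]; the fading model and SNR values are arbitrary assumptions.

```python
# Minimal sketch: ergodic capacity of SISO vs. 2x2 MIMO under a fixed total
# transmit power (stand-in for a SAR-limited budget). Illustrative only.
import numpy as np

rng = np.random.default_rng(2)

def ergodic_capacity(nt, nr, snr_db, trials=5000):
    snr = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        # i.i.d. Rayleigh fading channel matrix.
        h = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        # Equal power per transmit antenna, total power fixed across configurations.
        m = np.eye(nr) + (snr / nt) * h @ h.conj().T
        caps.append(np.log2(np.linalg.det(m).real))
    return np.mean(caps)

for snr_db in (0, 10, 20):
    c1 = ergodic_capacity(1, 1, snr_db)
    c2 = ergodic_capacity(2, 2, snr_db)
    print(f"SNR {snr_db:2d} dB: SISO {c1:.2f} b/s/Hz, 2x2 MIMO {c2:.2f} b/s/Hz, gain x{c2/c1:.2f}")
```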
2.2.3 Synesthesia Acquisition
The brain-computer interface (BCI) provides direct communication between the brain and peripheral equipment through a two-way information flow apparatus [5, 17]. Such technology spans the fields of information science, cognitive science, materials science, and life science. It has an increasingly important impact on intelligent fusion, bioengineering, and neuroscience. BCI technology offers hope for restoring sensory and motor functions and treating neurological diseases, while also giving humans "superpowers", namely the ability to "control" intelligent terminals with their minds. BCI technology is envisioned in four progressing stages. The first stage is "repair", using the mind to manipulate machines and replace or repair some human body functions. The second stage is "improve", using BCI technology to improve brain function. The third stage is "enhancement", using the BCI to access large amounts of knowledge and powerful functions very quickly, achieving enhanced capabilities analogous to "superpowers". The fourth stage is "infinite communication", which depends on whether the brain's neurons can achieve "unlimited" communication with each other, and uses BCI technology to achieve information transmission. Current BCI technology is still at the beginning of the "repair" stage. The direction of information transmission over a BCI may be from brain to machine, from machine to brain, or from brain to brain. Brain-computer fusion is considered a possibility in the future development of the BCI. At present, direct interactive technology between human brains and computer networks is still at a very primitive stage. Current BCI systems mostly operate in a one-to-one brain-to-computer mode. In the future, however, the subjects of social communication will not be limited to humans alone, but will encompass a wider range of intelligent agents, including PDTs and robots. The communication among agents will not only be the transfer of data and information, but also intelligent interaction.
2.3 Wireless Body Area Network
The data collected in a multi-sensor WBAN come mainly from continuous biometric sensing, including heart rate, temperature, pulse, blood pressure, oxygen and exercise data. Generally, each individual user's data are collected and analyzed separately. They can be used to effectively predict the person's physical health condition.
Table 1 Network requirements of different applications [20]
Application           Data rate      Delay
ECG (12 leads)        288 kbps       250 ms
ECG (6 leads)         71 kbps        250 ms
EMG                   320 kbps       250 ms
EEG (12 leads)        43.2 kbps      250 ms
Blood saturation      16 bps         250 ms
Temperature           120 bps        250 ms
Glucose monitoring    1600 bps       250 ms
Cochlear implant      100 kbps       250 ms
Artificial retina     50–700 kbps    250 ms
Table 2 Network parameters [21]
                       Basic static data volume   Synchronous communication data volume   Synchronous cycle
Digital portrait       100 MByte                  Mbps                                     s
Digital healthcare     1 GByte                    Gbps                                     s
Holographic portrait   10 GByte                   1000 Gbps                                30 ms
Brain storage          10^9 TByte                 10^5 Tbps                                day
PDT in nanoscale       10^9 TByte                 10^5 Tbps                                day
However, a typical medical WBAN has at least six nodes and can be extended to up to 256 nodes. The WBAN sensors can communicate with gateway devices in order to connect with the Internet. When a multi-sensor WBAN is used for data acquisition, due to its large number of sensors and sensor types, the collected data is susceptible to various forms of interference and noise. WBAN technologies rely on IEEE standards including 802.15.4 and 802.15.6 [18, 19]. These standards support quality of service (QoS), ultra-low power consumption, highly reliable wireless communications (low latency and data loss), and data rates up to 10 Mbps, which is sufficient for the typical applications in Table 1. There will be a high density of nodes and new multi-dimensional communications requirements to support PDTs in the future. With digital medical treatment, sports monitoring, and emerging applications such as human-computer intelligent interaction and the synesthetic internet of PDTs, future WBANs will need to meet the much higher performance requirements proposed by [21] in Table 2.
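As a hedged back-of-the-envelope check that the roughly 10 Mbps supported by these standards comfortably covers the Table 1 applications, the snippet below sums the per-application data rates for one plausible multi-sensor node mix; the rates are copied from Table 1 [20], while the chosen mix of nodes is an assumption.

```python
# Aggregate data-rate check for an assumed WBAN node mix (rates from Table 1).
rates_kbps = {
    "ECG (12 leads)": 288.0,
    "EMG": 320.0,
    "EEG (12 leads)": 43.2,
    "Blood saturation": 0.016,
    "Temperature": 0.120,
    "Glucose monitoring": 1.6,
    "Artificial retina (max)": 700.0,
}
total = sum(rates_kbps.values())
print(f"Aggregate rate: {total:.1f} kbps ({total / 1000:.2f} Mbps) "
      f"vs. ~10 Mbps supported by IEEE 802.15.6")
```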
2.4 Technology Outlook and Key Considerations
Ex vivo acquisition technologies, such as those for ECG, blood pressure and body temperature, are relatively mature and continuously improving. Various human motion capture technologies are already commercially available, although their
accuracy and real-time response still need improvement. With the development of flexible electronics technology [22], IMUs may be embedded in intelligent clothing, electronic skin, and other carriers. In addition, BCI technology and myoelectric-signal-based multimodal neural-drive fusion technology [23] are also developing rapidly, but still need to improve in portability, processing power and real-time availability. In vivo communications with capsule robots currently reach centimeter to millimeter distances. The aggregation aspect of electromagnetic wave communications is more mature ex vivo, and the synergy between in vivo and ex vivo communications still needs to be enhanced. In vivo communications with bio-nano robots, non-electromagnetic-wave molecular communications, and heterogeneous communications are still in the theoretical development stage, but are expected to reach early-stage feasibility around 2030. One of the most important factors restricting the further development of WBANs is energy consumption. Safety and biocompatibility in the human body are very important considerations for in vivo devices; biological power generation is being pursued instead of battery power supplies. Future PDT capabilities and service scenarios must address challenges in access density, heterogeneous communications, power synergy, security, reliability, conflicts between privacy and data modeling needs, as well as flexible networking for future atomized networks. Taking into account the considerations above and the potential business maturity cycle, we expect a phased progression to realizable PDTs: the motion twin, primarily based on ex vivo acquisition, by 2025; the synesthesia twin, based on limited BCI technologies, by 2030; and finally the in vivo twin, with comprehensive in vivo and ex vivo acquisition, afterwards.
3 Multi-system Fusion of PDT
As discussed in Sect. 2, in each and every phase of PDT development, a great variety of data may be acquired through various sensors. As more and more diverse sets of data for the PDT get collected to provide more accurate and comprehensive information, the fusion of diverse sources of massive data is a prerequisite to support more accurate decision-making and to meet the diverse needs of PDT systems and services. The diversity, magnitude and complexity of the various PDT data sources bring both advantages and challenges. Complex relations among different types of data provide great advantages over single-source data. At the same time, multi-source data requirements in terms of the timeliness, accuracy and reliability of data processing must be met. In order to make the best use of multi-source data, we must handle the fusion process at three levels: data, feature, and decision-making.
3.1 Data-Level Fusion
Data-level fusion is also known as signal-level fusion. In this phase, the information loss from the original data is the lowest and the information available is the highest. Data-level fusion is composed of data quality analysis, data normalization and data association (Fig. 7).
Data quality analysis: Its main task is to inspect whether there is unreasonable data in the original data from the various sources, including abnormal data values, missing values and unknown data values. It is quite common to find incomplete data, inconsistent information across multiple data sources, and abnormal data when setting up the system. A significant effort is necessary to analyze, clean and integrate the original data to ensure the data quality of the fusion analysis.
Data normalization: In the process of information collection and integration, multiple data sources usually work independently and asynchronously in time and space. As part of the data fusion process, it is necessary to calibrate the data descriptions and unify the dimensions of data attribute values.
Data association: For the data reported by each data source, after data quality analysis and data registration, data association can be carried out according to the common characteristics of the data attributes. The steps of data association include data entity aggregation/fusion and entity alignment. The goal of entity aggregation is to find the correspondence between data from different sources that may refer to the same entity.
Fig. 7 Data level fusion
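As a hedged, minimal sketch of the three data-level steps just described (quality analysis, normalization, association), the snippet below fuses two hypothetical sensor feeds; all column names, units and thresholds are assumptions for illustration, not part of the authors' platform.

```python
# Minimal data-level fusion sketch for two hypothetical PDT sensor feeds.
import pandas as pd

wearable = pd.DataFrame({
    "subject": ["A", "A", "B"],
    "ts": pd.to_datetime(["2030-01-01 08:00:00", "2030-01-01 08:00:01", "2030-01-01 08:00:00"]),
    "hr_bpm": [72, 250, 65],             # 250 bpm is an implausible outlier
})
shoe = pd.DataFrame({
    "subject": ["A", "B"],
    "ts": pd.to_datetime(["2030-01-01 08:00:00.4", "2030-01-01 08:00:00.2"]),
    "press_kpa": [182.0, 140.0],
})

# 1. Data quality analysis: drop physiologically impossible values.
wearable = wearable[wearable["hr_bpm"].between(30, 220)]

# 2. Data normalization: unify units/dimensions (here, kPa -> N/cm^2).
shoe["press_ncm2"] = shoe["press_kpa"] * 0.1

# 3. Data association: align the two sources on subject and nearest timestamp.
fused = pd.merge_asof(
    wearable.sort_values("ts"), shoe.sort_values("ts"),
    on="ts", by="subject", direction="nearest",
    tolerance=pd.Timedelta("1s"),
)
print(fused)
```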
3.2 Feature-Level Fusion
Based on the characteristic data produced by data-level fusion, feature-level fusion performs, step by step, feature extraction for each entity, alignment of the same entity across different sources, similarity analysis, and feature fusion of the same entity, so as to form the fusion result for each feature. Typical features considered include speed, position, direction, shape, and edge (Fig. 8).
3.3 Decision-Level Fusion
Decision-level fusion refers to the fusion of the decision data produced at the lower levels, using certain fusion strategies to obtain more accurate decision results. When multiple sources produce multiple decisions about the same attribute, the temporal and spatial correlations among them must be carefully handled to account for the joint likelihood and confidence interval of the final decision (Fig. 9).
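A minimal, hedged sketch of decision-level fusion follows: three hypothetical sources each report a probability that the same subject's gait is abnormal, and a confidence-weighted log-odds combination produces the fused decision. The weights, probabilities and decision threshold are illustrative assumptions.

```python
# Minimal decision-level fusion sketch: weighted log-odds pooling.
import math

decisions = [
    # (source, P(abnormal), confidence weight) -- all values assumed
    ("pressure_insole", 0.70, 0.9),
    ("imu_motion",      0.55, 0.7),
    ("camera_gait",     0.80, 0.5),
]

def logit(p):
    return math.log(p / (1 - p))

# Confidence-weighted average in log-odds space, mapped back to a probability.
score = sum(w * logit(p) for _, p, w in decisions) / sum(w for _, _, w in decisions)
p_fused = 1 / (1 + math.exp(-score))
print(f"Fused P(abnormal gait) = {p_fused:.2f} -> "
      f"{'flag for review' if p_fused > 0.6 else 'no action'}")
```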
Fig. 8 Feature level fusion
Fig. 9 Decision level fusion
4 Experimental PDT Platforms of Multi-system Fusion
We will study a variety of healthcare applications in terms of data quality improvement, accurate target location identification and accurate decision-making about health status. These efforts are part of a very early stage in our PDT platform development. They serve as a starting point for more accurate predictions of individual health needs as well as for alarms on health events.
4.1 Four Sets of PDT Data Acquisition
Human motion behavior sensors, brain-computer communications and molecular communications are all within the scope of PDT systems, and human body domain data fragmentation (from different vendors, applications and sensors, with different data protocols and standards) remains a major challenge. We therefore initiated a PDT multi-source data fusion platform and a prototype system for future concept demonstration, with BCI, synesthetic interconnection and the motion twin as typical applications. The motion twin, brain twin, and in vivo twin are the phased progression targets, building from this multi-system fusion PDT platform. This platform integrates four sets of acquisition prototype systems: the BCI, IMU, vital-signs-based emotion, and gait analysis systems. Each provides a set of basic applications. We have fused them into two sets of experimental applications. The fusion approach is expected to expand the range of applications, increase accuracy, reduce cost, and minimize the requirements on environment and sample size (Fig. 10).
One set is for a motion twin. It consists of three subsystems: a motion capture system with 17 IMU motion-sensing units distributed throughout the body; a foot shape meter scanning foot pressure distribution, aided by four cameras delivering stereo imaging; and a pair of motion twin smart shoes with a pressure insole and IMU
motion-sensing module on top. This set and its applications will be presented in Sect. 4.2.
The other set is a multi-source fusion with BCI. The BCI acquisition system is augmented with non-invasive ECG and photoplethysmography (PPG) signals, event-related potential (ERP) and electromyogram (EMG) signals, and a motor imagery (MI) multi-paradigm combination [5]. Section 4.3 will elaborate on this set (Figs. 11 and 12).
Fig. 10 Four sets of ex vivo acquisition devices of PDT
Fig. 11 User interface of 4 basic and 2 new applications
Fig. 12 PDT demo with multi-source fusion
Fig. 13 Typical symptoms of PD
4.2 A Motion Twin to Augment PD Detection with Gait Analysis
This section explores multi-dimensional data fusion in the application of auxiliary diagnosis of Parkinson's disease. PD is the second most common neurodegenerative disease. According to epidemiological estimates, seven to ten million people are diagnosed with PD worldwide [24, 25]. Its main clinical manifestation is dyskinesia, with four cardinal symptoms: slowness of movement, muscle rigidity, resting tremor and gait instability [26]. Gait is a person's pattern of walking. Walking involves balance and coordination of the muscles so that the body is propelled forward in a rhythm, called the stride. PD patients often suffer from festinating gait and frozen gait (Fig. 13). Due to the irreversible nature of PD, early detection is very important; however, due to its sporadic and occult nature, the current early detection rate is low. Moreover, for lack of clinical experience, the misdiagnosis rate can be as high as
Fig. 14 Epidemiological data survey: (A) predicted prevalence of PD in individuals over 50 years old (millions), China vs. Europe, 2005–2030; (B) annual PD expense in China (2017) in $, split into direct medical, direct nonmedical, and indirect costs, by duration in years (≤5, 6–10, ≥11)
Fig. 15 Two phases and eight periods of a gait cycle
25%, and the average delay from onset to definite diagnosis is 2.5 years [27]. As the early detection of PD is of great significance, we have set up this motion twin to include the auxiliary diagnosis of PD via gait analysis among its targeted applications (Fig. 14).

4.2.1 Gait Cycle

Gait is the manner or style of walking. A person's walking posture can reveal important health information. There are numerous possible causes of an abnormal gait; common ones are degenerative diseases such as PD and arthritis. A full gait cycle contains eight periods, which are divided into two phases: the stance phase and the swing phase (covering the foot-landing and foot-off-the-ground stages). In these stages each person exhibits his or her own characteristics in terms of center of gravity, foot lift-off, step frequency, and step length, and these characteristics reflect the individual's health status (Fig. 15). Medical engineering researchers have begun to introduce 3D force plate pressure trail equipment and cameras to recognize pace, step length, period, stride, and other gait metrics, to assist doctors in quantitative diagnosis. In the gait analysis process, data from multiple sources are captured and analyzed together. The time
Fig. 16 Data acquisition equipment for in-hospital auxiliary diagnosis
calibration required to align them is very challenging, so automated data fusion processing is needed (Fig. 16). Due to the sporadic and hidden nature of the early symptoms of PD, motion behavior observed in the hospital is susceptible to the "observer effect", i.e., a change in behavior due to the presence of the observer, so the actual behavioral state may be suppressed. To avoid such problems, daily monitoring outside the hospital may yield more useful information. We have therefore developed a new piece of intelligent footwear equipment (i.e., the smart shoe). Its embedded sensors include an intelligent insole, providing foot shape meter readings over time, and IMU motion sensors, providing motion capture readings over time. This new element expands the range of the motion twin, making it applicable both inside and outside the hospital (Fig. 17). In this case, one can imagine that data-level fusion may be done at the mobile terminal, the time and space parameter analysis of gait (feature-level fusion) could be carried out at the edge base station, and decision-level fusion might be completed in the cloud.

4.2.2 Experimental Motion Paradigm for Data Acquisition

We let the subject wear a motion capture device with 17 IMUs tied to 17 key parts of the body, or wear the smart shoes with only one IMU at the tongue of each shoe. The acceleration data from the IMUs is collected to evaluate the motion of the subject.
Fig. 17 The intelligent footwear equipment
Fig. 18 Experimental paradigm of motion
The subject walks on pressure trails with 100 pressure sensors per square decimeter, and uses smart shoes with pressure sensors distributed over the sole. The subject is required to perform some cognitive tasks (such as regularly adding and subtracting numbers) while walking; the cognitive task is meant to distract attention so that the subject does not consciously correct his or her gait. As shown in Fig. 18, the subject must walk with a natural gait for 7 m in a straight line, turn, rest for 1 min, and repeat this 10 times. The five areas of the sole are evenly distributed according to Fig. 18. First, one of the motion states, such as standing, squatting, rising, or walking, is selected for training; walking is selected in this case. Then the 3D (left-right, front-rear, and vertical) pressure data of the MFF, LFF, LMF, and HEEL areas of each sole are collected, and the proportion of each pressure in the total pressure, together with the corresponding time data, is computed at a sampling rate of at least 100 Hz. This data is transmitted to the analysis module via Bluetooth, Wi-Fi, or cellular networks.

4.2.3 Data Analytics for Motion State Recognition

Having collected the acceleration data from the IMUs and the 3D pressure data, we can start the analytic process of feature extraction, feature fusion, recognition modeling, and model refinement, but only after noise reduction has been performed on the data. Figure 19 and this section elaborate on the process. Noise reduction: The main interference source in the acquisition process is EMI in the circuit, a high-frequency noise, whereas human body movement is mainly a low-frequency signal below 50 Hz. The pressure data collected from the four regions are subjected to wavelet decomposition, high-frequency wavelet
Fig. 19 Flow chart of the motion state recognition
coefficient processing, and wavelet reconstruction, in three steps. The wavelet transform is applied to the time-domain pressure signals of the four regions: the mixed signals of multiple frequency components are decomposed into different frequency bands, the bands are then processed according to the different characteristics of the various sub-signals in the frequency domain, and finally gait data with a high signal-to-noise ratio are obtained. Feature extraction: Considering the overall characteristics of the gait, such as periodicity, rate of change, and acceleration, as well as its detailed characteristics, such as its spectral content, wavelet packet decomposition and difference algorithms were used to extract frequency-domain and time-domain features from the 3D pressure in the four regions, and a Support Vector Machine (SVM) was used to identify them. Feature fusion: The minimum optimal wavelet packet set is first selected, by the fuzzy C-means method, from the multiple wavelet packets of the extracted gait frequency-domain features, and the minimum optimal wavelet packet decomposition coefficients are then selected from that set, again by fuzzy C-means. Based on fuzzy affiliation ranking, this yields the minimum optimal gait frequency-domain feature subset, which is combined with the gait time-domain features to obtain the fused gait feature set.
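As an illustration of the noise-reduction and feature-extraction steps described above, here is a minimal sketch using the PyWavelets library; the wavelet family ('db4'), decomposition levels, and threshold rule are illustrative assumptions rather than the platform's actual parameters.

```python
import numpy as np
import pywt  # PyWavelets

def denoise_pressure(signal, wavelet="db4", level=4):
    """Suppress high-frequency EMI by soft-thresholding wavelet detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail band (illustrative choice)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def gait_features(signal, wavelet="db4", level=3):
    """Toy time- and frequency-domain features from one plantar-pressure region."""
    clean = denoise_pressure(signal)
    time_feats = [clean.mean(), clean.std(), np.abs(np.diff(clean)).mean()]
    # Wavelet-packet band energies serve as simple frequency-domain features
    wp = pywt.WaveletPacket(clean, wavelet, maxlevel=level)
    freq_feats = [float(np.sum(node.data ** 2)) for node in wp.get_level(level, "natural")]
    return np.array(time_feats + freq_feats)

# Example: one region's pressure trace sampled at 100 Hz (synthetic stand-in data)
x = np.sin(2 * np.pi * 1.0 * np.arange(1000) / 100) + 0.05 * np.random.randn(1000)
print(gait_features(x).shape)
```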
Recognition modeling: SVM is used for gait recognition, with a nonlinear radial basis kernel function mapping the linearly indistinguishable low-dimensional space to a linearly distinguishable high-dimensional space. The classifier is trained first, and the gait samples are then recognized by the classifier. Assuming that n classes of individual gait samples have been registered in the gait database, the samples are input to the classifier for training, and according to the input values the classifier determines which class, from 1 to n, they match. If the range from 1 to n is exceeded, a new class n + 1 is registered, and the classifier is updated. For each motion state of the human body, such as standing, squatting, rising, and walking, the above methods can be applied separately to form a set of recognition models for the corresponding motion states. This set of motion state recognition models can be used to provide remote health monitoring services, e.g., detecting different motion states in order to catch signs of morbidity that may be indicated by changes in activity composition. The recognition model of each motion state can be further enhanced with subclasses. For example, in healthy people the peak pressure distributions of left and right plantar pressure while standing and walking are essentially the same, whereas in diabetic patients and those with critical conditions, reduced joint mobility leads to a significant increase in forefoot/hindfoot pressure and an uneven pressure distribution. Hence the standing state can be further classified as either normal or pathological standing. Adaptive model refinement: The SVM classifier is capable of continuous adaptive optimization and improvement. Each time a new sample is input, the recognition rate of the SVM classifier is calculated using cross-validation as a fitness evaluation, without setting a termination value for the genetic algorithm. The termination condition is instead set by the "higher-than-high" rule: if the recognition rate of the training is higher than the existing one, the parameters are kept as the optimal parameters; otherwise, operations such as selection, crossover, and mutation are executed to further optimize the training parameters.

4.2.4 The Motion Twin Setup and Experiment Progress

The multi-system fusion platform of the motion twin and the data collection interface are shown in Figs. 20 and 21. We selected 69 in-hospital patients with Parkinson's symptoms for the experiment and analyzed their stride length, swing phase, and stance phase. The results show that the error, reported in Table 3, is within acceptable limits when compared to existing studies [28].
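A minimal sketch of the SVM-based motion-state recognition described in Sect. 4.2.3, using scikit-learn with an RBF kernel and a cross-validated recognition rate as the fitness value; the feature matrix, labels, and hyperparameters are placeholders, not the values used in the experiment.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder fused gait features: rows = samples, columns = time/frequency features
X = np.random.randn(200, 16)
y = np.random.randint(0, 4, size=200)  # e.g., standing, squatting, rising, walking

# The RBF kernel maps the low-dimensional feature space to a higher-dimensional one
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

# Cross-validated recognition rate, used here as the fitness value for refinement
fitness = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated recognition rate: {fitness:.3f}")

clf.fit(X, y)
print("predicted motion state:", clf.predict(X[:1]))
```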
Fig. 20 Diagram of IMU and gait analysis fusion system
Fig. 21 User interface of the motion twin system

Table 3 Experimental gait analysis
                               Step size (cm)   Swing phase (ms)   Stance phase (ms)
Error mean                     5.00             3.27               4.14
Standard deviation of error    6.28             2.46               4.05
4.3 BCI and Vital Sign Components of the Synesthesia Twin

4.3.1 Brainwaves and BCIs

Our thoughts, intentions, and actions stimulate the activity of neurons in the brain. Neural oscillations, or brainwaves, are rhythmic or repetitive patterns of neural activity and are very important physiological signals of the brain. Extensive research
efforts and medical studies aimed at decoding the mind have been pursued via these signals. Electroencephalography (EEG), collected from the scalp, and electrocorticography (ECoG), collected from under the dura mater, have been the most prominent brainwave acquisition methods (Fig. 22). There are distinctive time-frequency characteristics in brainwaves. Since Hans Berger, the inventor of EEG, discovered the alpha rhythm and its suppression (substitution by the faster beta waves) when the subject opens his eyes, it has been known that the state of the mind is reflected in the brainwaves. The more intense the brain activity, the higher the frequency of the electrical activity: when a person is at rest, the EEG signal flattens accordingly, whereas heightened attention produces much higher frequency components in the EEG signal, such as the gamma wave, as shown in Fig. 23. It has also been found that distinctive spatial properties exist in brainwaves for various senses and activities. Different regions of the brain are associated with
Fig. 22 Basic principles of brainwaves and BCI
Fig. 23 Time-frequency variations of EEG signals
Fig. 24 Spatial distributions of ECoG signals
different functions, and specific correlation characteristics exist among regions. Medical researchers have even demonstrated the feasibility of ECoG-based BCIs that enable control of lower- and upper-limb movements and communication functions (Fig. 24). Non-invasive BCI measures EEG from the scalp, whereas minimally invasive BCI measures electrocorticography (ECoG) directly from under the dura mater, which is much closer to the cortex. ECoG has higher temporal and spatial resolution than scalp EEG, does not suffer from the attenuation of signals by the skull and scalp, and therefore has a significantly better signal-to-noise ratio. However, given that it is an invasive approach to the brain, we have not pursued it in this platform; non-invasive BCI was used instead. EEG, which collects scalp electrical signals, has a weak signal amplitude of less than 200 μV and a low signal-to-noise ratio, but because it is harmless it is attractive to the non-medical market. How to enhance the signal in daily environments and improve its signal-to-noise ratio is an important research topic; Fig. 25 shows the active development of various enhancements. The non-invasive BCI signal generation paradigm includes both active and passive mechanisms. Active BCI signals are produced through active thought, as in motor imagery BCI (MI-BCI). Passive BCI signals involve our senses, including visual, auditory, and other stimuli. Multimodal stimulation can activate the brain areas responsible for the corresponding perceptual functions simultaneously, enhancing the human response to external stimuli. In addition, since the different EEG modalities do not carry exactly the same information, fusion of multimodal data has become an inevitable trend in BCI systems seeking more comprehensive EEG information.
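To illustrate the time-frequency characteristics discussed above (Fig. 23), the sketch below estimates relative alpha, beta, and gamma band power from a single EEG channel using Welch's method; the sampling rate and band limits are common textbook values and are not taken from the platform.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 80)}  # Hz, typical limits

def relative_band_powers(eeg, fs=250):
    """Relative power per band for one EEG channel (1-D array in microvolts)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    total = psd.sum()
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum() / total)
            for name, (lo, hi) in BANDS.items()}

# Toy signal: a 10 Hz alpha rhythm plus noise, 4 s at an assumed 250 Hz sampling rate
t = np.arange(0, 4, 1 / 250)
eeg = 20 * np.sin(2 * np.pi * 10 * t) + 5 * np.random.randn(t.size)
print(relative_band_powers(eeg))  # alpha should dominate
```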
Fig. 25 BCI technologies: timeline (1976–2018) of BCI paradigms, from traditional paradigms (passive BCI, MI-based, visual, SSVEP-based, P300-based, and CSP-based BCI) to novel paradigms (language decoding, non-eye-movement-dependent, hybrid, synergistic, ErrPs, and miniature-ERP-based BCI)
4.3.2 Data Analytics for Multi-source Fusion of BCI

Combined with the characteristics of the future distributed networks and computing of mobile operators, a multi-paradigm fusion method for online analysis of EEG signals is proposed in this section. A CNN-BLSTM-AdaBoost model is constructed for feature extraction and classification of motor imagery EEG (MI-EEG) and sensory event-related potential (ERP) signals [29]. The combination of a Convolutional Neural Network (CNN) and a Bidirectional Long Short-Term Memory recurrent neural network (BLSTM) can fully extract the time-frequency characteristics of EEG signals, and an improved AdaBoost algorithm is then adopted to create a strong classifier, which further enhances the predictive ability of the model. The CNN-BLSTM-AdaBoost model combines CNN and BLSTM to extract a feature matrix that reflects the characteristics of the EEG signal, and then uses a fully connected (FC) network to further fuse the feature information extracted by the BLSTM. The extracted feature matrices are then mapped into the sample label space to achieve end-to-end EEG classification and recognition. Finally, the predictions of all CNN-BLSTM predictors are integrated using the AdaBoost algorithm to obtain the final strong classifier. The structure of the CNN-BLSTM-AdaBoost model is shown in Fig. 26. This is a hierarchical fusion mechanism that matches the distributed network architecture and computing-power distribution of mobile operators: data fusion at the terminal, feature fusion at the edge, and decision fusion in the cloud. The algorithm thus exploits the network's processing power to enable edge computing in a cloud-edge collaborative network architecture.

Terminal side: Data-level fusion for data from multiple sources

The acquisition of raw EEG data and the CNN computation are carried out on powerful terminals such as mobile phones. The structure of the CNN, shown in Fig. 27, includes the convolution module, the pooling layer, the batch normalization layer, and the IM2COL layer. The purpose of the parallel convolution kernels is to extract features of different scales and then fuse them, which works better than a single convolution kernel.
Fig. 26 CNN-BLSTM-Adaboost model
Fig. 27 CNN architecture
In the design of the convolution kernel, larger kernels should be avoided as far as possible, because EEG sequences do not have dense feature points like images; that is, the area around a given signal sample contains many irrelevant features. The larger the convolution kernel, the larger its receptive field, leading to the extraction of more useless features and an increasing amount of unnecessary computation. In general, CNNs with smaller convolution kernels and greater depth are preferable.

Edge side: Experimental feature-level fusion
LSTM is an improvement of the Recurrent Neural Network (RNN) that retains the RNN's ability to accurately model sequence data. To address the gradient vanishing or explosion that appears during RNN training, LSTM adds mechanisms such as a memory cell C, an input gate I, a forget gate F, and an output gate O. The combination of these gating mechanisms and memory cells greatly improves the ability of the RNN to process long sequences, effectively mitigates the vanishing-gradient problem, can adapt to long-term dependence, and is conducive to the extraction of serialized features from EEG signals. BLSTM merges LSTM units running in two different directions, as shown in Fig. 28. The features extracted by the CNN are input to this network; a dimension conversion layer converts the sequence feature dimensionality and acts as a bridge between the convolutional layer and the recurrent layer. Two unidirectional LSTM networks are stacked in the forward and reverse directions, predicting the forward input signal sequence from beginning to end and the reverse input signal sequence from end to beginning, respectively. Using error back propagation, the two-layer LSTM predicts the input at the current time t after information fusion in the hidden layer, and the output is jointly determined by the two LSTM networks. The output then passes through a two-layer FC network and is sent to Softmax for classification, forming the output of the CNN-BLSTM module.

Fig. 28 BLSTM model
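A minimal PyTorch sketch of a CNN-BLSTM module of the kind described above, with parallel small convolution kernels, a bidirectional LSTM, and an FC classification head; the channel counts, kernel sizes, and the four-class output are illustrative assumptions and not the exact architecture of the model in [29].

```python
import torch
import torch.nn as nn

class CNNBLSTM(nn.Module):
    """Parallel multi-scale 1-D convolutions -> BiLSTM -> FC classifier."""
    def __init__(self, n_channels=22, n_classes=4, hidden=64):
        super().__init__()
        # Two parallel branches with small kernels extract features at different scales
        self.branch3 = nn.Sequential(nn.Conv1d(n_channels, 16, kernel_size=3, padding=1),
                                     nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(2))
        self.branch5 = nn.Sequential(nn.Conv1d(n_channels, 16, kernel_size=5, padding=2),
                                     nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(2))
        self.blstm = nn.LSTM(input_size=32, hidden_size=hidden,
                             batch_first=True, bidirectional=True)
        self.head = nn.Sequential(nn.Linear(2 * hidden, 64), nn.ReLU(),
                                  nn.Linear(64, n_classes))

    def forward(self, x):                # x: (batch, channels, time)
        feats = torch.cat([self.branch3(x), self.branch5(x)], dim=1)  # fuse scales
        feats = feats.permute(0, 2, 1)   # -> (batch, time, features) for the LSTM
        out, _ = self.blstm(feats)
        return self.head(out[:, -1])     # logits; use cross-entropy loss for training

model = CNNBLSTM()
logits = model(torch.randn(8, 22, 250))  # e.g., 8 trials, 22 EEG channels, 1 s at 250 Hz
print(logits.shape)                       # torch.Size([8, 4])
```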
Cloud center: Decision-level fusion

The edge side periodically uploads the fused features and pre-processed data to the cloud center. The cloud center then integrates the multi-modal training data from all edge devices to meet the needs of diverse applications. The AdaBoost algorithm is used to integrate the multi-modal classifiers, achieving decision-level fusion and delivering an EEG signal classification model suitable for multiple modalities, as shown in Fig. 29. The basic idea of the AdaBoost algorithm is that, for a complex task, a judgment obtained by appropriately combining the judgments of multiple experts is usually better than the judgment of any one expert alone. AdaBoost is generally used to enhance weak classifiers, fusing multiple weak classifiers into a strong one. This study used the AdaBoost algorithm to integrate multiple modality-specific classifiers into a universal classifier suitable for multiple modalities.

4.3.3 The Brain Twin Setup and Experiment Progress

The multi-system fusion platform of the brain twin and the acquisition interface are shown in Figs. 30 and 31. The multi-modal EEG includes visual ERP, audio ERP, and MI-EEG. Additional physiological signals, such as ECG and EMG, as well as vital signs-based emotion, will be integrated in the future.
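A minimal sketch of the cloud-side, AdaBoost-style decision-level fusion described in Sect. 4.3.2 (Fig. 29), using scikit-learn's boosted weak learners as stand-ins for the per-modality CNN-BLSTM predictors; the data and the number of estimators are illustrative placeholders.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Placeholder per-modality feature vectors uploaded from the edge (vision ERP, audio ERP, MI-EEG)
X = np.random.randn(300, 24)
y = np.random.randint(0, 4, size=300)  # four MI classes: left hand, right hand, feet, tongue

# Weak learners are boosted and combined by weighted voting into a strong classifier
fusion = AdaBoostClassifier(n_estimators=50)
fusion.fit(X, y)
print("decision-level prediction:", fusion.predict(X[:3]))
```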
Fig. 29 The flow chart of the improved AdaBoost algorithm for multi-classifier
Fig. 30 Diagram of multi-source fusion system with BCI
Fig. 31 Motor imagery, audio/video ERP enhanced EEG acquisition interface
By combining the active motor imagery paradigm with passive visual and auditory ones, new possibilities open up: the participants' active motor imagery is augmented with additional passive stimuli, i.e., auditory and visual guidance for the subjects. We expect the composite stimuli to make the acquired signals more distinctive and to allow us to build preliminary classification models from them. Taking a four-category motor imagery experimental demonstration system as an example, EEG signals were used to judge the direction of arrows; passive visual evoked potentials were combined with active motor imagery for this judgment. The experimental program was as follows. Three subjects with normal corrected vision and normal limb function participated in the data acquisition. The paradigm consisted of four different motor imagery tasks, namely motor imagery of the left hand (class 1), right hand (class 2), both feet (class 3), and tongue (class 4). Each subject attended 2 sessions on 2 days. Each session comprised 6 parts separated by short breaks, and each part consisted of 48 trials (12 for each of the four classes), for a total of 288 trials per session.
Fig. 32 Experimental paradigm flow: fixation cross and beep at t = 0 s, cue at t = 2 s, motor imagery until t = 6 s, then a break (timeline in seconds, 0–8)
To eliminate eye-movement artifacts, which would decrease the signal-to-noise ratio of the system, the participants were instructed, once before each of the 6 parts of a session, to: (1) keep their eyes open for 2 min (looking at the screen); (2) close their eyes for 1 min; (3) perform eye movements for 1 min. At the beginning of each trial (t = 0 s), a fixation cross appears on a black screen and a short alarm tone is played. Two seconds later (t = 2 s), a cue in the form of an arrow pointing left, right, down, or up appears and remains on the screen for 1.25 s. This prompts the subject to perform the required motor imagery task until the fixation cross disappears from the screen at t = 6 s. The screen then goes black again, followed by a short break. Visual cues: subjects see a clear directional arrow on the screen for 4 s; later scenarios can add shading changes (flicker). Auditory cues: subjects hear a sound cue at the same time as they see the picture, with a short tone lasting 0.5 s at the beginning and end of each image (Fig. 32). Preliminary performance results on the test set, in four different aspects, are shown in Fig. 33. The system achieved an average accuracy of 90.78%, with every individual section accuracy above 80%, which is 20% higher than the single-mode test with plain EEG.
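For clarity, the trial timing just described can be encoded as event markers for epoching the EEG record; a minimal sketch follows, in which the sampling rate is an assumed value not specified in the text.

```python
# Illustrative encoding of one trial of the paradigm described above (times in seconds)
TRIAL_EVENTS = [("fixation_cross_and_beep", 0.0),
                ("cue_arrow_onset", 2.0),   # cue remains on screen for 1.25 s
                ("motor_imagery_end", 6.0), # fixation cross disappears
                ("break_start", 6.0)]

FS = 250  # assumed EEG sampling rate (Hz)

def event_samples(events=TRIAL_EVENTS, fs=FS):
    """Convert the event timeline into sample indices for slicing the EEG record."""
    return {name: int(round(t * fs)) for name, t in events}

print(event_samples())
```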
5 Global Status of the PDT Ecosystem

The "Digital Twin" was identified by Gartner in 2017 as one of the technologies expected to grow exponentially, and it is considered a key aspect of 6G application scenarios. Compared to DT technologies in the manufacturing industry, the PDT is very much a novel concept, still in the early stages of exploratory research [30, 31]. Not to be confused with the virtual digital human characters recently adopted in some live performances and real-time interactions, the PDT is not a 3D reconstruction of exterior appearance only, but aims at a reasonable real-time mapping of the actual human body and mind. BCI, IMU, nanosensor, and WBAN technologies are among the key initial enabling elements.
Fig. 33 Multi-modal test results
5.1 BCI and Biosensors

BCI technology involves interdisciplinary integration. At present, researchers come mainly from scientific research institutes and universities, while industry players have been exploring its applications in medicine, education, and smart homes. The US government took the lead in putting forward a brain science plan in 1989 and attaches particular importance to innovative BCI research and its application in the military and medical fields. The European Union launched the Human Brain Project in 2013 and has done a lot of work on BCI technology and equipment, as well as on social ethics research. The Japanese brain science plan was launched in 2014 to draw brain maps through integrated neural technology for disease research, and it also included BCI as a focus. China likewise attaches great importance to brain science in its 14th five-year plan, where neuromorphic computing and BCI research and development have a prominent role. Molecular communications is also a frontier technology still in the stage of basic theoretical research. Its underlying technology, biological nanosensors, has been widely studied and applied, for example in nucleic acid detection, tumor marker detection, and blood glucose detection [32]. The outbreak of Covid-19 has particularly accelerated the development of biosensors. To facilitate research into this frontier technology integrating biology and information technology, the IEEE established the Molecular, Biological and Multi-Scale Communications Technical Committee in 2008. It has since delivered IEEE
P1906.1, the Nanoscale and Molecular Communications Framework, providing a definition, terminology, conceptual model, and standard metrics for nanoscale networks, in 2015 [33]; and IEEE P1906.1.1, the Standard Data Model for Nanoscale Communications Systems, in 2020 [34]. In addition, the National Science Foundation of the United States (NSF), the Japan Society for the Promotion of Science, the National Natural Science Foundation of China (NSFC), and the EU Horizon 2020 Programme have all been funding research related to molecular communications.
5.2 5G+ Medical and Health

Intelligent mobile health and medical services have been pursued under "5G + medical/health", while a "customized" PDT for everyone remains a very challenging stretch goal for the 6G era. Secure communication networks between each user and his or her own PDT, and also among PDTs, will be a must. Supported by such a powerful heterogeneous communication network, PDTs will drive rapid development in medical treatment, education, and defense, among other aspects of our lives and society. The early applications of WBANs are mainly in the medical field, such as continuous monitoring and recording of chronic diseases including diabetes and heart disease, patient data collection, and disability assistance, to name a few. Handheld mobile terminals, serving as aggregation devices for the various BAN sensors, can be used to monitor blood pH, glucose concentration, carbon dioxide concentration, etc., and to trigger the necessary responses to any abnormality in real time. The response for a diabetic patient might be to start the insulin pump via a BAN command, inject insulin, and send the information to the hospital through a mobile network. Since the onset of Covid-19, heterogeneous networks with 5G have been widely applied in the medical field, for human health monitoring, disability assistance, movement monitoring, telemedicine diagnosis, and even remote surgery.
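A minimal sketch of the closed-loop response just described, in which an aggregation terminal checks BAN sensor readings against thresholds and triggers an actuator command plus a hospital notification; the sensor names, threshold values, and transport functions are illustrative assumptions, not part of any WBAN standard.

```python
from dataclasses import dataclass

# Illustrative glucose limits (mg/dL); real clinical thresholds are patient-specific
GLUCOSE_LOW, GLUCOSE_HIGH = 70.0, 180.0

@dataclass
class BanReading:
    sensor: str        # e.g., "glucose", "blood_ph", "co2"
    value: float
    timestamp: float

def handle_reading(r: BanReading, send_ban_command, notify_hospital):
    """Aggregation-terminal logic: react to abnormal readings in real time."""
    if r.sensor == "glucose" and r.value > GLUCOSE_HIGH:
        send_ban_command("insulin_pump", "deliver_bolus")   # hypothetical BAN command
        notify_hospital(f"hyperglycemia {r.value} mg/dL at {r.timestamp}")
    elif r.sensor == "glucose" and r.value < GLUCOSE_LOW:
        notify_hospital(f"hypoglycemia {r.value} mg/dL at {r.timestamp}")

# Example wiring with stub transports standing in for the BAN link and the mobile network
handle_reading(BanReading("glucose", 210.0, 1700000000.0),
               send_ban_command=lambda dev, cmd: print("BAN ->", dev, cmd),
               notify_hospital=lambda msg: print("5G ->", msg))
```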
6 Conclusion

The emergence of PDTs will revolutionize people's daily lives, making it easier to travel, play, and work. The PDT spans a very wide range of disciplines and is a product of the deep integration of information technology and biomedicine. It will rely on the joint development of many technologies from multiple disciplines, such as brain-computer communications, molecular communications, synesthesia interconnection, AI, and intelligent interaction. We have presented a variety of healthcare applications as part of a very early-stage list of accomplishments in our PDT platform development. This is still the very beginning of a very exciting journey; basic applied research and multi-disciplinary integration remain the keys to realizing the PDT vision.
References

1. Bertin, E., Crespi, N., & Magedanz, T. (Eds.). (2021, December). Shaping future 6G networks: Needs, impacts and technologies. Wiley-IEEE Press. ISBN: 978-1-119-76551-6.
2. Liu, G., Huang, Y., Li, N., et al. (2020). Vision, requirements and network architecture of 6G mobile network beyond 2030. China Communications, 17(9), 92–104.
3. Musk, E., & Neuralink. (2019). An integrated brain-machine interface platform with thousands of channels. BioRxiv, 703801. https://doi.org/10.1101/703801
4. "First virtual student, completely driven by AI". https://lifearchitect.ai/zhibing-hua/
5. Wolpaw, J. R., Birbaumer, N., Heetderks, W. J., McFarland, D. J., Peckham, P. H., Schalk, G., et al. (2000). Brain-computer interface technology: A review of the first international meeting. IEEE Transactions on Rehabilitation Engineering, 8(2), 164–173.
6. Fernández-Medina, M., Ramos-Docampo, M. A., Hovorka, O., Salgueiriño, V., & Städler, B. (2020). Recent advances in nano- and micromotors. Advanced Functional Materials, 30(12), 1908283.
7. Kulakowski, P., Turbic, K., & Correia, L. M. (2020). From nano-communications to body area networks: A perspective on truly personal communications. IEEE Access, 8, 159839–159853.
8. Kuscu, M., Dinc, E., Bilgin, B. A., Ramezani, H., & Akan, O. B. (2019). Transmitter and receiver architectures for molecular communications: A survey on physical design with modulation, coding, and detection techniques. Proceedings of the IEEE, 107(7), 1302–1341.
9. Akan, O. B., Ramezani, H., Khan, T., Abbasi, N. A., & Kuscu, M. (2017). Fundamentals of molecular information and communication science. Proceedings of the IEEE, 105(2), 306–318.
10. Zafar, S., Nazir, M., Bakhshi, T., Khattak, H. A., Khan, S., Bilal, M., et al. (2021). A systematic review of bio-cyber interface technologies and security issues for internet of bio-nano things. IEEE Access, 9, 93529–93566.
11. Akyildiz, I. F., Ghovanloo, M., Guler, U., Ozkaya-Ahmadov, T., Sarioglu, A. F., & Unluturk, B. D. (2020). PANACEA: An internet of Bio-NanoThings application for early detection and mitigation of infectious diseases. IEEE Access, 8, 140512–140523.
12. Akyildiz, I. F., Pierobon, M., Balasubramaniam, S., & Koucheryavy, Y. (2015). The internet of Bio-Nano things. IEEE Communications Magazine, 53(3), 32–40.
13. Mapara, S. S., & Patravale, V. B. (2017). Medical capsule robots: A renaissance for diagnostics, drug delivery and surgical treatment. Journal of Controlled Release, 261, 337–351.
14. Ketterl, T. P., Arrobo, G. E., & Gitlin, R. D. (2013, April). SAR and BER evaluation using a simulation test bench for in vivo communication at 2.4 GHz (pp. 1–4). IEEE WAMICON.
15. He, C., Liu, Y., Ketterl, T. P., Arrobo, G. E., & Gitlin, R. D. (2014). MIMO in vivo (pp. 1–4). WAMICON.
16. He, C., Yang, L., Ketterl, T. P., Arrobo, G. E., & Gitlin, R. D. (2014). Performance evaluation for MIMO in vivo WBAN systems. In 2014 IEEE MTT-S International Microwave Workshop Series on RF and Wireless Technologies for Biomedical and Healthcare Applications (IMWS-Bio2014). https://doi.org/10.1109/IMWS-BIO.2014.7032380
17. Jayaram, V., Alamgir, M., Altun, Y., Scholkopf, B., & Grosse-Wentrup, M. (2016). Transfer learning in brain-computer interfaces. IEEE Computational Intelligence Magazine, 11(1), 20–31.
18. Feng, O., & Zhang, Y. (2016). Review on research progress of wireless body area network. Electronic Science and Technology, 12, 173–179.
19. IEEE Standards Association. (2012). IEEE Std 802.15.6-2012, IEEE standard for local and metropolitan area networks – Part 15.6: Wireless body area networks.
20. Li, Z. P., Zhang, J., Cai, S. B., et al. (2013). Review on molecular communication. Journal on Communications.
21. Vivo Communications Research Institute. (2020). The vision, needs and challenges of 6G (White Paper). https://wenku.baidu.com/view/7bb3ba25cf2f0066f5335a8102d276a2012960ff.html
22. Fox, D. (2021, December 14). Stretchy electronics go wireless for flexible wearables. Nature. https://doi.org/10.1038/d41586-021-03757-z. PMID: 34907371.
23. Geethanjali, P. (2016, July). Myoelectric control of prosthetic hands: State-of-the-art review. Medical Devices, 9, 247–255. https://doi.org/10.2147/MDER.S91102, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4968852/. PMID: 27555799.
24. de Lau, L. M., & Breteler, M. M. (2006). Epidemiology of Parkinson's disease. Lancet Neurology, 5, 525–535.
25. Guo, X., Zuxin, C., Xu, C., et al. (2021). Intestinal dysfunction induces Parkinson's disease. Journal of Central China University of Science and Technology (Medical Edition), 4. https://doi.org/10.3870/j.issn.1672-0741.2021.04.023
26. Jankovic, J. (2008). Parkinson's disease: Clinical features and diagnosis. Journal of Neurology, Neurosurgery and Psychiatry, 79(4), 368–376.
27. Lewis, M. M., Du, G., Sen, S., Kawaguchi, A., Tmong, Y., Lee, S., et al. (2011). Differential involvement of striato- and cerebello-thalamo-cortical pathways in tremor- and akinetic/rigid-predominant Parkinson's disease. Neuroscience, 177, 230–239.
28. de Oliveira Gondim, I. T. G., de Souza, C. C. B., Rodrigues, M. A. B., Azevedo, I. M., de Sales Coriolano, M., & Lins, O. G. (2020, September–October). Portable accelerometers for the evaluation of spatio-temporal gait parameters in people with Parkinson's disease: An integrative review. Archives of Gerontology and Geriatrics, 90, 104097. https://doi.org/10.1016/j.archger.2020.104097
29. Zheng, Z., Chih-Lin, I., et al. A lightweight multi-system fusion BCI platform for personal digital twin, in draft.
30. Barricelli, B. R., Casiraghi, E., & Fogli, D. (2019). A survey on digital twin: Definitions, characteristics, applications, and design implications. IEEE Access, 7, 167653–167671.
31. Barricelli, B. R., Casiraghi, E., Gliozzo, J., Petrini, A., & Valtolina, S. (2020). Human digital twin for fitness management. IEEE Access, 8, 26637–26664.
32. Khanna, V. K. (2021, February). Nanosensors: Physical, chemical, and biological. CRC Press. ISBN 9780367457051.
33. IEEE 1906.1-2015 – IEEE recommended practice for nanoscale and molecular communication framework. https://standards.ieee.org/standard/1906_1-2015.html. December 2015.
34. IEEE 1906.1.1-2020 – IEEE standard data model for nanoscale communication systems. https://standards.ieee.org/project/1906_1_1.html. September 2020.

Chih-Lin I is the CMCC Chief Scientist of Wireless Technologies. She received her Ph.D. in Electrical Engineering from Stanford University. She has won the 2005 IEEE ComSoc Stephen Rice Prize, the 2018 IEEE ComSoc Fred W. Ellersick Prize, the 7th IEEE Asia-Pacific Outstanding Paper Award, and the 2015 IEEE Industrial Innovation Award for Leadership and Innovation in Next-Generation Cellular Wireless Networks. She is the Chair of the O-RAN Technical Steering Committee and an O-RAN Executive Committee Member, the Chair of the FuTURE 5G/6G SIG, the Chair of the WAIA (Wireless AI Alliance) Executive Committee, an Executive Board Member of GreenTouch, a Network Operator Council Founding Member of ETSI NFV, a Steering Board Member and Vice Chair of WWRF, a Steering Committee member and the Publication Chair of the IEEE 5G and Future Networks Initiatives, the Founding Chair of the IEEE WCNC Steering Committee, the Director of the IEEE ComSoc Meetings and Conferences Board, a Senior Editor of IEEE Transactions on Green Communications and Networking, an Area Editor of ACM/IEEE Transactions on Networking; Executive Co-chair of IEEE Globecom 2020, IEEE WCNC 2007, IEEE WOCC 2004 and
2000; a member of IEEE ComSoc SDB, SPC, and CSCN-SC; and a Scientific Advisory Board Member of Singapore NRF. She has published over 200 papers in scientific journals, book chapters, and conferences and holds over 100 patents. She is coauthor of the book "Green and Software-defined Wireless Networks – From Theory to Practice" and has co-edited two books: "Ultra-dense Networks – Principles and Applications" and "5G Networks – Fundamental Requirements, Enabling Technologies, and Operations Management". She is a Fellow of the IEEE and a Fellow of the WWRF. Her current research interests center around ICDT Deep Convergence: "From Green & Soft to Open & Smart".

Zhiming Zheng is a senior researcher on the 6G Twin Domain network at the China Mobile Research Institute. He has served as a member of the Inventors Association and as a national Smart City expert. He received a master's degree in Electronics and Communication Engineering from the Beijing University of Posts and Telecommunications and a master's degree in Business Administration from Xiamen University, and received a doctoral degree in Advanced Manufacturing from the Institute of Medical Engineering and Translational Medicine, Tianjin University. He is interested in science fiction, especially in interdisciplinary subjects of biological information. His research interests include the digital twin, Internet-of-Brain, brain-computer communication, in vitro and in vivo molecular communication, body area networks, big data, and the motion twin. He has 15 years of experience in smart medicine and body area networks, participated in the formulation of 8 enterprise standards covering wearable devices, mobile healthcare, and VR/AR, led the publication of the white paper on the 6G twin domain network at the First World 6G Communication Conference, completed 19 invention patents as first inventor in the fields of big data, artificial intelligence, and blockchain, and has written 3 books on information technology. His BI big data practice project helped his team win first prize in the National Economic competition; he has won 7 ministerial-level awards, including the 19th National Invention Award, and was the first innovation interviewee of China Mobile magazine's Mobile Weekly.
Digital Twin and Cultural Heritage – The Future of Society Built on History and Art

Olivia Menaguale
Abstract Until 30 years ago, Cultural Heritage studies entailed the use of books as well as the direct viewing of many works of art, from paintings to sculptures, from architectural masterpieces to museums, to entire cities. The material assets forming a country's Cultural Heritage have a vital and irreplaceable intrinsic value, but due to their very physical nature they are subject to damage, significant modification, and even loss. A country's Cultural Heritage has the power to stimulate the emotions of those experiencing it, and this is why it has always sparked the universal desire to pass it on by narrating it and reproducing it. Already in the first half of the third century BCE, Callimachus wrote "A Collection of Wonders around the World", which unfortunately has been lost. The first surviving literary work that includes a list of world wonders is an epigram by the poet Antipater of Sidon included in the Palatine Anthology (9, 58): "I have gazed on the walls of impregnable Babylon along which chariots may race, and on the Zeus by the banks of the Alpheus, I have seen the hanging gardens, and the Colossus of Helios, the great man-made mountains of the lofty pyramids, and the gigantic tomb of Mausolus; but when I saw the sacred house of Artemis that towers to the clouds, the others were placed in the shade, for the sun himself has never looked upon its equal outside Olympus." Today, well over 2000 years later, we are still studying the Seven Wonders of the World. Through time, the desire to reproduce Cultural Heritage pieces was fulfilled in different ways: from drawings to casts, from photographs to postcards. Today this desire/need to create twin copies is fulfilled digitally, since computers have become the language of our time and are opening up a multitude of scenarios and opportunities well beyond the mere reproduction of works of art. Step by step, and with some delay compared to other fields, the latest technologies are being applied to the Cultural Heritage world, and today this world is finally acquiring digital twins: exact copies of physical objects and settings resulting from the Internet of Things. Thanks to the Internet of Things, in fact, creating digital twins has become more accessible and financially attainable.

O. Menaguale (*) COO, DOMILA Ltd, Dublin, Ireland

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_34
What does the term digital twin mean in the world of art? The first thought that comes to mind is that a digital twin is just an exact copy of the physical work of art. This is indeed true; however, it is a lot more, because it has opened innovative scenarios on multiple fronts. Thanks to digital twins, we can replicate lost pieces, enter museum halls, print 3D copies, monitor and manage the security of works of art, and acquire a large volume of valuable data that can be used to conduct research and create multiple outputs; digital twins have also become works of art themselves. During the pandemic, the support provided by digital twins was extremely important, because it allowed remote viewing when in-person viewing had been suspended. The impact of Digital Twins is also expanding the offering within the Cultural Tourism sector, a key element for cultural heritage, which is experiencing increasing involvement by the users themselves. Faced with the huge volume of digital cultural heritage data available today, it appears clear that our real challenge is creating a common language based on global standards, and inevitably these standards will also have to include Artificial Intelligence.

Keywords Augmented reality · Cultural tourism · Digital art · Digital cultural heritage · Digital Twin · Internet of Things (IoT) for cultural heritage · Art NFT · Technologies for art conservation · Technologies for art restoration · Virtual archeology · Virtual reality · 3D scanning
1 Cultural Heritage: A Definition by UNESCO

The term cultural heritage encompasses several main categories of heritage:

• Cultural heritage
  – Tangible cultural heritage: movable cultural heritage (paintings, sculptures, coins, manuscripts); immovable cultural heritage (monuments, archaeological sites, and so on); underwater cultural heritage (shipwrecks, underwater ruins, and cities)
  – Intangible cultural heritage: oral traditions, performing arts, rituals
• Natural heritage: natural sites with cultural aspects such as cultural landscapes and physical, biological, or geological formations
• Heritage in the event of armed conflict

Beyond these categories, heritage also embraces objects and societies with written or symbolic records and oral traditions (mythology); the cultivation and consumption of food, whose remains show how people repeat their behaviour through time and allow us to understand them; and architecture, which is part of culture and constrains our behaviour: as Winston Churchill put it, "we shape our buildings and then they shape us".
2 Digital Twins: Two Definitions That Best Apply to Cultural Heritage

Digital twins facilitate the means to monitor, understand and optimize the functions of all physical entities and provide people continuous feedback to improve quality of life and well-being. [4]

The Digital Twins help us capture the past, understand the present and predict the future. (Adam Drobot)
3 The Digital Twin for Cultural Heritage and Its Value

The Digital Twin for Cultural Heritage is empowering the transition into a new dimension where the physical and the digital merge. Until recently, we could benefit from either an analogue or a digital version of Cultural Heritage. The way was opened a while ago by digital heritage, which through VR created many metaverses, for example of archaeological sites. Before 2020, digital media were used mainly to create digital models as copies or replicas of the physical original. Now, thanks to all the technologies that have become available and affordable, among which the IoT is crucial, the Cultural Heritage metaverse is emerging in many areas. Historical buildings, for example, are already using Digital Twin systems for microclimate monitoring and surveillance purposes (Fig. 1). Still, the nature of some types of cultural heritage does not allow them to meet all of the Digital Twin requirements. It can happen that a digital model has no means of exchanging data automatically with the physical system; in that case a change made to the physical object has no impact on the digital model, and vice versa. In all cases the Cultural Heritage Digital Twin helps to manage a series of needs, from fruition to conservation, and by its nature it will always differ from the industrial digital twin. It is precisely because of these differences that new scenarios are opening up, producing collaborations and projects that are leading to new frontiers.
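As an illustration of the microclimate-monitoring use just mentioned, the sketch below shows how a historical building's digital twin record might ingest periodic sensor readings and flag conservation-relevant deviations; the fields, thresholds, and class names are illustrative assumptions rather than those of any specific heritage platform.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MicroclimateReading:
    timestamp: float
    temperature_c: float
    relative_humidity: float  # percent

@dataclass
class HeritageBuildingTwin:
    """Minimal digital-twin record for a historical building's microclimate."""
    name: str
    rh_range: tuple = (45.0, 60.0)  # illustrative conservation band for relative humidity
    history: List[MicroclimateReading] = field(default_factory=list)

    def ingest(self, reading: MicroclimateReading) -> Optional[str]:
        self.history.append(reading)
        lo, hi = self.rh_range
        if not (lo <= reading.relative_humidity <= hi):
            return f"ALERT {self.name}: RH {reading.relative_humidity}% outside {lo}-{hi}%"
        return None

twin = HeritageBuildingTwin("Chapel of the Example")
print(twin.ingest(MicroclimateReading(1700000000.0, 19.5, 72.0)))
```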
Fig. 1 Merging physical and digital Cultural Heritage, O. Menaguale
The next step will see the integration of Digital Cultural Heritage within a broader system (the smart city), where each digital resource communicates or is connected. On that basis, it becomes possible to convert old digital scenarios of smart cultural heritage visits into immersive and participatory virtual environments, within enabling platforms. The starting point is to make the virtual visit more collective, more interactive, and more participative. The first important condition for making the digital twin effective in terms of communication is to obtain an ultra-realistic virtual restitution, able to offer the visitor an emotion like the one achieved during a real visit [6]. The second condition is to have an accurate 3D model.
4 Technologies in Use for the Creation of Digital Cultural Heritage

Starting from the 1990s, several technologies have been imported into the different areas of Cultural Heritage (Figs. 2 and 3).
5 Virtual Archeology

Reconstructing ancient and historical cities, bringing historical, mythological, and legendary characters to life, playing with ancient artefacts from collections around the globe in their original environments: these are the objectives in communicating our cultural heritage, and a dream for lovers of history worldwide. Thanks to cutting-edge digital 3D technologies, it is possible to communicate in ways that once would have been considered science fiction fantasy. Historical monuments and sites can now be reconstructed, piece by piece, in three-dimensional space using sophisticated computer graphics systems, creating what is called Virtual Heritage. This digital form of cultural heritage ensures that both present and future generations may share their historical identity, whether they are visiting onsite, in a
Fig. 2 Most used technologies for Cultural Heritage, O. Menaguale
Fig. 3 Different technologies for different dimensions, O. Menaguale
classroom, part of the audience at a virtual edutainment ("educational entertainment") centre, or thousands of kilometres apart in different countries or continents. In recent decades, archaeologists and IT specialists have brought to life new concepts: Virtual Museums, Digital Cultural Heritage, and Virtual Archaeology. Paul Reilly first used the term "virtual archaeology" in 1991, and in 30 years the developments have been revolutionary, to the point that we can map archaeological sites from space and underwater. The last 10 years have offered interactive and immersive solutions: Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), the gradations between the real and the virtual environment. In an MR environment, the physical interacts with digital objects through smartphones, tablets, or smart glasses (VR glasses). At the source of the whole digital process lies the work of the scientific community, which provides all the documentation needed to achieve digital models that are verified throughout every step.
Fig. 4 Virtual reconstructions of Ancient Rome, UCLA
5.1 Ancient Rome

One of the first examples of Virtual Archaeology involving an entire city was the model of Ancient Rome resulting from the close collaboration between the Italian Superintendency for Archaeology, the Soprintendenza Archeologica di Roma (Rome, Italy), and the University of California, Los Angeles (UCLA, US) (Fig. 4). Through the reconstruction of thousands of buildings, the model recreates the core of Rome, with its Colosseum and Roman Forum, at around 320 C.E. under Emperor Constantine: a very extensive caput mundi with over one million inhabitants. The creation of the model started over 25 years ago, when the technology for digital modelling was still in its beginnings; nevertheless, the whole process of creating this virtual model remains a milestone of Virtual Archaeology.
6 Museums and Their Digital Ecosystem

Today museums are tightly connected to their visitors, who in turn are growing in numbers, enticed by the use of a new digital form of expression. In the early digital days, museums established their own databases, consisting of high-definition images and metadata, for the purpose of creating digital archives that could act as effective conservation and monitoring tools as well as means to further study the collections. Thanks to their versatile nature, those databases have also greatly amplified the primary mission of every museum: making their collections fully accessible. The next logical step was offering viewers digitalized collections online, using social media channels, such as Instagram, to promote them. It was a breath of fresh air in a world of museums that was often seen as out of touch and covered by a thick layer of dust. In the meantime, digitalization techniques have become more and more sophisticated, making it possible to create Digital Twins of existing works of art.
Fig. 5 V&A museum collection online
6.1 Online Collection Databases

The Victoria & Albert Museum in London already considers digital tools an essential component of its mission. In fact, thanks to online viewing and the creation of digital twins, the museum has opened its collections to an increasingly wide audience. The V&A online collection database (Fig. 5) includes information on over 1.2 million objects, over half of which are accompanied by related images. The objective of this instrument is to offer a different experience from being physically present in the museum itself: digitalization is not meant to be a replacement, but rather a positive addition to the relation between the art and the viewer. The website uses a standard syntax for URLs that has been in use for over 10 years and extracts information directly from the managing and processing system. Thanks to this method, visitors were able to continue viewing the museum's pieces even during the COVID-19 lockdown. This way of viewing art, by now familiar to most, has not only managed to keep the connection with the public alive, but has also succeeded in acquiring another share of viewers, who approached these collections directly from their living rooms. Online collection databases can be considered the most traditional form of digital twin.
6.2 3D Scanning and Photography

In the past, as previously mentioned, artists used to create either freehand copies, which were less accurate, or casts, which were true copies of the work of art. Today, digitalization not only provides copies, but those copies have their own "intelligence". Creating a physical copy is only one of the important aspects of digitalization; by scanning a work of art, we can acquire a wealth of information that can be used for different purposes, from conservation analysis to the study of the creative passages involved in that piece.
6.3 CASE STUDY – The Digital Twins of Raphael's Cartoons, Victoria & Albert Museum (London, UK)

Among its collections, the V&A boasts very important evidence of the work of Raffaello Sanzio (Urbino 1483 – Rome 1520): seven drawings depicting scenes from the lives of Saint Peter and Saint Paul (Fig. 6). In 1513, Pope Leo X commissioned from Raphael ten life-size drawings (5 m wide by 3.5 m high each) with scenes from the lives of Saint Peter and Saint Paul, for a series of tapestries to be placed in the Sistine Chapel in the Vatican. The cartoons were transformed into tapestries at the workshop of the merchant-weaver Pieter van Aelst in Brussels. Seven of the ten cartoons survive today; they were transported to Great Britain at the beginning of the seventeenth century by the Prince of Wales, who later became Charles I. The cartoons remained in the Royal Collection and were then loaned to the South Kensington Museum, the current V&A Museum, by Queen Victoria in 1865, in memory of Prince Albert, and they have been on display ever since. In collaboration with the Factum Foundation, the V&A Museum created Digital Twins of Raphael's Cartoons, using a combination of 3D scanning and highly specialized photography.
Fig. 6 Raphael, The Conversion of the Proconsul also known as The Blinding of Elymas, Cartoon © Victoria and Albert Museum, London. (Courtesy Royal Collection Trust, Her Majesty Queen Elizabeth II, 2021)
The creation of these Digital Twins was structured in two phases: the recording of surface and color, followed by the development of multi-layer browsers. During the first phase, the total surface area of the cartoons, approximately 115 m², was digitalized using two complementary techniques. The first technique was centered on high-resolution panoramic photographs, necessary to record the colors and to enable infrared recording that revealed the layer beneath (Fig. 7). The second technique was a scanning process using the Lucida 3D Scanner, which was able to record an extremely articulated surface, made as it is of a collection of sheets of paper. This type of recording highlighted the steps taken to create the cartoons, from the charcoal drawings hidden under the paint, to the strokes of color, to the pouncing along the borders. Pouncing is the step that transforms a drawing, by means of a powdering technique, into graphics that can be used to create a textile piece. In order to record color accurately, the cartoons were photographed at a resolution of up to 900 dpi. The accuracy of the color data was ensured by evaluating the digital colors with X-Rite color control methods, coupled with physical assessments done with Pantone colors. These steps are essential to guarantee accurate colors and tones during the reproduction process. The creation of the Digital Twins has produced important results on different levels, from the opportunity to share part of the results with viewers, to the gathering of important data for the study and conservation of the work of art.
Fig. 7 Creating the infrared layer. RAW file from camera (left) and after the RAW development (right). (© Factum Foundation for the V&A and Royal Collection Trust)
In regard to communication with the public, starting in January 2021 the Museum website has been offering the opportunity not only to view the drawings in great detail, but also to explore their history, their creation, and the complex transformation process from cartoons into tapestries (Figs. 8 and 9). Viewers can observe the cartoons up close, browsing through the high-definition images, and, thanks to the infrared images, they can catch a glimpse of the charcoal drawings underneath; by viewing the 3D scans, they can even examine the surface of the paper. Virtual visitors can therefore explore the cartoons in all their layers, observing the subtle differences between the underdrawing, the layers of paint, and the structure of the surface, from the minuscule holes made to transform the cartoons into tapestry, to the single sheets of paper making up every cartoon, to any restoration work done over time. Data acquisition has allowed museum professionals to further explore different aspects of this work of art, from conservation to Raphael's technique. Having a digital twin helps in monitoring the conservation state of the piece, making it possible to constantly assess any alterations and changes in both the painting materials and the support. With regard to studying and analyzing the work, the information gathered from the infrared pictures, the various overlapping sheets of paper, and even the artist's second thoughts represent an incredibly precious resource and source of knowledge on this master's execution technique. The creation of this type of Digital Twin has opened the door to new ideas on the role of artificial intelligence in the conservation and fruition of cultural heritage.
Fig. 8 Raphael Cartoon, detail showing the pouncing technique © Victoria and Albert Museum, London. (Courtesy Royal Collection Trust, Her Majesty Queen Elizabeth II, 2021)
Fig. 9 V&A Website, Raphael Cartoon digital twin, detail
6.4 DIGITAL TWIN into PHYSICAL TWIN – 3D Printing
The following case study represents an example of a circular digital twin, because the digital copy gave life to a physical copy.
6.5 CASE STUDY – Michelangelo, David, Galleria dell'Accademia, Firenze, 3D Print
Michelangelo's David, a very famous sculpture created by another of the most prominent artists in history, was also the subject of a "circular" Digital Twin (Fig. 10). In July 1501, Michelangelo Buonarroti was commissioned by the Opera del Duomo in Florence to create a sculpture of David. The statue, which was requested to have grand proportions, was to be sculpted out of a large block of marble that was flawed with multiple imperfections and had previously been rejected by two other artists. Michelangelo decided to depict David in the moment right before the fight, when he spots Goliath. The statue became such a success that the people of Florence decided to place it in the famous Piazza della Signoria. For reasons of conservation, the piece was later moved to the Galleria dell'Accademia.
Fig. 10 Michelangelo, David, 1501–04, Marble, h 510 cm. (Galleria dell’Accademia, Firenze)
This sculpture is a symbol of Italian art, and it is still very popular today. This is why it was chosen to be on display in the Italian pavilion at the Dubai Expo in 2021. This was made possible by the creation of a Digital Twin, followed by a physical version printed in 3D. The example of David represents a circular digital twin, because the digital copy gave life to a physical copy, which was made very realistically by a group of talented and skilled restorers. The project was developed in three phases:
• Scanning of the statue to create a tridimensional digital twin
• 3D printing
• The work of restorers to mount the various parts and ensure the realistic result of the 3D print
A team of researchers from the Civil and Environmental Engineering Department of the University of Florence collaborated with a group of technicians from Hexagon, a Swedish company specializing in high-precision measurement technologies. The final phase involved a team of restorers from the Opificio delle Pietre Dure and Nicola Salvioli's lab in Florence.
The scanning phase of David took 2 people and 10 days of work, an incredibly fast time when compared to a few years ago. A similar project carried out on David in 1999 by Stanford University required the work of 22 people for an entire month to obtain a lower-resolution scan. To ensure the best scanning quality possible, Hexagon employed a laser scanner tracked by a Leica Absolute Tracker, together with an AICON StereoScan Neo structured-light scanner. The laser scanner was able to register up to 156,000 points per second on the surface of the statue with a precision of a few hundredths of a millimeter. The absolute reference in space was given by the Laser Tracker which, through the exchange of a laser beam with the scanner, could determine its position and orientation in space and consequently attribute tridimensional coordinates to each of the identified points. The second instrument, used to capture the smallest and most geometrically complex parts of the statue, was the AICON StereoScan Neo structured-light sensor. The system consisted of a projector and two high-resolution video cameras. The projector cast a sequence of geometric patterns, while in a few seconds the two video cameras acquired the projection of the same pattern at predefined angles. Upon completing the measurement, the result was a high-resolution tridimensional model of the surface, accurate in size and proportions (Fig. 11). The combination of these two technologies was a success: the laser tracker, normally used to measure high-precision aerospace components, allowed a very high level of accuracy over large areas, whereas the structured-light scanner could provide an even higher resolution when focused on smaller areas. This last instrument was used on the parts of the statue rich in detail, such as the face and the hands. The combination of these technologies achieved good results in terms of both resolution and scale.
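The role of the laser tracker, which provides the absolute reference frame into which each locally scanned point is placed, can be illustrated with a minimal Python sketch. The pose and point values below are invented for illustration; the actual registration software used by Hexagon is of course far more sophisticated.

```python
import numpy as np

def to_global(points_local, rotation, translation):
    """Transform scanner-local 3D points into the tracker's global frame."""
    return points_local @ rotation.T + translation

# Hypothetical scanner pose reported by the laser tracker: rotated 30 degrees
# about the vertical axis and standing 2 m away from the statue.
theta = np.radians(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([2.0, 0.0, 1.5])  # metres

# Three points measured in the scanner's own coordinate system (metres).
local_points = np.array([[0.010, 0.020, 0.000],
                         [0.012, 0.021, 0.001],
                         [0.015, 0.019, 0.002]])

# Each local point receives its tridimensional coordinates in the global frame.
print(to_global(local_points, R, t))
```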
Fig. 11 3D scan of Michelangelo’s David captured by Hexagon scanners
Fig. 12 Digital Twin of Michelangelo’s David, detail
During the second phase, the physical twin finally emerged, thanks to 3D printing technology provided by 3D Venice, who used a type of gel that, once printed, was polymerized by a UV lamp. The third phase involved a team of expert restorers from the Opificio delle Pietre Dure, one of the most important conservation and restoration institutes, and from the workshop of Nicola Salvioli in Florence. This phase, which lasted 2 months, focused on making the printed copy more realistic and closer to the original. The restorers coated the copy with resin and marble powder in order to give it the appearance of the original, bringing to the surface all its most characteristic features: cracks, fractures, abrasions, stains, and rough texture (Fig. 12). In this case too, the finished digital twin will be analyzed to acquire further knowledge. The extraordinarily detailed information will be essential for the conservation and study of this exceptional piece. The acquired information will reveal, for example, the state of conservation, the condition of the surface, the materials, and the structure, for the purpose of preventing damage caused by external agents and by age.
6.6 Digital into Immersive Experiences
Immersive exhibitions started appearing in the first decade of the 2000s to complement the works displayed in museums. This type of exhibition is popular due to its power to immerse visitors inside the artworks, delivering a unique experience. The public feels free to experience art without the rigid protocol necessary to preserve artworks, music is part of the experience, and children are engaged by a medium closer to their background and by the more accessible presentation of the works of art.
Fig. 13 Immersive Van Gogh exhibit
6.7 Case Study – Immersive Van Gogh
The "Immersive Van Gogh Exhibit" was designed by Massimiliano Siccardi, who had previously created van Gogh projects in Europe (Fig. 13). Siccardi's exhibition takes visitors through a recreation of van Gogh's perspective using large images and animations. The projections run from the top down onto the floor and across all surfaces, accompanied by stirring classical and contemporary music. The exhibition features 40 of Van Gogh's masterpieces, including Starry Night and Sunflowers, using anywhere from 70,000 to 200,000 square feet of animated projections, depending on the venue. Deeply relevant is the fact that the art now reaches millions of people who would not have had the opportunity to travel to see the original paintings.
6.8 Museum Operations
Museums are complex organizations which, due in part to their architectural features, represent cultural assets in themselves. Museum managers therefore end up having to manage both the works of art and the facilities housing them. The Natural History Museum in London, for example, has over 80 business systems supporting museum management, from environmental assessment to enterprise management. The staff working with all these different business systems have tried to find a way to incorporate the data produced by the different systems into a single digital simulation, the Digital Twin [12]. With its 15,000 active sensors inside a 100,000 m² area, the museum collects data related to humidity levels, vibrations, temperature, light, and much more. The Digital Twin shows a 3D model of the museum in an easily understandable format and is able to read static data from BIM models and dynamic data from the building management systems, the IoT sensors, and the other sources of data.
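As a rough illustration of how such a twin can combine static context (from the BIM model) with dynamic sensor streams, the following Python sketch checks incoming readings against conservation limits and raises alerts. Room names, metrics, and limit values are hypothetical and are not taken from the Natural History Museum's actual systems.

```python
from dataclasses import dataclass

# Hypothetical conservation limits; real galleries define their own ranges
# per collection and per room.
LIMITS = {
    "temperature_c":     (16.0, 25.0),
    "relative_humidity": (40.0, 60.0),
    "vibration_mm_s":    (0.0, 2.0),
}

@dataclass
class SensorReading:
    room: str      # static context, e.g. taken from the BIM model
    metric: str    # dynamic value streamed from an IoT sensor
    value: float

def check(reading: SensorReading):
    """Return an alert message if the reading falls outside its limits."""
    low, high = LIMITS[reading.metric]
    if not low <= reading.value <= high:
        return (f"ALERT {reading.room}: {reading.metric} = "
                f"{reading.value} outside [{low}, {high}]")
    return None

stream = [
    SensorReading("Main Hall", "relative_humidity", 63.2),
    SensorReading("Main Hall", "temperature_c", 21.4),
    SensorReading("Gallery 12", "vibration_mm_s", 0.4),
]
for reading in stream:
    alert = check(reading)
    if alert:
        print(alert)
```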
Using a Digital Twin to manage the facility itself allows significant optimization and savings on operating costs. Many institutions are taking steps towards using H-BIM (Historical Building Information Modeling) and, at the same time, computational techniques are being increasingly applied to architecture, thanks to user-friendly VPLs (Visual Programming Languages) such as Grasshopper, Dynamo, Node Red, Ardublock, NETLab Toolkit, ReactiveBlocks, GraspIO, and Wyliodrin [2]. Studies are being conducted to develop a "sentient building", a digital model able to perceive external impulses and develop strategies to support the management and conservation of the building itself. These experiments entail the integration of an H-BIM model with a Decision Support System based on Artificial Intelligence which, through Machine Learning techniques, can assist in managing the art collections stored inside historic buildings. The flexibility of these computational instruments enriches the H-BIM models with new concepts, actions, and knowledge layers, in addition to simplifying the management, classification, and recording of data and of the different connections within the models [1, 5, 13]. These technologies are often used together with a WSN (Wireless Sensor Network), thus providing constant real-time monitoring of all the parameters indicating the specific needs related to the layout of the pieces and of the facility housing them. This technique has been implemented in the Hall of the Five Hundred (Sala dei Cinquecento) at Palazzo Vecchio in Florence [15] and in the Egyptian Museum in Turin [10, 11]. One of the objectives is to develop a system that can assess every single space using datasets derived from the reviews of visitors to other buildings, in order to identify recurrent patterns and therefore ensure overall wellbeing [7, 12]. A study conducted in 2020 by La Russa and Santagati [8], titled HS-BIM (Historical Sentient – Building Information System), analyzed a possible transformation of the concept of Digital Twin from a parallel and external digital model into an artificial evolution of the real system, enhanced by a "cognitive structure". According to this vision, through the extensive use of Artificial Intelligence, future buildings will be able to feel "comfort and discomfort" and will be able to learn from the experience of their own life cycle, but also from the experience of other sentient buildings, thanks to the knowledge transfer approaches already applied in the AI sector.
6.9 CASE STUDY – The Ballroom and St. Francis of Assisi Church in the Pampulha Modern Ensemble, Belo Horizonte, Capital of Minas Gerais State, Brazil
In 1940, architect Oscar Niemeyer and landscape designer Burle Marx designed the Pampulha Modern Ensemble (PME), the center of a garden city project created at Belo Horizonte, the capital of Minas Gerais State in Brazil. The PME is composed of
four buildings: the Casino (now the Pampulha Art Museum), the St. Francis of Assisi Church, the Ballroom (now the Centre of Reference in Urbanism, Architecture, and Design), and the Yacht Golf Club (Fig. 14) [3]. The creation of the digital twins employed terrestrial laser scanning (TLS) and an unmanned aerial vehicle (UAV). The process was based on three fundamental steps:
• collection of spatial and documentary data
• data processing and dense surface model (DSM) creation
• HBIM modelling
The framework includes creating the HBIM model in Autodesk's Revit authoring environment. The heritage elements were organized in the HBIM model using the Dynamo visual programming tool.
AR VISUALIZATION
Augmented reality (AR) overlays digital content on real-world objects that a computer sees through a regular camera, enriching our view of our surroundings. The concept is a virtual environment for the consolidation and dissemination of heritage information, realized by developing an AR app from the digital twin of two buildings, the Ballroom and the St. Francis of Assisi Church, part of the Pampulha Modern Ensemble. The goal is to create an environment in which any user can explore these buildings in the digital world. The intention is not to replace the experience of seeing these buildings on site, but to bring out some aspects of them, including the accurate 3D models, the surroundings, and some supplementary information on modern architecture. This work aimed to use AR to promote the diffusion of historical content in a different and innovative way, through immersive environments, fully synthetic or mixed, mediated by Augmented and Virtual Reality, offering an interpretation and understanding of architectural objects of high historical and cultural value.
Fig. 14 Pampulha Modern Ensemble, Belo Horizonte capital of Minas Gerais State, Brazil
6.10 Virtual Connectivity Generates Public: Some Numbers
Museums have invested considerable energy in trying to stay connected with their audiences, offering 360-degree tours, digitization, webinars, and social media challenges. Museums are finally opening up to the idea of the potential earnings resulting from tours of their online content. The Louis Vuitton Foundation in Paris has offered "micro-tours" of the Cindy Sherman retrospective to groups of 9 people for €4 each. The Metropolitan Museum of Art in New York organizes 1-hour virtual tours for groups of up to 40 people at a price of $300 ($200 for students). The Kunsthistorisches Museum in Vienna organizes personalized tours for €150–€200. The online Artemisia Gentileschi exhibit at the National Gallery in London, together with additional digital content displayed on the museum website, generated a 1125% increase in traffic on its pages between March 2020 and January 2021. As of today, the website presents over 200 virtual events that are very successful in terms of participation. A path is finally opening for a new type of digital content offer, which will create new scenarios of participation at different levels and new sources of economic revenue.
7 Using Digital to Make Art
In his book Concerning the Spiritual in Art, Russian artist Vasilij Kandinskij wrote, "Every work of art is a child of its time, while often it is the parent of our emotions. Thus, every cultural period creates art of its own…". Today's language is digital, and this is reflected in the language of art as well. The introduction of digital media into the creation of works of art dates back to the 1960s. For several years, we have had artists who are defined, and who define themselves, as "digital artists". They use technology as a means to create their pieces, thus bringing to the market digital works of art that are not digital twins but original digital creations.
7.1 Digital Artists
There are digital art pioneers who have spent their entire lives between the worlds of technology and art, such as Tamiko Thiel, Claudia Hart, Auriea Harvey, Zhou Xiaohu, Miao Xiaochun, Tim Deussen, and Rebecca Allen. Rebecca Allen created the Life Without Matter installation (Fig. 15), which imagines a future life in a virtual reality where material things will be almost completely gone and our identity will have to be completely redefined. Spectators confront themselves with their own digital reflection and, because a
Fig. 15 Rebecca Allen, Life Without Matter, virtual reality installation
virtual world is immaterial, a virtual mirror cannot reflect their true physical appearance, but rather the female, the male, and the animal in each of us. While spectators interact with their own virtual reflection, their shadow is projected onto a special screen placed within the exhibition space, where the public can observe their actions.
7.2 NFT – Non-fungible Token
In recent times, a new type of work of art, the NFT (non-fungible token), has emerged, becoming a digital twin in all respects. Essentially, this special type of token, an ensemble of digital data certified by blockchain, is establishing a new market in the art world. This type of work of art is successful among young collectors, often coming from the financial world, who are more experienced in understanding the nature of digital files, already own rich digital portfolios, and use cryptocurrencies as a payment method. But what do we mean by an NFT work of art? For Christie's, one of the most prestigious auction houses in the world, an NFT is a unique encrypted digital token signed by the artist, which is individually identified on the blockchain in order to verify its legitimate owner and its authenticity. Blockchain seems able to solve some of the problems of multimedia art, for example the difficulty of selling multimedia pieces due to their technical reproducibility and to copyright issues, since the various transactions (the so-called chain "blocks") are transparent. The file with the image or video can be saved on one's computer and can be enjoyed at any time; the token is an authenticity certificate which proves ownership of the work of art and therefore the ability to resell it. Ether is the cryptocurrency most used in DeFi transactions.
DeFi (decentralized finance) is a system that is not based on central intermediaries, such as brokers, exchanges, or banks, but instead uses smart contracts on the blockchain. These platforms allow users to lend or borrow funds and exchange cryptocurrencies, but also to speculate on price fluctuations using derivatives. NASDAQ decided to create a specific index, called DEFX, to keep track of the major DeFi products.
The Ethereum blockchain offers two NFT standards. The first standard is called ERC-721 (Ethereum Request for Comments 721) and is defined by uniqueness: each token issued under a contract is unique and cannot be interchanged with any other. This type of token is perfect for use with works of art, collectibles, and access keys. The second standard is called ERC-1155 and differs from the first in its ability to manage multiple token types with a single contract. This standard is suitable for certifying a series of digital assets, such as a collection of game cards or any other type of asset. Digital artists consider the NFT a facilitating element in the sale of digital works of art and, in time, the acronym NFT might even come to indicate a technical feature and be inserted in the list of supports (Fig. 16).
The first NFT for a work of art was created back in 2014 by Kevin McCoy, a digital artist, and Anil Dash, a consultant for auction houses, during the annual event Seven on Seven, which aims to connect artists and technologists. McCoy understood the potential of the blockchain to resolve a key issue for digital artists: guaranteeing authorship and protecting the artworks. Thus the first art NFT was born: a video by artist Jennifer McCoy, Kevin's wife. In order to show how to create and sell an NFT, McCoy demonstrated the creation steps and sold it to Dash for 4 USD. The NFT took a while to have an impact on the art market. In October 2020, Christie's auction house sold the piece Block 21 (2020) by Robert Alice for over 130,000 dollars, stating that it was a "new and radical artistic medium": in addition to being a physical piece, it was also registered as an NFT (Fig. 17).
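The uniqueness at the heart of ERC-721-style tokens can be conveyed with a deliberately simplified, blockchain-free Python sketch: each token ID maps to exactly one owner and one metadata reference, and ownership only changes through a transfer by the current owner. This is a conceptual illustration only, not the Solidity interface defined by the Ethereum standards, and the addresses and URI below are made up.

```python
class NFTLedger:
    """A toy, in-memory stand-in for an ERC-721-style registry."""

    def __init__(self):
        self._owners = {}    # token_id -> owner address
        self._metadata = {}  # token_id -> URI of the artwork file

    def mint(self, token_id, owner, metadata_uri):
        if token_id in self._owners:
            raise ValueError("token already exists: non-fungible IDs are unique")
        self._owners[token_id] = owner
        self._metadata[token_id] = metadata_uri

    def transfer(self, token_id, seller, buyer):
        if self._owners.get(token_id) != seller:
            raise PermissionError("only the current owner can transfer the token")
        self._owners[token_id] = buyer

    def owner_of(self, token_id):
        return self._owners[token_id]

ledger = NFTLedger()
ledger.mint(21, "0xArtist", "ipfs://hypothetical-hash/artwork.json")
ledger.transfer(21, "0xArtist", "0xCollector")
print(ledger.owner_of(21))  # 0xCollector
```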
Fig. 16 Creating an NFT, O. Menaguale
Fig. 17 Robert Alice, Block 21, 2020. (Image courtesy of Christie’s)
The first 100% digital work of art based on an NFT, auctioned off by Christie's in March 2021, is the collage Everydays – The First 5000 Days by Beeple (Fig. 18), sold for 69.3 million dollars. The NFT for this piece was generated by MakersPlace, a primary market for digital creators. Everydays is an unalterable and irreplaceable piece and is therefore non-fungible. The value of the NFT goes even further than guaranteeing authorship and protecting the artworks, but it has some weaknesses too (Figs. 19 and 20). The NFT landscape is at present heterogeneous, resulting in the following scenarios:
• NFTs registered for unique physical artworks, as for Robert Alice, Block 21, 2020 (mentioned above, Fig. 17)
• NFTs for digital artworks, as for Beeple, Everydays – The First 5000 Days (mentioned above, Fig. 18)
• NFTs as substitutes for physical artworks, for example Morons by Banksy and The Currency by Damien Hirst (Fig. 21)
• NFTs of physical artworks sold as replicas (Fig. 22)
The art world continues to pour energy into NFTs: Pace Gallery launched an NFT platform exclusively for its artists called Pace Verso. Virtual environments in which to view and trade digital artworks are more and more appealing. Auction houses and galleries are partnering to enter virtual reality spaces (the metaverse) to exhibit and sell artworks this year. Sotheby's auction house created a permanent space in Decentraland, a decentralized virtual reality platform powered by the Ethereum blockchain.
Fig. 18 Beeple, Everydays-The First 5000 Days. (Image courtesy of Christie’s)
Fig. 19 Value of the NFT, O. Menaguale
8 Technologies in Use for Art Conservation and Art Restoration: Some Examples
Art conservation and art restoration are key to the maintenance, preservation, and necessary repair of artworks. Conservation concentrates on safeguarding works against future damage and deterioration, while restoration repairs or renovates works that have already sustained decay, in an attempt to return them to their original, undamaged appearance. Art restoration continues to evolve, and the techniques used to preserve works and address condition issues are constantly being refined.
Fig. 20 Weaknesses of the NFT, O. Menaguale
Fig. 21 NFT as substitutes of physical artwork
The last century was particularly important in expanding the field of art conservation: museums established dedicated departments and analytical laboratories, and technical art journals were created. Edward W. Forbes, art historian and director of the Fogg Art Museum in Cambridge from 1909 to 1944, encouraged technical investigations and X-radiography, an imaging technique that uses X-rays, gamma rays, or similar radiation to view the internal composition of an object or work of art. Rutherford John Gettens and George L. Stout, both working at the Fogg Art Museum, authored Painting Materials: A Short Encyclopedia, an essential resource for artists and art professionals concerned with preserving art. Throughout the mid-twentieth century, major professional societies and training programs also emerged. Today, many different technologies are imported into the Cultural Heritage sector for conservation and restoration purposes; among them, the following are
Fig. 22 NFT as replicas of physical artworks
highly used: X-Ray Fluorescence Spectroscopy (XRF), Infrared Reflectography (IRR), 3D Imaging and Reflectance Transformation Imaging (RTI), and Reflectance Imaging Spectroscopy (RIS).
Infrared Reflectography (IRR)
The technique of infrared reflectography (IRR) helps in understanding what lies under the surface. The camera directs beams of infrared light onto the work of art. The infrared light can penetrate pigment layers made of animal or vegetable compounds, but is eventually either reflected, typically by the surface the work was painted on, or absorbed, with carbon-rich material being particularly absorbent. This means the paint itself is not picked up to any great extent, but sketches made with charcoal pencils, composed as they are of carbon, do show up by contrast with the substrate. This approach therefore lets the conservation team build a picture of what was going on in the original design.
Fourier Transform Infrared Spectrometer (FTIR)
The Fourier transform infrared (FTIR) spectrometer can be used to identify the chemical compounds present in a sample by measuring the energy absorbed when chemical bonds are stretched, twisted, or bent. After a few seconds, this data is fed into software that can tell which chemical bonds are present, and therefore what a paint is likely to be made from. In addition to telling us which compounds were used in the pigment, this technique can also help work out whether a painting is a forgery. For example, cadmium yellow is a type of paint first introduced in the 1800s. If a painting claimed to date from 1650 turns out to contain cadmium yellow, investigators immediately know they are looking at a fake.
X-Ray Fluorescence Spectrometer (XRF)
For investigating cultural and historical works made of metal, glass, ceramic, and stone, it is essential to understand what these pieces are made of at an elemental level. For this purpose, the X-ray fluorescence spectrometer is one of the
most useful tools. It uses a self-contained X-ray tube, which exposes the material to a beam of radiation that bumps some of its constituent atoms up to higher energy levels. These, in turn, emit X-rays with characteristic energies as the electrons return to a lower energy level. The unique patterns emitted by different materials then tell the researcher what the material is made of at an elemental level.
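Conceptually, the software behind an XRF instrument (and, in an analogous way, behind FTIR peak matching) compares measured peak energies with tabulated characteristic lines. The short Python sketch below illustrates the idea; the reference energies are rounded textbook values and the measured peaks are invented, so it is not a substitute for a calibrated instrument library.

```python
# Approximate characteristic emission energies in keV (rounded values).
REFERENCE_LINES = {
    "Ca": 3.69,   # K-alpha
    "Fe": 6.40,   # K-alpha
    "Cu": 8.05,   # K-alpha
    "Zn": 8.64,   # K-alpha
    "Pb": 10.55,  # L-alpha
}

def identify_elements(peak_energies_kev, tolerance=0.10):
    """Match measured fluorescence peaks to known emission lines."""
    matches = []
    for peak in peak_energies_kev:
        for element, line in REFERENCE_LINES.items():
            if abs(peak - line) <= tolerance:
                matches.append((element, peak))
    return matches

# Hypothetical peaks measured on a bronze artefact with a lead-rich patina.
measured_peaks = [6.41, 8.02, 10.52]
for element, energy in identify_elements(measured_peaks):
    print(f"{element} detected (peak at {energy} keV)")
```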
9 Educating to Digital Cultural Heritage
Today, the educational potential offered by Digital Cultural Heritage has finally been widely acknowledged. Introducing it into school programs in fact provides high added value and should be done starting in the lower grades, combined with physically experiencing the works of art. Acknowledging the importance and value of our cultural heritage represents the foundation on which to build our cultural identity. The digital version of cultural heritage creates an effective communication bridge with users.
9.1 The Future Digital Cultural Heritage Professionals
The art world is still at the beginning of its digital journey. However, projects in this sector are growing exponentially. Numerous university courses and master's programs on the subject have been established in order to satisfy a growing demand for new professional profiles that can help unite these two apparently distant worlds. Continuing to develop courses that can prepare new types of professionals with knowledge in both fields, still too distant from each other today, is absolutely necessary. Those studying archeology should also study software in order to create 3D models of archeological sites. Those studying art history should also learn how to create database platforms and how to 3D scan. Educational programs that embrace digital cultural heritage will promote an increase in both the conservation and the valorization of the actual cultural assets.
9.2 Active Participation of Viewers
By providing adequate ICT literacy in school to actively experience one's cultural heritage, we will trigger a new phenomenon that will in turn further promote Digital Cultural Heritage: the active and dynamic participation of visitors. Visitors are already expressing the desire to be prime actors in this revolution through blogs, forums, videos, and dedicated platforms. They are participating in the creation of a
digital heritage, by offering the point of view of users. Current studies on the subject are proposing the active involvement of visitors in protecting and conserving our cultural heritage.
10 Digital Cultural Heritage and Tourism in Europe
Many European countries rely heavily on the revenues coming from tourism. Furthermore, culture, and specifically cultural heritage, and tourism are highly interdependent. Many tourists choose their destinations based on the cultural heritage sites they offer. It is therefore necessary to highlight how increasingly important cultural heritage digitalization is becoming in relation to the tourism sector [14]. Digital technologies, including immersive, virtual, and augmented reality and 3D technology, represent an increasingly important component in successfully attracting tourists to a specific destination. Here are some data on pre-pandemic tourism in Europe:
• 40% of tourists coming from EU countries choose a destination based on the culture and cultural heritage it offers.
• 62% of Europeans take a minimum of one vacation trip a year, and the great majority remain in Europe.
• €190 billion are spent during a normal summer season.
• 10% of the European GDP comes from tourism (Figs. 23, 24, and 25).
10.1 CASE STUDY – The Europeana Pro Platform
The tourism sector was hit extremely hard by the COVID-19 pandemic. The European Commission is intensifying its support to the tourism sector by promoting sustainable, local tourism through Europeana. The following statement expresses Europeana's mission well: "We want to transform the world through culture! We want to build on top of the rich European heritage and make it more accessible to people for work purposes, as well as for educational and recreational purposes." This digital platform, developed in 2008, currently provides approximately 58 million digitized documents coming from over 3600 heritage institutions and organizations, from collectibles to music, from audio files to images of buildings, from cultural sites to 3D images. Europeana has launched Europeana Pro and Discovering Europe (Fig. 26). Europeana Pro is a specific hub for tourism that helps cultural heritage professionals discover initiatives supporting tourism across the EU, while Discovering Europe is a special section leading users on virtual tours of Europe.
Fig. 23 World Travel & Tourism Council, 2019
Fig. 24 United Nations World Tourism Organization Report, 2019
Fig. 25 Cultural Tourism key factors
Fig. 26 Europeana platform website
11 Creating Standards – The Road to Success
We have been talking about standards for years because we know they are at the center of the digital revolution. For example, in 2002 the project MINERVA, requested by the European Council and coordinated by the Italian Ministry of Cultural Heritage and Activities, was launched with the objective of comparing and harmonizing cultural heritage policies and programs through a network involving the government agencies and institutions protecting and promoting the cultural heritage of the member countries of the European Union. However, as of today we still do not have true global standards, in part due to the constant evolution of technology. To make all the new digital content more effective, there should be an ecosystem where reference rules and standards are the same for everyone. The real challenge is the activation of an interoperability framework based on rules, standards, APIs, and the circulation of open data according to shared models. Such a framework would allow the different systems and services in the ecosystem to communicate without any problems or need to "translate" formats, and would also promote the development of innovative applications through the aggregation of different services and subjects, without having to worry about communication formats or protocols. At a semantic level, creating common vocabularies and shared ontologies would allow us to speak the same language and fully understand every piece of information that is exchanged. At a technical level, it would ensure that the different ICT systems could use technological instruments that allow them to communicate effectively with one another and to receive and send information in a manner that is coherent and correct for everyone; it would also be important to adopt easily identifiable interfaces whose communication systems can be shared by everyone. Often today, there is still no
interoperability and therefore no opportunity to open one's services to other systems or to offer one's data in an intelligible way, collecting in turn the information that might arrive from other systems.
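As a minimal sketch of what such semantic interoperability can look like in practice, the Python example below maps records from two hypothetical institutions, each using its own field names, onto one shared vocabulary so that an aggregator can consume both without ad hoc translation. All field names and records are invented for illustration.

```python
# A shared vocabulary that every partner agrees to expose.
COMMON_FIELDS = ("title", "creator", "date", "rights", "media_url")

# Each institution keeps its own field names; the mapping is the "translation"
# layer agreed upon once, instead of ad hoc conversions for every exchange.
FIELD_MAPPINGS = {
    "museum_a": {"titolo": "title", "autore": "creator", "anno": "date",
                 "licenza": "rights", "url": "media_url"},
    "museum_b": {"name": "title", "artist": "creator", "created": "date",
                 "license": "rights", "image": "media_url"},
}

def to_common(source, record):
    """Translate an institution-specific record into the shared vocabulary."""
    mapping = FIELD_MAPPINGS[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

record_a = {"titolo": "Ritratto", "autore": "Anonimo", "anno": "1650",
            "licenza": "CC BY", "url": "https://example.org/ritratto.jpg"}
record_b = {"name": "Landscape", "artist": "Unknown", "created": "1720",
            "license": "CC0", "image": "https://example.org/landscape.jpg"}

for record in (to_common("museum_a", record_a), to_common("museum_b", record_b)):
    assert set(record) <= set(COMMON_FIELDS)
    print(record)
```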
References
1. Argiolas, C., Prenza, R., & Quaquero, E. (2015). BIM 3.0 Dal disegno alla simulazione: Nuovo paradigma per il progetto e la produzione edilizia. Gangemi editore.
2. Cogima, C. K., Paiva, P. V. V., Dezen-Kempter, E., Carvalho, M. A. G., & Soibelman, L. (2019). The role of knowledge-based information on BIM for built heritage. In I. Mutis & T. Hartmann (Eds.), Advances in informatics and computing in civil and construction engineering. Springer.
3. Dezen-Kempter, E., Lopes Mezencio, D., de Matos Miranda, E., Pico de Sá, D., & Dias, U. (2020). Towards a digital twin for heritage interpretation, from HBIM to AR visualization, School of Technology, UNICAMP. In Proceedings of the 25th International Conference of the Association for Computer-Aided Architectural Design Research in Asia (CAADRIA) 2020, Volume 2, pp. 183–191.
4. El Saddik, A. (2018). Digital twins: The convergence of multimedia technologies. IEEE Multimedia, 25(2), 87–92. https://doi.org/10.1109/MMUL.2018.023121167
5. Giovannini, E. C. (2017). VRIM workflow: Semantic H-BIM objects using parametric geometries. In T. Empler (Ed.), 3D Modeling & BIM. Progettazione, design, proposte per la ricostruzione.
6. Grieves, M., & Vickers, J. (2017). Digital twin: Mitigating unpredictable, undesirable emergent behavior in complex systems. In F. J. Kahlen, S. Flumerfelt, & A. Alves (Eds.), Transdisciplinary perspectives on complex systems (pp. 85–113). Springer.
7. Kim, J., Zhou, Y., Schiavon, S., Raftery, P., & Brager, G. (2018). Personal comfort models: Predicting individuals' thermal preference using occupant heating and cooling behavior and machine learning. Building and Environment, 129, 96–106. https://doi.org/10.1016/j.buildenv.2017.12.011
8. La Russa, F. M., & Santagati, C. (2020). Historical sentient-building information model: A Digital Twin for the management of museum collections in historical architectures. In The international archives of the photogrammetry, remote sensing and spatial information sciences, Volume XLIII-B4-2020, XXIV ISPRS Congress.
9. Lo Turco, M., & Calvano, M. (2019). Digital Museums, Digitized Museums. The case of the Egyptian Museum in Turin. In Proceedings of the 1st international and interdisciplinary conference on digital environments for education, arts and heritage. EARTH 2018.
10. Lo, T. M., Piumatti, P., Calvano, M., Giovannini, E. C., Mafrici, N., Tomalini, A., & Fanini, B. (2020). Interactive digital environments for cultural heritage and museums. Building a digital ecosystem to display hidden collections. Disegnare Con, 12, 1–11.
11. Pramartha, C., & Davis, J. G. (2016). Digital preservation of cultural heritage: Balinese Kulkul artefact and practices. In M. Ioannides et al. (Eds.), Digital heritage. Progress in cultural heritage: Documentation, preservation, and protection. EuroMed 2016 (Lecture notes in computer science) (Vol. 10058). Springer.
12. Richardson, J. (2020). What Digital Twin technology means for museums. https://www.museumnext.com/article/what-digital-twin-technology-means-for-museums/
13. Tono, A., Tono, H., & Zani, A. (2019). Encoded memory: Artificial intelligence and deep learning in architecture. In C. Bolognesi & C. Santagati (Eds.), Impact of industry 4.0 on architecture and cultural heritage (pp. 283–305). IGI Global. https://doi.org/10.4018/978-1-7998-1234-0.ch012
14. Varbova, V., & Zhechkov, R. (2018). Digital solutions in the field of cultural heritage. A Policy Brief from the Interreg Europe Policy Learning Platform on Environment and resource efficiency. European Regional Development Fund.
15. Viani, F., Robol, F., Giarola, E., Polo, A., Toscano, A., et al. (2014, November). Wireless monitoring of heterogeneous parameters in complex museum scenario. In IEEE Antenna Conference on Antenna Measurements and Applications (IEEE CAMA 2014), Antibes Juan-les-Pins, France.
Olivia Menaguale is an art historian and archaeologist whose lifelong passion, in her studies and career over the last 30 years, has been to improve communication with and access to Cultural Heritage and Tourism in Europe and the US, working alongside experts in technology on everything from digital photo-archives to the financial art world, from audio-video guides to 3D reconstructions. She has worked alongside national and local governments, superintendencies, art experts, archaeologists, collectors, and auction houses, and on projects ranging from museums to historical and archaeological sites, from theme parks to government committees, from art exhibitions to world expos. As Chair of IEEE IoT for Tourism and Cultural Heritage, she is pioneering a multidisciplinary approach that creates innovative synergies between technology, tourism, and cultural heritage in order to create and promote a new global standard and foster business and economic growth. Menaguale received a Doctorate in Modern Literature, with a specialisation in Art History, at the Università Cattolica del Sacro Cuore in Milan (Italy), with first class honours, 110/110 Cum Laude. Merging her knowledge and skills in art history and archaeology with technology, she has developed projects to research and provide access to art, both onsite and virtually offsite. She started in the early 1990s with the development and management of a financial photographic database on works of art passing at auction, called "I-on-Art". It was commercialised and used by art collectors, auction houses, and museums internationally. This led to two pioneering digitisation projects, in London and New York, for the world's two largest art history photo-archives, holding a total of more than 3.5 million annotated photos, at the Witt Library at the Courtauld Institute (London, UK) and at the Frick Museum & Art Reference Library (New York, NY, USA). Since then, Olivia Menaguale has led and been involved in other digital projects, including the 3D reconstruction of ancient Rome, in collaboration with the University of Virginia (Charlottesville, VA, USA), UCLA (Los Angeles, CA, USA), and the Italian Ministry of Culture. This remains the largest 3D model of any historical site. In 2008, Google Inc. (Mountain View, CA, USA) partnered on the project, and together they launched the first historical city, "Ancient Rome", on Google Earth, which could be flown over in 24 languages and was explored by 78 million people worldwide in the first week of going public. Other projects she has led include the creation of a mini theme park on the Colosseum called "Rewind Rome" for tourists visiting the City of Rome, and a 3D CAVE inside the historical
site of Pompeii, offering its 2.5 million visitors a 360° fully immersive reconstruction of a bustling street in 64 A.D., before the eruption. Olivia also led the team that created and manages the world's first audio-video guide for a historical site, at the Colosseum in 2006. This service, providing over 100 min of video, photographic explanatory materials, and narration, is still offered to the four million visitors to the site today and is available in 9 languages. Due to her unique knowledge and experience in bringing together the very different worlds of technology, art history, and archaeology, she has represented her home country, Italy, in state missions, such as to the Middle East with the former Italian President, On. Napolitano, and in curating an Old Master exhibition on Italian art at the 2015 World Expo in Milan (Italy). She has also taught a series of lectures on art and technology at the Università di Roma La Sapienza in Rome (Italy), called "I mestieri dell'arte" ("The jobs of the art world"), which was then published by Electa (Mondadori).
Digital Twin and Education in Manufacturing
Giacomo Barbieri, David Sanchez-Londoño, David Andres Gutierrez, Rafael Vigon, Elisa Negri, and Luca Fumagalli
Abstract Learning Factories (LFs) enable learning in a factory environment and – due to the possibility of experiential learning – are considered in manufacturing the most promising approach to acquire the skills necessary to succeed in the increasingly complex and technologically driven workplace, political, and social arenas of the twenty-first century. Due to the modelling capabilities at the basis of this technology, the Digital Twin (DT) can support the implementation of LFs. In this chapter, the role of DT in manufacturing education is explored through two illustrative examples in which DT technology is used to build digital LFs adopted for learning purposes. The first example shows a virtual flow shop that allows students to learn about: (i) scheduling; (ii) condition-based maintenance; (iii) the Internet of Things. In the second example, Virtual Commissioning (VC) is used to virtually verify PLC (Programmable Logic Controller) code before its deployment, allowing students to learn both PLC programming and code verification techniques. The implemented teaching activities targeted students from both universities and vocational schools. Furthermore, they dealt with different phases of the lifecycle of manufacturing processes. Throughout this chapter, it will be shown that the application of DT technology to LFs enables the building of a flexible teaching environment that can be customized based on the type of students and the competences that must be taught.
G. Barbieri (*) · D. Sanchez-Londoño Department of Mechanical Engineering, Universidad de los Andes, Bogotá, Colombia e-mail: [email protected]; [email protected] D. A. Gutierrez Xcelgo A/S, Ry, Denmark e-mail: [email protected] R. Vigon · E. Negri · L. Fumagalli Department of Management, Economics and Industrial Engineering, Politecnico di Milano, Milan, Italy e-mail: [email protected]; [email protected]; [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_35
Keywords Augmented reality · Condition based maintenance · Education · Learning factories · Manufacturing · Model visualization · Virtual commissioning · Virtual reality · Vocational school
1 Introduction
Digital Twin (DT) technology has been utilized in different application domains, spanning from manufacturing to aviation, healthcare, farming, and construction, amongst others [1–3]. In education, its application has been demonstrated to increase student motivation and improve learning [4, 5], especially within online educational programs [6, 7]. In this chapter, the role of DT within education in manufacturing-related areas is explored through illustrative examples tested with students from universities and vocational schools. Concerning education in manufacturing, the Learning Factory (LF) is considered the most promising approach to acquire the skills necessary to succeed in the increasingly complex and technologically driven workplace, political, and social arenas of the twenty-first century, due to its possibility of experiential learning [8–11]. After some precursor initiatives, such as the grant to develop a "learning factory" from the National Science Foundation (NSF) [12], the "Lernfabrik" (German for "learning factory") for a qualification program related to Computer Integrated Manufacturing (CIM) [13], and the teaching factory at the California Polytechnic [14] amongst others, the Initiative on European Learning Factories was established together with the "1st Conference on Learning Factories" in Darmstadt in 2011. Additionally, in 2014 a Collaborative Working Group on "Learning factories for future-oriented research and education in manufacturing" (in short: CWG on learning factories) was started within CIRP to:
• Organize learning factory related research globally;
• Form a joint understanding of terms in the field;
• Gather knowledge on the global state-of-the-art of learning factories;
• Strengthen the link between industry and academia in this topic;
• Provide a comprehensive overview of the basics, the state-of-the-art as well as the future challenges and research questions in the field.
In general terms, the primary purpose of learning factories is "Learning" in a "Factory" environment [15]. However, learning factories differ in terms of target groups, teaching methods, learning and research content, implemented production process and manufactured product, and utilized manufacturing techniques and software [16]. Furthermore, they can be operated by universities or in cooperation with universities, but they can also be utilized within industrial companies, consulting firms, or vocational schools [17]. A comprehensive and generally accepted definition was agreed upon within the CIRP CWG [18] and published in the CIRP Encyclopedia [19]. There, a learning factory in the narrow sense is defined as a learning environment specified by:
• Processes that are authentic, include multiple stations, and comprise technical as well as organizational aspects;
• A setting that is changeable and resembles a real value chain;
• A physical product being manufactured;
• A didactical concept that comprises formal, informal, and nonformal learning, enabled by the trainees' own actions in an onsite learning approach.
It can be noticed that this definition implies a manufacturing process able to manufacture a physical product and an onsite learning approach. However, in a broader sense, a learning factory can also be considered as a learning environment that meets the definition above but with either [20]:
• A setting that resembles a virtual instead of a physical value chain;
• A service product instead of a physical product;
• A didactical concept based on remote learning instead of onsite learning.
Due to their abilities to visualize processes and layouts, to present a holistic view of production systems, and to provide numerous data and analyses in a short amount of time (which cannot be easily obtained in real factory environments), virtual environments are useful for implementing LFs with a setting that resembles a virtual instead of a physical value chain. According to [21], a Digital Learning Factory (DLF) is an integrated IT environment where all real LF resources, processes, and products are tracked on a digital model, whereas a Virtual Learning Factory (VLF) provides visual software tools through Virtual Reality (VR) or Augmented Reality (AR) technology to improve digital LFs. DLFs and VLFs can be defined as LFs in the broader sense, in which a virtual production process is utilized with a didactical concept based on remote learning; see Fig. 1. In [22], DLFs and VLFs have been compared with physical LFs, resulting in: (i) lower investment and operational costs, paramount in improving the accessibility of such resources in developing countries; (ii) greater scalability in terms of number of participants, who can access with greater ease, especially considering online educational programs; (iii) greater flexibility, both in the sense of applications/projects and of the possibilities of simulation and testing inside a specific project. For all these benefits, DLFs are considered within this chapter, while VLFs are left as future work.
An increasing number of papers can be found in the literature concerning the application of DT in LFs. Additionally, the integration of DT within existing LFs is described through the illustration of the technologies applied for its achievement [23–25] or planned as future work [26, 27]. Next, the most relevant works concerning the use of DT in LFs are illustrated, focusing on the added value generated by its utilization in education. According to [28], approaches concerning fully automated data acquisition systems are not widely spread among SMEs, and the advantages of DT technology are not sufficiently known due to the lack of competence concerning Industry 4.0. Therefore, the authors retrofitted an existing LF with DT technology to demonstrate to SMEs the potentials and advantages of real-time data acquisition and the subsequent simulation-based data processing.
Fig. 1 DLFs and VLFs and their relationship with LFs in the narrow and in the broader sense. (The image has been adopted from Ref. [20])
Grube et al. [29] utilize an LF to show SMEs how the DT enables factory design and re-design in a collaborative environment. Their approach consists in linking a discrete event simulation with physical objects placed on a DT module. Users can then manipulate the physical objects as counterparts to the machines and equipment in the virtual space, visualize the designed factory, and carry out further analysis. Martinez et al. [30] build a demonstrator to support the supervision activity of the operator in the context of flexible manufacturing with robotics. The supervision is achieved through a Human-Computer-Machine Interaction enabled by DT technology. The demonstrator is utilized to illustrate to enterprises how the DT can support the supervision of flexible manufacturing processes. Umeda et al. [31] point out that digital transformation might not fit the style of the Japanese manufacturing industry, which deeply depends on kaizen (continuous improvement) by highly skilled engineers and technicians. For this reason, they utilize the integration of DT and LF to support the teaching and the spread of kaizen activities within digitalized industrial processes.
The above state-of-the-art analysis outlines that very few works can be found concerning the application of DT in education through LFs. Additionally, these works are mainly targeted at enterprises and not at students from universities and vocational schools. Finally, physical LFs are generally utilized, and DTs with different levels of data integration are not linked to educational learning outcomes. For this reason, two illustrative examples are shown in this chapter through the adoption
of different DTs applied to DLFs, targeted at students from universities and vocational schools. Given the above, the chapter is structured as follows: a theoretical framework concerning DT in education is illustrated in Sect. 2. The two illustrative examples are shown in Sect. 3. Finally, Sect. 4 reviews the role of DT in manufacturing education and outlines challenges and future trends.
2 Theoretical Framework
This section first reviews DTs and their role in the modelling of the lifecycle of manufacturing plants. Then, it outlines some of the technologies that enable visualization capabilities, which are fundamental for students' understanding of manufacturing processes.
2.1 Digital Twin
A Digital Twin is a "virtual and computerized counterpart of a physical system that can be used to simulate it for various purposes, exploiting a real-time synchronization of the sensed data coming from the field" [32]. DTs are characterized by multi-discipline simulations, which are fully integrated with the physical system and allow for real-time optimizations and decision making [32]. A DT consists of a digital model enhanced with bidirectional communication between the physical system and the model. According to the level of data integration with the physical system, models can be classified as [33]: Digital Models, Digital Shadows, and Digital Twins; see Fig. 2. Digital Models (DM) do not perform any automatic data exchange with the physical system. This means that the digital representation does not have access to real-time sensor data, and any optimization in the digital representation is not implemented in the physical system. In Digital Shadows (DS), sensor data are collected in real time and automatically transferred into the digital representation. Any change in the physical object is reflected in the digital object, but not the other way
Fig. 2 Manual and automatic data flows in DMs, DSs, and DTs. (The image has been adopted from Ref. [33])
around. Only when data flows automatically in both directions is a model classified as a DT. This bidirectional flow allows changes in the physical realm to be reflected in the digital object, while any change and optimization in the digital object is implemented in the physical system [33].
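The classification of [33] boils down to which of the two data flows is automatic. A minimal Python sketch of this decision rule (terminology only; the code is not taken from the cited reference) could be:

```python
from dataclasses import dataclass

@dataclass
class IntegrationLevel:
    physical_to_digital_automatic: bool  # sensed data streamed into the model
    digital_to_physical_automatic: bool  # model decisions pushed to the plant

def classify(level: IntegrationLevel) -> str:
    """Classify the model according to which data flows are automatic."""
    if level.physical_to_digital_automatic and level.digital_to_physical_automatic:
        return "Digital Twin"
    if level.physical_to_digital_automatic:
        return "Digital Shadow"
    return "Digital Model"

print(classify(IntegrationLevel(False, False)))  # Digital Model
print(classify(IntegrationLevel(True, False)))   # Digital Shadow
print(classify(IntegrationLevel(True, True)))    # Digital Twin
```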
2.2 Lifecycle Modelling
The previously illustrated DTs with different levels of data integration can be utilized to model the phases of the lifecycle of manufacturing processes, as shown in Fig. 3. Since a physical plant is not yet available, DMs are utilized within the pre-operational phases, i.e. production planning, engineering, and build, whereas DSs and DTs are adopted within the operational and post-operational phases.
Fig. 3 DTs for modeling the different phases of the lifecycle of manufacturing processes. (The image has been adopted from Ref. [34])
According to [34], the following twins can be generated to model the lifecycle of manufacturing processes:
• Planning twin: based on the product to be manufactured, a production process is designed. Discrete event simulations are generally utilized for this purpose;
• Engineering twin: domain-specific models are developed to design the necessary machines or to select them for acquisition. Examples of models typically utilized within this phase are CADs, dynamic simulations, and FEMs (finite element models). Furthermore, the control software is designed and validated using virtual commissioning simulations;
• Control twin: this twin has the purpose of monitoring and controlling the operation of the production plant. It visualizes real-time signals and information directly from
the plant controllers. Furthermore, it displays important diagrams and graphs to monitor production and allows remote access to the production plant;
• Maintenance twin: on the one hand, it enables early troubleshooting before the manufacturing plant is accessible to the maintenance technicians. On the other hand, it monitors the different items of the plant, suggesting maintenance actions through the use of algorithms that enable condition-based maintenance (a minimal sketch of such a rule is given after this list);
• Integration planning twin: allows production planning to better integrate a new product into an existing production line. The same software as in the planning and engineering phases is utilized;
• Reconfiguration twin: accelerates the reconfiguration of production lines. Again, the same software as in the planning and engineering phases is utilized.
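As referenced in the maintenance twin item above, a condition-based maintenance rule can be as simple as comparing the recent trend of a monitored signal against thresholds. The Python sketch below is illustrative only: the vibration values and limits are invented, and real limits would come from machine history or standards such as ISO 10816.

```python
import statistics

# Hypothetical vibration samples (mm/s RMS) streamed from a machine spindle.
history = [0.9, 1.0, 1.1, 1.0, 1.2, 1.3, 1.6, 1.9, 2.4]

WARNING_LEVEL = 1.8  # illustrative thresholds; real limits would come from
ALARM_LEVEL = 2.2    # machine history or vibration severity standards

def maintenance_action(samples, window=3):
    """Suggest a maintenance action from the recent trend of the signal."""
    recent = statistics.mean(samples[-window:])
    if recent >= ALARM_LEVEL:
        return "stop machine and schedule corrective maintenance"
    if recent >= WARNING_LEVEL:
        return "plan an inspection at the next production break"
    return "no action required"

print(maintenance_action(history))
```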
2.3 Visualization Technologies
Multiple technologies are required to realize the potential of DTs. For instance, the Internet of Things (IoT) and Cyber-Physical Systems (CPS) enable devices to be sensed and controlled remotely, allowing for a deeper integration between the physical and virtual domains [32]. In education, visualization technologies are important since they present data visually and allow students to understand information better than if it were presented numerically [35]. Next, three visualization technologies are illustrated: model visualization, virtual reality, and augmented reality.
2.3.1 Model Visualization
Computer models (whether 2D or 3D) allow for computer-aided simulations that study physical phenomena. Such simulations have been exploited in a multitude of fields. For instance, multibody system dynamics of 3D models (including non-linear dynamics like contact mechanics [36] and solid fragmentation [37]) can be simulated and then visualized through computer-assisted design (CAD) software [38]. In other cases, precise representations of fluids allow users to study complex flow behaviour [35], including problems where fluids and solid structures interact [39] or scenarios with non-Newtonian fluids such as mudslides [40]. Model visualization allows users to visualize the results from highly complex simulations in ways that would otherwise be difficult to understand.
2.3.2 Virtual Reality
In virtual reality (VR), headsets like the Oculus allow users to visualize and interact with a completely virtual space. Sherman and Craig [41] define VR as "a medium composed of interactive computer simulations that sense the participant's position and actions and replace or augment the feed-back to one or more senses […]". The
ability to intuitively navigate through a virtual space has enhanced the ability to communicate ideas and collaborate in various areas [42]. For instance, a planning twin (see Fig. 3) of a simulated factory was created for operators to interact with it (through VR) before it was physically built [43]. This interaction enabled human-based analysis: operators were able to provide feedback on the factory configuration and to move elements in real time. Such interaction allowed planners to search for plant trajectories that were easy to navigate without losing their effectiveness or efficiency.

2.3.3 Augmented Reality

While VR aims to replace reality with a fully virtual visualization, augmented reality (AR) extends the visualization of reality, adding graphics and information over it [44]. Whether using specialized devices such as the HoloLens or "off-the-shelf" solutions such as smartphones, AR allows the user to visualize new information about their current reality [45, 46].
3 Illustrative Examples

This section presents two illustrative examples that showcase the adoption of different DTs applied to DLFs, targeted towards universities and vocational schools.
3.1 University

The first illustrative example consists of a DLF for university education. A DT involving different phases of the factory lifecycle was created, and an educational activity was defined for each phase. All the actors of the DLF, even the physical component itself, are virtual. This DLF can be used to teach engineering topics, along with the DT technology itself. With respect to the different phases of the lifecycle of manufacturing processes (Fig. 3), the built DLF is proposed for the teaching of both control twins and maintenance twins. Up to now, it has been adopted within a course on "Modern Tools for Automation" in the MSc in Mechanical Engineering at the Universidad de los Andes (Bogotá, Colombia) to teach scheduling in the context of DTs. However, the developed DT also has the potential to introduce concepts of condition-based maintenance and the Internet of Things, as will be illustrated next.

The DLF is based on the manufacturing plant proposed in [47]; see Fig. 4. It consists of three machines in series able to process nine different types of jobs. Each job is characterized by an identification number from 1 to 9. Machine 1 (M1) and machine 3 (M3) can manufacture any job, while M2A can manufacture only odd-numbered jobs and M2B
Fig. 4 Flow shop scheme used for the “virtual commissioning methodology to integrate Digital Twins into manufacturing systems” in [47]
Fig. 5 3D flow shop representation in Experior
only even-numbered jobs. Each job j is characterized by a processing time pij on each machine i. The resource buffer and the warehouse are assumed to have infinite capacity. The flow shop operates as a Kanban pull system, since a new job is generated when the previous one enters machine M2.

The flow shop is simulated through the Experior (https://xcelgo.com/experior/) model shown in Fig. 5. This model consists of a continuous-time simulation that acts as the physical system of the DLF. The control and optimization of the flow shop are made possible by the communication among different actors. Figure 6 shows the interaction between the components of the DLF.

Fig. 6 Interaction of the flow shop, PLC, MES, Intelligence Layer and DT in [47]

The Experior flow shop is controlled through a CoDeSys soft PLC (Programmable Logic Controller) by means of a Virtual Commissioning (VC) simulation. Furthermore, the PLC enables the communication between a MES
(Manufacturing Execution System), which receives from the operator the list of jobs to manufacture, and the flow shop that executes the production. The PLC is also responsible for the synchronization of the DT model with the status of the flow shop. The DT model is a Discrete Event Simulation (DES) executed in the Simulink toolbox. Finally, an intelligence layer running in Matlab is responsible for the optimization of the scheduling sequence of the jobs to manufacture. The DT acts as a virtual test bed to evaluate the different 'what-if' scenarios that may optimize the production, while the intelligence layer hosts the rules and the knowledge to choose among the scheduling sequences developed for decision-making. Next, the different learning outcomes that can be taught with the built DLF are illustrated.

3.1.1 Scheduling

The first learning activity that can be implemented with the DLF concerns scheduling. Scheduling is a decision-making process that deals with the allocation of resources to jobs or tasks [48]. The goal of this allocation is to optimize one or more objectives, such as completing the tasks in the least time possible or minimizing the number of tasks that are completed after their due date. In particular, the proposed learning activity deals with finding a scheduling sequence for the jobs to manufacture that reduces the overall production time, or makespan (Cmax). Scheduling optimization is achieved through simulation models that consider multiple pieces of information, such as the processing time of the jobs on each machine (pij) and the setup time that a machine needs to switch from one type of job to another. Although
scheduling requires an investment of time and effort, the choice of schedule in a system has a significant impact on its performance, and it makes sense to spend resources on its optimization [48].

In the defined DLF, the DT consists of a DES that runs in Simulink. The DES model allows users to simulate scheduling sequences and calculate their makespan Cmax. As depicted in Fig. 6, the DT model is connected to the intelligence layer. The latter tests different scheduling sequences in the DES model and selects the one that optimizes the scheduling objective.

Within this activity, students learn how reactive scheduling – a process that can change schedules when unexpected events occur, while still accommodating the original objectives [48] – can be enabled by the DT technology. A failure is injected in the Experior model in either machine M2A or M2B, and students must define an algorithm within the intelligence layer able to schedule the jobs onto the remaining working machines. To complete the activity, students must first design a DES model able to reproduce the behavior of the flow shop. The model must also include the failure and repair functionalities of machines M2A and M2B. Then, they must program an algorithm in the intelligence layer that can react to the failure event and find a new scheduling sequence that works on the flow shop with a machine in a fault state. The algorithms taught within this activity belong to the family of genetic algorithms, which are optimization methods that borrow concepts from evolutionary biology (such as generations, mutations, and crossovers) [49]. Genetic algorithms can select a scheduling sequence for the flow shop by simulating multiple candidate sequences and finding the one with the shortest makespan. These scheduling sequences are tested in the DES model, since it can simulate all of them in a short amount of time.

This activity allows students to learn about scheduling in the context of DTs. To simulate the re-scheduling of a set of jobs due to the occurrence of an event in the physical system, students must implement: (i) a DES model of a flow shop that includes failure and repair functionalities; (ii) a scheduling algorithm in the Intelligence Layer, which in this case involves genetic algorithms. They must then inject the triggering event and verify that the resulting scheduling sequence fulfills the production requirements. This exercise thus teaches students about reactive scheduling and its implementation through the DT technology. The model used for this exercise is a control twin (since it monitors and controls the operation of the flow shop, see Fig. 3) at the DT (bidirectional) level of data integration.

3.1.2 Condition-Based Maintenance

A second activity planned with the DLF is the teaching of algorithms for condition-based maintenance (CBM). This is possible through the integration of a State Assessment (SA) module within the intelligence layer of the DLF. SA refers to the use of computer algorithms to classify the health status of a machine [50]. Within the built DLF, the SA module must classify the health status of machines M2A or M2B by analyzing vibration data.
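As a concrete illustration of the genetic-algorithm search used in the scheduling activity of Sect. 3.1.1, the following sketch shows how candidate job sequences could be evaluated and evolved. The actual course implementation relies on a DES model in Simulink and an intelligence layer in Matlab; this Python fragment is only a simplified stand-in, and the processing-time matrix, population size and number of generations are illustrative assumptions rather than the values used in the DLF.

```python
import random

# Hypothetical processing times P[i][j] (machine i, job j) for a three-machine,
# nine-job flow shop like the one in Fig. 4; the real values live in the DES model.
P = [[4, 3, 6, 2, 5, 4, 3, 6, 2],   # M1
     [5, 2, 4, 3, 6, 2, 5, 4, 3],   # M2 (M2A/M2B merged for simplicity)
     [3, 4, 2, 5, 3, 6, 2, 3, 4]]   # M3

def makespan(sequence):
    """Cmax of a permutation flow shop: jobs visit M1 -> M2 -> M3 in the given order."""
    completion = [0] * len(P)
    for job in sequence:
        for i in range(len(P)):
            start = max(completion[i], completion[i - 1] if i else 0)
            completion[i] = start + P[i][job]
    return completion[-1]

def genetic_schedule(jobs, pop_size=30, generations=200, mutation_rate=0.2):
    """Minimal genetic algorithm: elitist selection, order crossover, swap mutation."""
    population = [random.sample(jobs, len(jobs)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=makespan)
        parents = population[: pop_size // 2]            # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, len(jobs) - 1)
            child = a[:cut] + [j for j in b if j not in a[:cut]]   # order crossover
            if random.random() < mutation_rate:                    # swap mutation
                x, y = random.sample(range(len(jobs)), 2)
                child[x], child[y] = child[y], child[x]
            children.append(child)
        population = parents + children
    return min(population, key=makespan)

best = genetic_schedule(list(range(9)))
print("best sequence:", best, "makespan:", makespan(best))
```

In the reactive-scheduling exercise, the failure of M2A or M2B would simply be reflected in the processing-time data (or in the DES model) before the search is re-run, so that the algorithm only produces sequences that are feasible on the remaining machines.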
Fig. 7 Visual representation of the 5 states in [51]. In increasing order of failure, they are: 0 N, 1 N, 3 N, 1 W and 2 W. Only 2 W is considered as a functional failure, while 0 N through 1 W are different levels of potential failures
To develop the SA module, a dataset obtained from [51] is provided to students to emulate possible vibration data sensed from machines M2A and M2B. This dataset contains vibration data captured from an induction motor in four different out-of-balance fault states (plus a healthy state), as indicated in Fig. 7. Students are asked to classify the different health statuses through ML methods. Since the intelligence layer is programmed in Matlab, the same software must be used for the SA module.

The SA module that students must develop is characterized by two main tasks: data preprocessing and data analysis. Data preprocessing involves transforming raw sensor data into usable data. This consists of the following steps [51]: analog-to-digital conversion, data quality inspection, regime identification, abnormality removal, and denoising and filtering. After the data has been conditioned, data analysis takes place. Here, features are extracted from the preprocessed data and provided to the ML algorithm for the classification of the health status.

Through this activity, students can become familiar with ML algorithms in the context of CBM. Furthermore, the integration of the SA module within the DLF provides an understanding of the role of the assessment of machine conditions within the operation of a manufacturing plant. The model used for this exercise is a maintenance twin (Fig. 3), as it monitors the condition of the machines and preemptively warns about possible failures. However, it is not yet a DT, since vibration data are not automatically transmitted to the SA module but are instead input manually to the ML algorithm. The conversion of the SA module into a DT can be achieved through the utilization of IoT technologies within the DLF.
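The SA module itself is developed in Matlab, but the two tasks it performs (preprocessing plus feature extraction, and ML-based classification) can be sketched as follows. This is a hedged illustration: the signal parameters, features and classifier are plausible choices for vibration-based state assessment, not the ones prescribed in [51], and the five-state data are generated synthetically instead of being read from the real dataset.

```python
import numpy as np
from scipy import signal
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def preprocess(raw, fs=10_000, cutoff=2_000):
    """Denoising/filtering step: remove the mean and low-pass filter the signal."""
    x = raw - np.mean(raw)
    b, a = signal.butter(4, cutoff / (fs / 2), btype="low")
    return signal.filtfilt(b, a, x)

def extract_features(x):
    """Time-domain features commonly used for vibration-based condition monitoring."""
    rms = np.sqrt(np.mean(x ** 2))
    kurtosis = np.mean((x - x.mean()) ** 4) / (np.std(x) ** 4 + 1e-12)
    crest = np.max(np.abs(x)) / (rms + 1e-12)
    return [rms, kurtosis, crest, np.ptp(x)]

# Synthetic stand-in for the five states of Fig. 7: each state is emulated as noise
# plus an unbalance component of growing amplitude (purely illustrative data).
rng = np.random.default_rng(0)
X, y = [], []
for label, amplitude in enumerate([0.0, 0.5, 1.0, 1.5, 2.0]):
    for _ in range(40):
        t = np.arange(4096) / 10_000
        raw = amplitude * np.sin(2 * np.pi * 50 * t) + rng.normal(0, 1, t.size)
        X.append(extract_features(preprocess(raw)))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```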
3.1.3 Internet of Things

Once the SA module is working, vibration data can automatically reach it through IoT technologies, converting it into a full maintenance DT. IoT is a paradigm where multiple uniquely identifiable objects (or things) can communicate with each other through a network [52]. The communication that students must build within the learning activity is illustrated in Fig. 8. Here, vibration data are sent to the cloud through IoT, stored there, and downloaded by the SA module for the classification of the health status of machines M2A and M2B.

Fig. 8 Communication of the flow shop, PLC, MES, Intelligence Layer and DT, with the addition of a cloud that simulates data acquisition from physical assets with increasing degrees of complexity

To build the aforementioned connections, the activity has been decomposed into three steps of increasing complexity. Through these steps, students can connect the developed SA module to a (simulated) IoT device that feeds real-time vibration data. The three steps are:
1. Local folder: vibration data are stored in a local folder, and the SA module must be modified to get access to it. In the CBM activity, the SA module processed data input manually to the ML algorithm. Here, the SA module must autonomously look for the data within the local folder.
2. Cloud folder: the dataset input to the SA module is now stored as a file in a remote location (i.e. the cloud). Therefore, students must create an interface between their local SA module and the cloud. For this case study, XRepo (an education-focused information system for predictive maintenance [53, 54]) is adopted as the cloud.
3. Real-time data: once the data can be accessed remotely, the complexity is increased by simulating a real-time stream of data instead of reading data from
the cloud. This occurs using XWare (Sanchez-Londono, D., Barbieri, G., & Garces, K. (2022). XWare: a Middleware for Smart Retrofitting in Maintenance. IFAC-PapersOnLine, 55(19), 109–114), a software that allows multiple IoT devices to send sensor data to a central location through a network. A real-time data stream at a given frequency is simulated, and XWare is used to send these data to the SA module as though they came from a real device. Here, students need to interface the SA module with XWare for the acquisition of the real-time data stream.

With these three steps, the SA module is transformed into a DT since: (i) data are collected in real time from the simulated physical system and sent to the intelligence layer; (ii) the health status computed within the SA module is used to trigger the calculation of a new scheduling sequence. By completing this activity, students can integrate the two models they previously worked on (a control twin that allows scheduling and a maintenance twin that enables CBM) into a single DT that reschedules the flow shop depending on its condition, learning about IoT in the process.
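The real-time step relies on XRepo and XWare, whose APIs are not reproduced here. Purely as an illustration of the pattern students implement in step 3, the following sketch emulates an IoT device pushing vibration frames at a fixed frequency, with the SA module consuming the stream and, upon detecting a fault, asking the intelligence layer for a new scheduling sequence. All names and thresholds are assumptions.

```python
import queue
import random
import threading
import time

data_queue = queue.Queue()   # stands in for the channel (XWare-like) delivering sensor frames

def simulated_device(frequency_hz=5, frames=20):
    """Emulates an IoT device pushing vibration frames at a fixed frequency."""
    for _ in range(frames):
        data_queue.put([random.gauss(0, 1) for _ in range(256)])
        time.sleep(1 / frequency_hz)
    data_queue.put(None)            # sentinel: stream finished

def state_assessment(frame):
    """Placeholder for the SA module: flags a fault when the frame energy is high."""
    energy = sum(v * v for v in frame) / len(frame)
    return "fault" if energy > 1.5 else "healthy"

def sa_module():
    """Consumes the stream; a detected fault triggers a rescheduling request."""
    while (frame := data_queue.get()) is not None:
        if state_assessment(frame) == "fault":
            print("fault detected -> request new scheduling sequence from the intelligence layer")

threading.Thread(target=simulated_device, daemon=True).start()
sa_module()
```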
3.2 Vocational School

This section presents the implementation, within a vocational school, of an engineering twin based on the VC technology. In the context of manufacturing, VC is utilized to virtually verify the PLC code before its deployment [55]. Therefore, a DLF based on VC allows learning both PLC programming and code verification techniques. Nowadays, these skills are highly demanded by enterprises due to the need to reduce the commissioning time of manufacturing plants. For this reason, different vocational and academic institutions have started integrating these skills within their educational programs; see [56, 57].

In addition, the utilization of a DLF based on VC enables the generation of a cheaper, more effective, and safer didactical environment. This virtual environment can be used concurrently by several students, avoiding the need for experimentation with expensive physical systems by students without proper expertise, technical supervision, or a background in the automation domain. This reduces the occupancy rate of laboratories and enables online learning in the case of virtual educational programs.

Xcelgo A/S supported the Copenhagen School of Marine Engineering and Technology Management, located in Denmark, in integrating VC within its educational program (https://xcelgo.com/more-schools-use-digital-twins/). A DLF of a pneumatic drill cell (Fig. 9) was built with the software Experior to enable the learning of: (i) VC modeling; (ii) PLC code programming; (iii) PLC code validation.

Fig. 9 Pneumatic drill cell of the Copenhagen School of Marine Engineering and Technology Management and its Experior model

Xcelgo A/S virtualized the pneumatic drill cell within Experior by building a library of its physical components. Along with the geometric representation, the elements of the library have the same software interface as the physical components. For instance, a single-acting cylinder with a limit switch is modeled with two signals as interface: a digital input (limit switch) and a digital output (solenoid valve). In this way, the virtual components present the same software interface as the physical system, and the PLC code developed with the model also applies to the physical pneumatic drill. The software interface of the components of the pneumatic drill cell is represented in Fig. 10.

Fig. 10 Software interface of the components of the pneumatic drill cell

After these preliminary phases, the learning activity was designed and implemented. It consisted of five phases:
1. Pneumatic drill model: using the library built by Xcelgo A/S, students modeled the pneumatic drill cell by dragging and dropping its components into the canvas and connecting them to reproduce the physical system
2. PLC code development: given the software interface of the pneumatic drill cell (Fig. 10) and the software requirements, students developed the PLC code for controlling the system. This phase was performed on the students' laptops using a soft PLC. The control code had to implement the following sequence of operations:
(a) Cylinder C1 is ejected to supply a load to the drill station; the drilling is performed through the extension of cylinder C2
(b) As soon as cylinder C2 returns to the idle position after drilling the load, cylinder C3 is ejected if and only if sensor RM indicates the presence of the transfer vehicle
(c) Once the load is positioned on the transfer vehicle, sensor OF-M recognizes whether the color of the load is white. In this case, the transfer vehicle moves to the right until sensor RH detects its presence
(d) The rotary arm is activated to grab the load through the suction cup, removing the load from the transfer vehicle and placing it into the buffer
(e) If a brown load is processed instead, it is sent to the left section of the module
(f) Once the transfer vehicle returns to the middle position, the sequence is repeated
3. Virtual commissioning: students built the VC simulation by interfacing the control code with the Experior model. In particular, the interface variables of the PLC code were linked to those of the Experior components. Then, the soft PLC was connected to Experior using the OPC UA communication protocol
4. Code verification: students implemented test cases to verify the consistency between the software requirements and the developed PLC code
5. Deployment: after the verification of the PLC code, students deployed the code within the physical PLC of the pneumatic drill cell

Along with skills in VC modeling, PLC code design and verification, the presented activity showed students the main benefit of the VC technology: reducing the time needed for the commissioning of manufacturing plants.
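The control code for this cell is written for a CoDeSys soft PLC (IEC 61131-3) and is not shown here. As a hedged illustration of the sequence (a)–(f) only, the following Python sketch walks through the steps as a simple generator; the signal names (C1, C2, C3, RM, OF-M, RH) are taken from the text, while the data structure and the simulated sensor feedback are assumptions.

```python
from dataclasses import dataclass

@dataclass
class DrillCellIO:
    """Assumed I/O image of the cell: sensor inputs used by the sequence."""
    rm_vehicle_present: bool = True    # RM: transfer vehicle in loading position
    of_m_white: bool = True            # OF-M: colour sensor, True = white load
    rh_vehicle_right: bool = False     # RH: vehicle reached the right-hand position

def drill_cycle(io: DrillCellIO):
    """One pass through the sequence (a)-(f); each yield stands for one step of the cycle."""
    yield "extend C1: supply a load to the drill station"                          # (a)
    yield "extend C2: drill the load"                                              # (a)
    yield "retract C2: drilling finished"                                          # (b)
    if io.rm_vehicle_present:
        yield "extend C3: push the load onto the transfer vehicle"                 # (b)
    if io.of_m_white:                                                              # (c)
        while not io.rh_vehicle_right:
            io.rh_vehicle_right = True                     # simulated sensor feedback
            yield "move vehicle right until RH detects it"
        yield "rotary arm: pick the load with the suction cup and place it in the buffer"  # (d)
    else:
        yield "move vehicle left: brown load sent to the left section"             # (e)
    yield "vehicle back to the middle position: the sequence can restart"          # (f)

for step in drill_cycle(DrillCellIO()):
    print(step)
```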
4 Future Trends and Challenges

In this chapter, the role of the DT in manufacturing education has been presented. Due to the modeling capabilities at the basis of this technology, the DT can support the implementation of Learning Factories (LFs). LFs enable learning in a factory environment and – due to the possibility of experiential learning – they are considered the most promising approach to acquire the skills necessary to succeed in the increasingly complex and technologically driven workplace, political, and social arenas of the twenty-first century. LFs can be utilized to teach engineering topics, along with the DT technology itself.

In this chapter, two illustrative examples of a Digital Learning Factory (DLF) have been presented. A DLF is an integrated IT environment where all real LF resources, processes and products are tracked in a digital model. The first illustrative example shows a DLF of a flow shop that allowed students to learn about: (i) Scheduling; (ii) Condition-based Maintenance; (iii) the Internet of Things. Even if each module provides important technical competences to students, we think that the main added value is the students' understanding of how three different technologies can be integrated for the realization of reactive scheduling enabled by the DT. Therefore, DT and DLF can be utilized to showcase the integration of different technologies. In the second illustrative example, Virtual Commissioning (VC) is utilized to virtually verify the PLC code before its deployment. A DLF based on VC was built, allowing students to learn both PLC programming and code verification techniques. In this example, students understand how the DT can support the design process of manufacturing systems.

The implemented teaching activities were targeted at both university and vocational school students. Furthermore, they dealt with different phases of the lifecycle of manufacturing processes. The first illustrative example was implemented with university students and dealt with control and maintenance twins, while the second was implemented with students from vocational schools and dealt with engineering twins. It can be noticed that the application of DT and DLF enables a flexible teaching environment that can be customized based on the type of students and the competences that must be taught.

Next, some challenges and future trends are outlined:
• Virtual Learning Factory (VLF): in this chapter, only illustrative examples of DLFs have been presented. A VLF provides visual software tools through Virtual Reality (VR) or Augmented Reality (AR) technology to improve DLFs. Therefore, it is important to explore the role of VR and AR in education and the added value that these technologies can provide when integrated with DT and DLF;
• Lifecycle phases of manufacturing systems: in this work, the DT technology has been applied to different phases of the lifecycle of manufacturing systems. How the DT can support the teaching of the activities involved in the planning, integration and reconfiguration twins must be explored, along with applications different from the ones showcased in this chapter within the engineering, control and maintenance twins;
• Twenty-first century skills: these comprise essential and technical skills that have been identified as crucial for success in twenty-first century society and workplaces by educators, business leaders, academics, and governmental agencies. In this work, only the acquisition of technical skills has been dealt with. However, how DT and LFs can support the acquisition of essential skills must be investigated.
References

1. Barricelli, B. R., Casiraghi, E., & Fogli, D. (2019). A survey on digital twin: Definitions, characteristics, applications, and design implications. IEEE Access, 7, 167653–167671.
2. Liu, M., Shuiliang, F., Huiyue, D., & Cunzhi, X. (2021). Review of digital twin about concepts, technologies, and industrial applications. Journal of Manufacturing Systems, 58, 346–361.
3. Lu, Y., Chao, L., Kai Wang, I. K., Huiyue, H., & Xun, X. (2020). Digital twin-driven smart manufacturing: Connotation, reference model, applications and research issues. Robotics and Computer-Integrated Manufacturing, 61, 101837.
4. Liljaniemi, A., & Paavilainen, H. (2020). Using digital twin technology in engineering education – Course concept to explore benefits and barriers. Open Engineering, 10(1), 377–385.
5. Nikolaev, S., Gusev, M., Padalitsa, D., Mozhenkov, E., Mishin, S., & Uzhinsky, I. (2018). Implementation of "digital twin" concept for modern project-based engineering education. In IFIP international conference on product lifecycle management.
6. Rassudov, L., & Korunets, A. (2020). COVID-19 pandemic challenges for engineering education. In XI international conference on electrical power drive systems (ICEPDS).
7. Sepasgozar, S. M. (2020). Digital twin and web-based virtual gaming technologies for online education: A case of construction management and engineering. Applied Sciences, 10(13), 4678.
8. Hamid, M. H. M. I., Masrom, M., & Salim, K. R. (2014). Review of learning models for production based education training in technical education. In International conference on teaching and learning in computing and engineering.
9. Hempen, S., Wischniewski, S., Maschek, T., & Deuse, J. (2010). Experiential learning in academic education: A teaching concept for efficient work system design. In 4th workshop of the special interest group on experimental interactive learning in industrial management.
10. Plorin, D., & Müller, E. (2013). Developing an ambient assisted living environment applying the advanced learning factory. In International simulation and gaming association conference.
11. Barbieri, G., Garces, K., Abolghasem, S., Martinez, S., Pinto, M. F., Andrade, G., Castro, F., & Jimenez, F. (2021). An engineering multidisciplinary undergraduate specialty with emphasis in society 5.0. International Journal of Engineering Education, 37(3), 744–760.
12. Lamancusa, J. S., Zayas, J. L., Soyster, A. L., Morell, L., & Jorgensen, J. (2008). 2006 Bernard M. Gordon Prize Lecture: The learning factory: Industry-partnered active learning. Journal of Engineering Education, 97(1), 5–11.
13. R. S. (1988). Außerbetriebliche CIM-Schulung in der Lernfabrik. In Produktionsforum'88 (pp. 581–601).
14. Alptekin, S., Pouraghabagher, R., McQuaid, P., & Waldorf, D. (2001). Teaching factory. In Annual conference.
15. Wagner, U., AlGeddawy, T., ElMaraghy, H., & Müller, E. (2012). The state-of-the-art and prospects of learning factories. Procedia CIRP, 3, 109–114.
16. Sudhoff, M., Prinz, C., & Kuhlenkötter, B. (2020). A systematic analysis of learning factories in Germany – concepts, production processes, didactics. Procedia Manufacturing, 45, 114–120.
17. Wienbruch, T., Leineweber, S., Kreimeier, D., & Kuhlenkötter, B. (2018). Evolution of SMEs towards Industrie 4.0 through a scenario based learning factory training. Procedia Manufacturing, 23, 141–146.
18. Abele, E., Metternich, J., Tisch, M., Chryssolouris, G., Sihn, W., ElMaraghy, H., Hummel, V., & Ranz, F. (2015). Learning factories for research, education, and training. Procedia CIRP, 32, 1–6.
19. Abele, E. (2016). Learning factory. CIRP Encyclopedia of Production Engineering.
20. Abele, E., Chryssolouris, G., Sihn, W., Metternich, J., ElMaraghy, H., Seliger, G., Sivard, G., ElMaraghy, W., Hummel, V., Tisch, M., & Seifermann, S. (2017). Learning factories for future oriented research and education in manufacturing. CIRP Annals, 66(2), 803–826.
21. Andrés, M., Álvaro, G., & Julián, M. (2019). Advantages of learning factories for production planning based on shop floor simulation: A step towards smart factories in Industry 4.0. In World conference on engineering education (EDUNINE).
22. Haghighi, A., Shariatzadeh, N., Sivard, G., Lundholm, T., & Eriksson, Y. (2014). Digital learning factories: Conceptualization, review and discussion. In 6th Swedish production symposium.
23. Al-Geddawy, T. (2020). A digital twin creation method for an opensource low-cost changeable learning factory. Procedia Manufacturing, 51, 1799–1805.
24. Protic, A., Jin, Z., Marian, R., Abd, K., Campbell, D., & Chahl, J. (2020). Implementation of a bi-directional digital twin for Industry 4 labs in academia: A solution based on OPC UA. In IEEE international conference on industrial engineering and engineering management (IEEM).
25. Brenner, B., & Hummel, V. (2017). Digital twin as enabler for an innovative digital shopfloor management system in the ESB Logistics Learning Factory at Reutlingen-University. Procedia Manufacturing, 9, 198–205.
26. Ralph, B. J., Schwarz, A., & Stockinger, M. (2020). An implementation approach for an academic learning factory for the metal forming industry with special focus on digital twins and finite element analysis. Procedia Manufacturing, 45, 253–258.
27. Hänggi, R., Nyffenegger, F., Ehrig, F., Jaeschke, P., & Bernhardsgrütter, R. (2020). Smart learning factory – network approach for learning and transfer in a digital & physical set up. In IFIP international conference on product lifecycle management.
28. Uhlemann, T. H. J., Schock, C., Lehmann, C., Freiberger, S., & Steinhilper, R. (2017). The digital twin: Demonstrating the potential of real time data acquisition in production systems. Procedia Manufacturing, 9, 113–120.
29. Grube, D., Malik, A. A., & Bilberg, A. (2019). SMEs can touch Industry 4.0 in the smart learning factory. Procedia Manufacturing, 31, 219–224.
30. Martinez, S., Mariño, A., Sanchez, S., Montes, A. M., Triana, J. M., Barbieri, G., Abolghasem, S., Vera, J., & Guevara, M. (2021). A digital twin demonstrator to enable flexible manufacturing with robotics: A process supervision case study. Production & Manufacturing Research.
31. Umeda, Y., Ota, J., Shirafuji, S., Kojima, F., Saito, M., Matsuzawa, H., & Sukekawa, T. (2020). Exercise of digital kaizen activities based on 'digital triplet' concept. Procedia Manufacturing, 45, 325–330.
32. Negri, E., Fumagalli, L., & Macchi, M. (2017). A review of the roles of digital twin in CPS-based production systems. Procedia Manufacturing, 11, 939–948.
33. Kritzinger, W., Karner, M., Traar, G., Henjes, J., & Sihn, W. (2018). Digital twin in manufacturing: A categorical literature review and classification. IFAC-PapersOnLine, 51, 1016–1022.
34. Biesinger, F., & Weyrich, M. (2019). The facets of digital twins in production and the automotive industry. In 23rd international conference on mechatronics technology (ICMT).
35. Post, F. H., & Van Walsum, T. (1993). Fluid flow visualization. In Focus on scientific visualization (pp. 1–40). Springer.
36. Bei, Y., & Fregly, B. J. (2004). Multibody dynamic simulation of knee contact mechanics. Medical Engineering & Physics, 26, 777–789.
37. Pandolfi, A., & Ortiz, M. (2002). An efficient adaptive procedure for three-dimensional fragmentation simulations. Engineering with Computers, 18, 148–159.
38. Schiehlen, W. (1997). Multibody system dynamics: Roots and perspectives. Multibody System Dynamics, 1, 149–188.
39. Hübner, B., Walhorn, E., & Dinkler, D. (2004). A monolithic approach to fluid–structure interaction using space–time finite elements. Computer Methods in Applied Mechanics and Engineering, 193, 2087–2104.
40. O'Brien, J. S., Julien, P. Y., & Fullerton, W. T. (1993). Two-dimensional water flood and mudflow simulation. Journal of Hydraulic Engineering, 119, 244–261.
41. Sherman, W. (2003). Understanding virtual reality: Interface, application, and design. Morgan Kaufmann.
42. Soete, N., Claeys, A., Hoedt, S., Mahy, B., & Cottyn, J. (2015). Towards mixed reality in SCADA applications. IFAC-PapersOnLine, 48, 2417–2422.
43. Havard, V., Jeanne, B., Lacomblez, M., & Baudry, D. (2019). Digital twin and virtual reality: A co-simulation environment for design and assessment of industrial workstations. Production & Manufacturing Research, 7, 472–489.
44. Wursthorn, S., Coelho, A. H., & Staub, G. (2004). Applications for mixed reality. In XXth ISPRS congress, Istanbul, Turkey.
45. Cipresso, P., Giglioli, I. A. C., Raya, M. A., & Riva, G. (2018). The past, present, and future of virtual and augmented reality research: A network and cluster analysis of the literature. Frontiers in Psychology, 9, 2086.
46. Matuszka, T., Gombos, G., & Kiss, A. (2013). A new approach for indoor navigation using semantic web technologies and augmented reality. In International conference on virtual, augmented and mixed reality.
47. Barbieri, G., Bertuzzi, A., Capriotti, A., Ragazzini, L., Gutierrez, D., Negri, E., & Fumagalli, L. (2021). A virtual commissioning based methodology to integrate digital twins into manufacturing systems. Production Engineering, 15, 397–412.
48. Pinedo, M. (2016). Scheduling: Theory, algorithms, and systems. Springer.
49. Mitchell, M. (1998). An introduction to genetic algorithms. MIT Press.
50. Li, R., Verhagen, W. J., & Curran, R. (2020). A systematic methodology for prognostic and health management system architecture definition. Reliability Engineering & System Safety, 193, 106598.
51. Barbieri, G., Sanchez-Londoño, D., Cattaneo, L., Fumagalli, L., & Romero, D. (2020). A case study for problem-based learning education in fault diagnosis assessment. IFAC-PapersOnLine, 53, 107–112.
52. Borgia, E. (2014). The internet of things vision: Key features, applications and open issues. Computer Communications, 54, 1–31.
53. Romero, N., Medrano, R., Garces, K., Sanchez-Londono, D., & Barbieri, G. (2021). XRepo 2.0: A big data information system for education in prognostics and health management. International Journal of Prognostics and Health Management, 12.
54. Ardila, A., Martinez, F., Garces, K., Barbieri, G., Sanchez-Londono, D., Caielli, A., Cattaneo, L., & Fumagalli, L. (2020). XRepo – towards an information system for prognostics and health management analysis. Procedia Manufacturing, 42, 146–153.
55. Lee, C. G., & Park, S. C. (2014). Survey on the virtual commissioning of manufacturing systems. Journal of Computational Design and Engineering, 1(3), 213–222.
56. Hofmann, W., Langer, S., Lang, S., & Reggelin, T. (2017). Integrating virtual commissioning based on high level emulation into logistics education. Procedia Engineering, 178, 24–32.
57. Mortensen, S. T., & Madsen, O. (2018). A virtual commissioning learning platform. Procedia Manufacturing, 23, 93–98.
Giacomo Barbieri received the bachelor's and master's degrees in mechanical engineering (in 2010 and 2012, respectively) and the Ph.D. in Automation Engineering (2016) from the University of Modena and Reggio Emilia. The master's degree and the Ph.D. were in collaboration with Tetra Pak Packaging Solutions. In June 2016, he joined the Universidad de los Andes, first as a Postdoctoral Researcher and then as an Assistant Professor. A member of the technical committees of IFAC (International Federation of Automatic Control) on 'Computers for Control' (3.1) and 'Manufacturing Plant Control' (5.1), he has written approximately 40 articles in international journals and conference proceedings. His main expertise is Maintenance, Asset Management and Industrial Automation.
David Sanchez-Londoño is a doctoral student at the Politecnico di Milano and the Universidad de los Andes. He has a Mechanical Engineering degree from Universidad EAFIT and a Master’s degree in Mechanical Engineering from Universidad de los Andes. He has worked on various research subjects related to industrial digitization for the last three years. Such topics include intelligent maintenance, data analysis, cyber-physical systems (CPS), Internet of Things devices, and smart maintenance management. His research trajectory has led him to research the intersection of value-based asset management and the implementation of smart maintenance in industrial settings.
David Andres Gutierrez received the bachelor's degree in Mechatronic Engineering and the master's degree in Mechanical Engineering from the Universidad Santo Tomás (2016) and the Universidad de los Andes (2019), respectively. In May 2019, he joined the company Xcelgo A/S, located in Denmark, to work as a digital factory developer, a role that involves areas of knowledge such as automation, control, PLC programming and OOP.
Rafael Vigon is a master's student in mechanical engineering at the Politecnico di Milano. His main areas of expertise are digital twins and the Internet of Things.
Elisa Negri is Senior Assistant Professor in the Manufacturing Group of the Politecnico di Milano (Italy) at the Department of Management, Economics and Industrial Engineering. She pursued her education as an Industrial Engineer at Politecnico di Milano, achieving her PhD degree in December 2016. She is now focusing her research on smart manufacturing and digital technologies for manufacturing, spanning from data modelling for manufacturing systems to advanced simulation to support production planning and control activities. She has been and is still involved in European, national and regional research projects, such as MIICS (Circular and Sustainable Made In Italy), NePRev, L4MS, Africa Innovation Leaders, and in industrial projects. She is Professor of the BSc course "Industrial Plants Management" ("Gestione degli Impianti Industriali") at Politecnico di Milano. She has also been involved in program direction and teaching activities for the POLIMI Graduate School of Management. To date, she is co-author of 15 papers in international journals, 30 papers presented at international conferences and 5 international book chapters.
Luca Fumagalli is Associate Professor in the Manufacturing Group of the School of Management of Politecnico di Milano (Italy) at the Department of Management, Economics and Industrial Engineering. He is a Mechanical Engineer, graduated at Politecnico di Milano in 2006, and obtained his PhD in Industrial Engineering at Politecnico di Milano in 2010. He works on research topics concerning production management, industrial services and, in particular, maintenance management, with a specific concern for new technological solutions. His research activity has also been related to European funded research projects. Luca Fumagalli is co-director of the Industry 4.0 Lab at the School of Management of Politecnico di Milano. He has been visiting professor at Warsaw University of Technology (Poland), Universidad de Los Andes (Colombia) and Pontificia Universidad Catolica de Valparaiso (Chile). He is co-author of more than one hundred scientific publications.
Part V
Conclusion
Future Evolution of Digital Twins Roberto Saracco and Michael Lipka
Abstract In spite of the increasing usage and development of Digital Twin solutions, the evolution is characterized by different niches with different needs and requirements. Some application domains are more advanced than others, and they can be used to explore the possible "futures". Different steps in the evolution are identified and presented. The next steps of the evolution are identified in terms of major technological challenges to be solved (such as Data Capture and Management, increased Intelligence, Visualization, Autonomy, Swarm Intelligence and so forth). This chapter sheds light on the future by introducing some future application cases ranging from healthcare, automotive and construction up to the spatial web and the Digital Twin of everything. The chapter concludes by providing a perspective on open issues (from standards to federation to platform creation) and points to relevant technological enablers that may accelerate a wide adoption of the Digital Twin.

Keywords Artificial intelligence · Data capture · Data management · Healthcare · Integrity and security · Personal digital twin · Swarm intelligence · Use cases · Visualization
1 Introduction

The future is already here, it is just not very evenly distributed. [William Gibson]
In looking at the future evolution of Digital Twins, we feel it appropriate to start with this quote, because by looking at what is already present in some niches it is possible to explore the possible "futures".
Digital Twins are being used, or claimed to be used, in areas as diverse as breweries,1 hospitals,2 building construction3 and maintenance,4 automotive,5 energy innovation,6 logistics,7 retail,8 tower and drone facility management,9 aviation,10 farming11… For each of these areas we include a reference for a deep dive; some of these will be picked up as future use cases and some are discussed in other sections of this book. Of course, we are not mentioning manufacturing, since this is the place that saw the first application – and evolution – of digital twins, and it is well represented in other chapters.

Farming represents an interesting area of digital twin application, since twins are not just applied to farming machinery (or farming processes) but also to cattle: there are already digital twins for cows,12 able to monitor their wellbeing and the amount of milk produced, and to record their health and productivity. AI applications are used to find out if something is – or, even better, could go – wrong, supporting prompt recovery and/or risk-avoidance actions.

If digital twins can find a useful application for cattle, why not for humans? Indeed, digital twins are also starting to be adopted to model people,13 individual persons as well as groups of people, like a team in a company. In the same way as we can use a digital twin to model object characteristics, we could use one to model some characteristics of a person, with the obvious starting point of biometrics that can be harvested through sensors (wearable, contact and embedded, plus ambient sensors) and applied to healthcare. This specific application is likely to see a strong uptake, courtesy of the Covid-19 pandemic that has boosted the demand for remote monitoring. The use of Personal Digital Twins in pandemics will be the focus of one of the use cases.
1 https://www.youtube.com/watch?v=NQvBukVNqG4
2 https://www.hindawi.com/journals/ace/2020/8846667/
3 https://www.the-possible.com/digital-twin-in-construction-industry/
4 https://www.intellias.com/digital-twins-in-facility-management-the-clear-pathforward-for-intelligent-buildings/
5 https://www.hannovermesse.de/en/news/news-articles/cars-get-digital-twins
6 https://www.bentley.com/en/perspectives-and-viewpoints/topics/perspectives/2019/digital-twins-drive-innovation-in-energy-sector
7 https://www.dhl.com/content/dam/dhl/global/core/documents/pdf/glo-core-digital-twins-inlogistics.pdf
8 https://www.altiusdata.com/digital-twin-2/
9 https://www.brighttalk.com/webcast/5648/441408/the-application-of-drones-and-digitaltwins-in-the-tower-industry
10 https://kontakt.io/blog/digital-twins-in-aviation-and-8-ways-they-can-transform-workflows/
11 https://www.challenge.org/knowledgeitems/why-modern-farming-need-the-digital-twins/
12 https://www.vmware.com/radius/cows-in-the-cloud/
13 https://www.rd.ntt/e/ai/0004.html
The following step is the modelling of cognitive capability. This is an area being explored14 within academia (including IEEE) and it is raising interest in companies like IBM and SAP that support enterprise resource management, with knowledge management becoming a very important aspect. Cognitive Digital Twins were first proposed by IBM in the manufacturing context and will likely be extended in the coming years to people's knowledge. Also, the decentralisation and ubiquity provided by the web are creating an opportunity to leverage personal digital twin instances for knowledge distribution and exploitation. This will be the focus of another use case.

In this chapter we address, first, the evolution trends, i.e. how Digital Twins can evolve and what technologies are needed; secondly, the future that is already here, that is, the advanced use of Digital Twins today and how this can evolve in the coming years; and thirdly, the future applications of Digital Twins, presented in terms of scenarios.
2 Evolution

Indeed, the concept and application of Digital Twins have been evolving rapidly in the last fifteen years, and all signs point towards further rapid, and accelerating, evolution in this decade. The evolution can be considered in terms of:
– increased capabilities of the digital twin
– broader fields of application (discussed in the next section)
– interoperability of different application fields of digital twins

In the last decade we have seen (see Fig. 1) an evolution of the concept of digital twin from a model of a physical entity (first stage), mostly used in the design phase, to a mirroring of an existing physical entity (second stage) that can be used as a reference model for a specific physical entity, thus introducing the aspect of "instance"
Fig. 1 Evolution of DT concept
14 https://digitalreality.ieee.org/webinars#16
(at stage 1 this was not existent: a digital twin would represent a turbine model, not a specific turbine that was manufactured). This has introduced into the Digital Twin the Thread component, i.e. the record of the evolution of that specific instance (how the physical entity was manufactured, how it has been customised, sold, …).

A subsequent evolution, stage 2.5, led to a mirroring of the present status of the physical entity (here the concept of instance is crucial, since several instances of a product will be in different states, depending on how they are being used). This requires means to synchronise a given physical entity with its associated instance. Synchronisation can differ, from mere data entry to a continuous flow of data from the physical entity, usually leveraging sensors (IoT) embedded in the physical entity and connected through a network to the digital twin. This has added the Digital Shadow component to the Digital Twin (hence digital model, digital shadow and digital thread complete the three components of today's digital twins). In the future we might foresee a fourth component, that of embedded AI, with reasoning capabilities extended over time, endowing the digital twin with autonomous capabilities.

At stage 3, the one where most of industry currently stands in the adoption of Digital Twins, we have a bi-directional interaction between the physical entity and its Digital Twin. This results in an expanded Digital Thread, which now records all significant events occurring in operation (and maintenance), as well as in the capability of affecting the operation of the physical entity, potentially in an autonomous way. This step signals a departure from the original concept of the Digital Twin, where it was a passive representation of a physical entity. As soon as the digital twin can influence the physical entity, it can – potentially – become an active component. Today, as represented in Fig. 1, in most cases this does not happen: the digital twin "influences" the physical twin just by acting as a gateway for an external controller to operate on the physical twin. The interaction is not the result of an autonomous decision on the side of the Digital Twin.

The next step, mostly in the future, takes the digital twin to stage 4, where a certain degree of autonomy in the digital-physical twin pair is present. At this stage some of the entity's functionalities may be provided by the interworking of the digital/physical twin. In addition, local data analytics are used, as well as a minimum of intelligence at the digital twin level. The level of autonomy increases as Digital Twins move to stage 5, where they can interact, as autonomous agents, in the cyberspace, expanding local data analytics to global data analytics. On the horizon there is the possibility of self-generation of digital twins, of hierarchies of digital twins, of meshed networks of digital twins, and of hybrid digital twins having a human and a machine component, thus embedding deterministic and non-deterministic behaviour, all together giving rise to an emergent ambient intelligence (not represented in Fig. 1).

Notice that when we talk here of Digital Twins in executable terms, we are actually talking of Digital Twin Instances: they all share the same digital model but have their own specific/unique digital thread and digital shadow. It is worth noting, discussing the future of the Digital Twin, that the embedding of self-learning AI will lead to a divergence of each instance also with respect
to the "digital model", since this will be affected by the AI and can change over time. Also, it is worth noting that a Digital Twin can be created as a cluster of Digital Twins; as an example, the Digital Twin of a city is actually a cluster comprising many digital twins, each referring to a specific aspect (like infrastructure) of the city.

To move from stage 3 to the following ones, a number of technologies, in part already used at stage 3, are required, and their expected evolution will influence the progress in terms of performance, functionalities and adoption of digital twins. Hence, in the following subsections we consider the leading technologies, clustered by the DT function they enable:
• Data Capture
• Data Management
• Intelligence
• Connectivity
• Visualisation
• Autonomy
• Swarm
• Integrity and Security
2.1 Data Capture

Data capture has so far been seen as external to the DT. However, in the last few years there has been growing attention to the coupling of DTs with IoT (https://www.networkworld.com/article/3280225/what-is-digital-twin-technology-and-why-itmatters.html). This was first seen as a way to mirror the IoT in the cyberspace, using the related DT as a mirror of the IoT (like a representation of the status of an actuator, or the mirror of the last data captured by a sensor…). Now, and in the future, the DT will take an active role in the capturing of data through control of the connected IoT (its physical counterpart). Smart IoT devices are likely to derive their smartness from their DT, rather than increasing their local smartness. This keeps the cost of the physical IoT low (the HW part, comprising the sensors and related hardware), leveraging the DT to provide the required smartness.

A second, crucial aspect is that in most cases being smart means being aware of the context, and this context awareness can be achieved easily through an interaction among DTs, each one mirroring an IoT device (or a cluster of IoT devices) in a given ambient. As previously noted, a DT may be created to represent the whole set of DTs in a given area. On the contrary, establishing a connection among the hundreds, if not more, of IoT devices in a given ambient can be exceedingly costly.

Consider, as an example, the new bridge in Genoa. It has been built with 240 sensors embedded in the structure that can measure a variety of parameters. Each of
this parameter makes sense once it becomes contextualised (as an example, the measure of stretching at a given point has to be evaluated in relation to the stretching measured at other points, to the load and to the temperature). This set of parameters is better analysed in the cyberspace, and a digital twin of the bridge has been created. The DT is capable both of analysing all data flowing in from the sensors and of instructing each sensor to detect (or to use a certain procedure to detect) a given parameter. The intelligence is external to the sensor, although the sensor has the flexibility to pick up data in different ways (e.g. tuning the sensitivity, the interval of detection…). In this sense the data capture is orchestrated by the DT. Notice that the "bridge" DT could also be constructed as a cluster of DTs, each mirroring a specific sensor on the bridge (although this is not the case in this example, where the architectural choice has been to have a single DT with all sensors feeding it with data). This is an evolution that we may expect in the coming years: a direct involvement of the DT in data capture.

There is more: data are generated both by supervising physical entities (sensors/IoT) and by analyzing data through spatial (different data streams) and temporal correlation. These are emerging data, and they can be very useful. These data can be requested by the Digital Twin from other DTs. Again, this is an evolution where the DT is becoming active in data capture. This is typical of cognitive digital twins (https://www.ibm.com/blogs/internet-of-things/iot-evolution-of-a-cognitive-digital-twin/) in the manufacturing environment (see Sect. 4.1).
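A minimal sketch of this orchestration idea is given below: a bridge DT ingests strain readings, contextualises each value against the other sensors and the temperature, and pushes a new sampling configuration back to a sensor when something looks anomalous. The class, thresholds and sensor identifiers are hypothetical; the Genoa bridge DT is only the inspiration, not the source of this code.

```python
from dataclasses import dataclass, field

@dataclass
class SensorConfig:
    sampling_interval_s: float = 60.0
    sensitivity: float = 1.0

@dataclass
class BridgeDigitalTwin:
    """Hypothetical DT that both receives sensor data and pushes capture settings back."""
    configs: dict = field(default_factory=dict)     # sensor_id -> SensorConfig
    readings: dict = field(default_factory=dict)    # sensor_id -> last strain value

    def ingest(self, sensor_id: str, strain: float, temperature: float):
        self.readings[sensor_id] = strain
        # Contextualisation: a strain value is judged against the other sensors and temperature.
        mean_strain = sum(self.readings.values()) / len(self.readings)
        if strain > 1.5 * mean_strain + 0.1 * temperature:
            self.request_denser_sampling(sensor_id)

    def request_denser_sampling(self, sensor_id: str):
        cfg = self.configs.setdefault(sensor_id, SensorConfig())
        cfg.sampling_interval_s = max(1.0, cfg.sampling_interval_s / 2)  # instruct the sensor

twin = BridgeDigitalTwin()
twin.ingest("S001", strain=0.8, temperature=15.0)
twin.ingest("S002", strain=2.5, temperature=15.0)
print(twin.configs)
```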
2.2 Data Management

A Digital Twin, being a mirror of an entity, contains data that model that entity – the full mirroring usually implies algorithms executing that data model. In addition, it harvests data continuously (at stage 3 and above), creating an instant image of the entity and a historical record of the entity. More advanced DTs can also harvest and keep a record of contextual data. As a matter of fact, advanced DTs (stage 4 and above) can be seen as micro databases and can embed features of data management that can be made visible, through APIs, to external applications.

An interesting feature that can be supported through data management by a DT is the selective disclosure of data or, more usually, of information about the data (the raw data are not disclosed to the external world; only some information derived from the set of data is disclosed). This might be used by personal digital twins to preserve privacy (see Sect. 3.3) while still disclosing information that might be relevant at a societal level or that can be used by an external application to deliver services to the DT owner.

DTs' data management can be made compliant with the Open Data Framework and can be supported by platforms (like Mindsphere in the Industry 4.0 environment and
FIWARE in the smart city environment and in perspective by the Gaia-X architecture). It is obvious that a shared data ontology is important to ensure ease of communications among DTs and between a DT and applications. This is also an area where the Gaia-X data spaces will play a pivotal role. We can expect that DTs in the coming years may be used as sensors and interrogated as such by other DTs and by applications.
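The selective-disclosure idea described above can be sketched as a DT that exposes only derived information through its API while keeping the raw data private. The following fragment is an assumption-laden illustration (a personal DT with heart-rate samples), not the API of any specific platform such as Mindsphere, FIWARE or Gaia-X.

```python
from statistics import mean

class PersonalDigitalTwin:
    """Hypothetical DT exposing derived information while keeping raw data private."""
    def __init__(self):
        self._heart_rate_samples = []      # raw data, never exposed directly

    def ingest(self, heart_rate: int):
        self._heart_rate_samples.append(heart_rate)

    # Selective disclosure: external applications only see an aggregated indicator.
    def wellbeing_indicator(self) -> str:
        if not self._heart_rate_samples:
            return "unknown"
        return "elevated" if mean(self._heart_rate_samples[-10:]) > 100 else "normal"

twin = PersonalDigitalTwin()
for hr in (72, 75, 71, 110, 115, 118):
    twin.ingest(hr)
print(twin.wellbeing_indicator())   # derived information only; raw samples stay internal
```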
2.3 Intelligence

As DTs progress towards stage 4 and beyond, they are bound to become more and more autonomous and to be tasked with taking decisions. In turn, this requires some sort of intelligence. The problem with this – inevitable – evolution is that an intelligent/autonomous DT is no longer, strictly speaking, a DT, since it no longer simply mirrors its physical counterpart. On the other hand, as the concept of DT evolves, we are forced to abandon the original concept and acknowledge that a DT is becoming more than a mirror. However, it is important to understand that once a DT has reached stage 3, it starts to redefine the meaning of the physical entity: this entity is no longer autonomous; rather, it is the "physical entity"–DT pair that rises to the level of entity, and it is this composed entity that remains autonomous. From that stage on, we have to acknowledge that the physical entity cannot be considered independently of its DT. When we discuss the intelligence of the DT, we have to acknowledge that this intelligence is an augmentation of the physical entity's intelligence.

At least in the coming years, the "intelligence" of a DT will be based on temporal learning, i.e. learning from the evolution of the physical entity over time. For quite some time, at least until stage 5 is reached, it will not be possible to acquire a spatial intelligence, in the sense of deriving intelligence from the analyses of multiple data streams. This spatial intelligence may reside in the physical entity (the physical entity may be required to be context aware, and thus to possess some sort of spatial intelligence).

This expected evolution of DT intelligence, including its increased awareness of the environment (the environment of a DT is not the same as the environment of the physical entity: the latter is limited by geography/proximity, while the former is defined by the reach of its connectivity), is in line with the evolution of AI, which in the coming years will more and more see the emergence of a federated AI leveraging multiple, massively distributed AIs. At this point the DT intelligence can progress from a temporal intelligence to a spatial intelligence, based not on data (basically a DT has visibility only of its internal data) but on the federated intelligence shared with other DTs (notice how this compares with the neuron, which has a temporal "intelligence" and shares a contextual – spatial – intelligence through the connectivity network binding it to thousands of other neurons).
2.4 Connectivity

A DT may reside in the physical entity, although in general it is located outside of it, mostly in a cloud. In the case of a Personal Digital Twin, the smartphone, or another personal device, may host the DT (although most likely a copy will also exist in the cloud). Ideally, the communication between the physical entity and the DT should have unlimited bandwidth and zero delay. In practice, unlimited bandwidth is seldom a requirement (most of the time the flow of data is limited, although in the future we may expect video streaming in high definition between some types of physical entities and their DTs), and a latency within a few seconds is usually acceptable. True, for some applications there may be a need for very low latency; however, in these situations it is most likely that the physical entity is equipped with safeguard mechanisms to accept a longer latency, as an exception, and even an interruption of the communication channel. Typical is the case of an autonomous vehicle, which can rely on the external infrastructure only up to a certain extent, i.e. it has to have the autonomous capability to manage any situation even if communication breaks down.

A possible architectural solution to ensure seamless communication is to have a duplicate of the DT embedded in the physical entity (this is not always feasible, e.g. for a plain vanilla IoT device, but it could be feasible for an autonomous vehicle, like a drone or a car), and it is this embedded DT that communicates with its copy in the cloud. The intra-DT communication may be broken, but the one between the physical entity and the DT is always on. This is particularly important once stage 4 is reached, since part of the functionality depends on the DT (of course, in some cases one could accept a functional degradation…). Notice also that this envisaged architecture, with a duplication of the DT in the entity and in the cloud, is different from the idea of a "backup" in the cloud (as previously proposed for personal digital twins embedded in a smartphone): that was a backup copy, just in case; this is a single duplicated DT that is normally in synch and whose master is the one outside of the entity, with the one inside ready to take over if the communication breaks down.

The future pervasiveness of 5G with edge cloud services will increase both the reliability and the feasibility of a single duplicated DT with part residing in the edge cloud (and moved from one edge to another as the physical entity moves around). In a few more years, the advent of 6G will make this kind of distributed solution even more practical, and at that stage an entity's DT may be in charge of the communication management. In this view we are assuming a DT at stage 4 or above, since some of the functionality is taken over by the DT (in this case the communication functionality).
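The duplicated-DT arrangement just described can be sketched as follows: a single twin kept in two copies, with the cloud copy acting as master while the link is up and the embedded copy taking over, and later re-synchronising, when communication breaks down. Class and field names are illustrative assumptions, not part of any specific platform.

```python
class DuplicatedTwin:
    """Sketch of a single DT duplicated between the entity (embedded) and the cloud (master)."""
    def __init__(self):
        self.master_state = {}      # cloud-side copy (master while the link is up)
        self.embedded_state = {}    # on-board copy (takes over when the link is down)
        self.link_up = True
        self.pending = []           # updates produced while disconnected

    def update(self, key, value):
        self.embedded_state[key] = value
        if self.link_up:
            self.master_state[key] = value          # normal operation: both copies in synch
        else:
            self.pending.append((key, value))       # embedded copy acts autonomously

    def on_link_restored(self):
        self.link_up = True
        for key, value in self.pending:             # centrifugal re-synchronisation
            self.master_state[key] = value
        self.pending.clear()

twin = DuplicatedTwin()
twin.update("speed", 42)
twin.link_up = False                # communication breaks down
twin.update("speed", 17)            # embedded DT keeps the entity operational
twin.on_link_restored()
print(twin.master_state)            # {'speed': 17}
```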
2.5 Visualisation

Since the DT represents an entity in its various aspects, the DT can also be used to visualise its physical entity. The interest lies in the fact that the physical entity might be "difficult" to see (it requires co-presence) and in the possibility of seeing aspects of that entity that are usually invisible (like the rotational speed, pressure and temperature in one stage of a turbine). This capability is exploited today in several industries, most commonly in the testing phases and when maintenance/repair is needed. The person working on the physical entity, using a screen or ideally an AR device, can directly see those parts of the entity, and the related information, that are of interest. This is possible through the DT. At CIM (Competence Industry Manufacturing Centre), in Turin, Italy, an Immersive Cave17, the largest of its kind in Italy, lets on-site technicians and researchers access and interact, virtually via AR and VR, with equipment and products of several companies through their associated Digital Twins. In the coming years visualisation capabilities will be a reason to apply DT technology to a variety of entities, including those in the healthcare domain (see Sect. 3.4). In this area a new type of Digital Twin, the Deep Twin, is now being trialled to support the visualisation (and simulation, see Sect. 3.4) of the human body and its physiological processes. We can expect that as AR (and VR for remote operation) devices become better and more widespread, visualisation through DTs will also spread to the mass market. As an example, smart cities are likely to use DTs as a bridge between the city and the citizens, providing them with increased awareness of what is going on and enabling a view of the city and its workings that was not possible before. The visualisation can make use of the whole DT model of the physical entity or be restricted to just a few parts of it; it can represent the current status of the physical entity, a past status or even the evolution from the past to the present and, using simulation, could display possible future states. We feel that increased visualisation capabilities will be a driver to foster the use of Digital Twins in more and more areas.

17 https://www.innovationpost.it/2021/07/19/linee-produttive-digitalizzate-e-connesse-e-serviziper-le-imprese-lecosistema-innovativo-del-competence-center-cim-4-0-di-torino/
2.6 Autonomy

At stage 4 and beyond DTs demonstrate various degrees of autonomy. We should keep in mind, as we observed for the increased intelligence of a DT, that this autonomy has to remain connected to the physical entity, in the sense that the DT and the physical entity together create an entity that has autonomous characteristics both in its physical and in its digital part. If this were not true, that is, if the digital part were independent and not in relation with the physical part, we would no longer have a DT but an autonomous agent that may represent, to a certain degree, some of the characteristics of a physical entity but that is not its Digital Twin. Autonomy is an important characteristic for the evolution of Digital Twins since it provides the possibility to augment the physical entity by accessing and leveraging the cyberspace. Whilst the physical entity is necessarily constrained by its physical location and range, its digital counterpart operates in an environment where everything is potentially co-located, so whilst the physical environment has a boundary, the digital one has only authorisation boundaries (including those enforced by the architectural framework and the execution platform), no physical boundary. A digital twin can reach any other digital twin (or information/data/services) as long as it has permission to do so. It is also not limited by computational performance (in principle, of course, but we know that scaling of performance in the digital space is much easier than in the physical space), so it can process any volume of data and entertain communications with any number of correspondents. The processing and communication can be managed autonomously by the DT, and the issue is how the result of this activity can be reported to (influence) the physical entity. This influence is crucial to keep the DT meaningful. A lot of research is required into this post-synchronisation between the evolved DT and the physical entity. Whilst in the past the main issue was to ensure the synchronisation from the physical entity to the DT (centripetal synch), in the future the challenge will be to ensure the synchronisation from the DT back to the physical entity (centrifugal synch). Out of these synchronisations we get an augmented entity. This augmentation is one of the key characteristics of the DT in the coming years, and it results from the creation of a symbiotic entity, what is being called a Digital Reality by the IEEE DRI18 group.

18 https://digitalreality.ieee.org
2.7 Swarm

We can expect in the coming years an ever-increasing pervasiveness of DTs, both associated with different types of entities (products, services, data, organisations, processes, …) and with instances of products (like all cars of a given model). This huge landscape of DTs may operate in a way where each DT is independent of all the others (although a DT may exchange information with other DTs) or, in some cases, a number of DTs can operate as a single super-entity. This super-entity will have a behavior that is the result of individual behaviors, as in a city the flow of people may assume specific characteristics if some trigger occurs (like a stampede following an explosion: the behavior of each single person is impossible to predict, but all together they show a behavior that is typical of that situation and that can be predicted). This is what is called a "swarm". We expect that as the number of DTs grows and as they become aware of their context, their behaviors will be mutually influenced and this will give rise to swarms. This swarm behavior typically emerges through simulation and AI-based applications when there is a potential "learning" by the swarm. A corollary is that in the coming years we will start to consider DT swarms and leverage them. The crucial point is the awareness of each single DT and the existence of a shared framework among the DTs allowing the formation of the swarm (the behavior of a single DT may appear random, but since each behavior is constrained by a common framework, be it an ethical framework, a regulatory framework or even an optimization framework, the overall behavior becomes predictable and self-orchestrated).
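A toy sketch of the point made above: individually unpredictable DT actions, once bounded by a shared framework, produce an aggregate behavior that is narrowly predictable. The clipping rule standing in for the "common framework" is purely illustrative.

import random

FRAMEWORK_LIMIT = 1.0  # shared constraint every DT must respect (e.g. a regulatory cap)

def individual_step() -> float:
    """Each DT behaves unpredictably on its own..."""
    return random.uniform(-5.0, 5.0)

def constrained_step() -> float:
    """...but the shared framework bounds every individual action."""
    return max(-FRAMEWORK_LIMIT, min(FRAMEWORK_LIMIT, individual_step()))

def swarm_behaviour(n_twins: int = 10_000) -> float:
    # The aggregate of many constrained, independent actions is narrowly predictable,
    # even though no single DT's action can be predicted.
    return sum(constrained_step() for _ in range(n_twins)) / n_twins

print(swarm_behaviour())  # close to 0, with small spread, thanks to the common framework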
2.8 Integrity and Security

The more DTs acquire autonomous functionalities, the more their authorization becomes an issue within the business transactions in which they participate. This issue becomes even more serious in the case of personal digital twins. In any business transaction both sides have to be considered, and both require trust. Trust needs a well-accepted set of security mechanisms to ensure the integrity of the DT. Due to the expected mobility of DTs across various platforms, especially when we are talking about personal digital twins, a security mechanism inherent to the DT, providing the corresponding integrity, is mandatory. Therefore, distributed ledger technologies might become an important enabler for the broader application of DTs.
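One possible shape for such a DT-inherent integrity mechanism is a hash-chained log of state changes that travels with the twin and can be re-verified on any hosting platform. The sketch below uses only the Python standard library and illustrates the idea; it is not a stand-in for any specific distributed ledger technology.

import hashlib
import json

def digest(payload: dict, previous_hash: str) -> str:
    blob = json.dumps({"prev": previous_hash, "data": payload}, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

class IntegrityLog:
    """Hash-chained record of DT state changes; tampering breaks the chain."""
    def __init__(self) -> None:
        self.entries = []

    def append(self, payload: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        self.entries.append({"data": payload, "prev": prev, "hash": digest(payload, prev)})

    def verify(self) -> bool:
        prev = "genesis"
        for entry in self.entries:
            if entry["prev"] != prev or entry["hash"] != digest(entry["data"], prev):
                return False
            prev = entry["hash"]
        return True

# Usage: any platform hosting the DT can re-verify the whole history.
log = IntegrityLog()
log.append({"owner": "entity-42", "firmware": "1.3"})
log.append({"firmware": "1.4"})
print(log.verify())                            # True
log.entries[0]["data"]["firmware"] = "9.9"     # simulated tampering
print(log.verify())                            # False: integrity violation detected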
3 The Future Is Already Here

Digital Twins have stepped out of their original field, manufacturing, to enter many other areas: in construction (Autodesk's Building Information Model, BIM [1]), automotive and logistics they are already well established. Other areas, like healthcare and insurance, are using them in a very limited way, and the high expectations of significant growth in this decade might be tempered by the many legal questions arising especially in these fields. Others, like retail, food and tourism, are testing the waters, but might get a strong push from the challenges in global supply chains. Finally, there are a few areas, like knowledge management, people management and education, that are still at a research stage.
In this section we describe the ultimate consequence of the current developments towards the digital twin of everything. Some of these aspects are admittedly of a very visionary character, though we feel the current research presented in the next section makes a convincing proposition (Fig. 2). In terms of broader fields of application, the evolution has been following an exponential curve, from the early applications in return-on-investment driven manufacturing (leveraging the availability of digital models created in the design phase with CAD: Computer Aided Design) to adoption in construction, automotive, aerospace, cities and more recently in agriculture, healthcare, energy, and more. The expansion continues and is now decisively moving towards the modelling of processes, immaterial goods (economy, finance, knowledge), and people. This latter is a natural extension of healthcare and of resource management, starting to involve education, particularly new continuous-education paradigms. The expansion has been taking place horizontally, i.e. by involving more application areas, as well as vertically, i.e. by extending the coverage of the physical entities, leveraging the increasing number and quality of sensors and the related increase in the quality and volume of data. We consider a number of areas, discussing them in terms of future evolution using current leading-edge experiences as a starting point (Fig. 3).
Fig. 2 Digital Twins are already covering a broad set of verticals as shown in the graphic. (Image credit: Qingling Qi)
Fig. 3 The digital twin of everything resulting in a full virtualization of our world
3.1 The Spatial Web19 – Or – Say Goodbye to the One Universe

The future development of the capabilities and deployment of the digital twin cannot be separated from the development of the spatial web, and the spatial web is becoming connected to the real environment [6, 7]. Real places are becoming addressable parts of the spatial web, i.e. smart spaces interconnecting with smart objects [9]. Real places, and therefore real objects related to these real places, can be tagged with virtual information and enriched by smart virtual objects. In turn, through pervasive sensing capability and the digital mapping of the physical world with spatial data, a digital twin of every object in space is created, and together these form a digital twin of the world around us. Spatial data include all data with spatial meaning, like size, shape, or position. This merging of reality with a digital information layer, making real places virtual and virtual spaces feel real, is often called Web 3.0 and is also called the "metaverse" [8]. It is reasonable to expect that the combination of our physical world with different digital information layers will result in an unlimited number of combinations of digital and physical versions (the fifth stage). Digital rights management and distributed ledger technologies are going to build the security framework for the different (uni-)verses. We expect that this concept will be of high importance for future economic development, as indicated by the strong research focus of the EU and European countries on the further development of mobile communication systems (from 5G to 6G) [10, 11].

19 https://www2.deloitte.com/us/en/insights/topics/digital-transformation/web-3-0-technologiesin-business.html
3.2 The Digital Twin of Everything

With progressing digitization in all domains of human activity, a full digital image (DT) of the world around us is becoming a reality. Interconnecting all digital twins and embedding them into a model of our physical world not only forms the spatial web, but also becomes the basis for a virtual environment serving as the future communication and control space.

3.2.1 Automotive

Digital twins are already being used by some car manufacturers both in production and in monitoring. Additionally, they are used for fleet monitoring and management. We expect their use to increase and expand throughout this decade. Their usage and evolution will be fostered by the shift towards autonomous and electric vehicles, where the management of vehicles through the cyberspace will be an essential element of multi-modal mobility concepts. The Automotive Hub in Turin, Italy [21], an advanced lab created by a consortium of industries, is using DTs to flank each piece of its equipment, as well as a Hub-DT to allow lab virtualization and remote use. The Mobi Initiative [22] is using DTs for modelling vehicles and as a way to foster the evolution towards autonomous vehicles. Looking ahead, one can imagine DTs of vehicles cooperating with one another as well as interacting with the personal Digital Twin of the driver (and/or passengers). An example of this interaction is the adaptation of the "driving style" of the autonomous vehicle to the needs/preferences of the passenger (a winding road with passengers who are prone to motion sickness may induce the vehicle to adopt a calmer driving style…).

3.2.2 Construction

The construction sector started thinking about a digital twin for buildings as early as the 1970s [2], though due to the long life-cycle of buildings and the complex value chain in building construction and operation it took nearly four decades until BIM (Building Information Modeling) became broadly applied. BIM is the starting point for creating and using Digital Twins in the construction area. Because of the value, which is especially high for commercial buildings, exponential growth in the usage of Digital Twins is expected in this decade. The digital twin of a building is valuable not only during design and construction but even more so for monitoring and remote access during operation. Siemens published their idea of the DT for building operation as a guiding concept for the future in 2010 [3]. The concept of the DT is also going to be deployed for existing buildings, even in the case of missing digital documentation.
Leica Geosystems provides corresponding laser scanning systems to generate a digital image of any building [4]; furthermore, their technology roadmaps also address the scanning of infrastructure hidden in the walls. This evolution is fostered on the one hand by the full digitalization of the design phase, the adoption of the digital cadastre, the increased adoption of the Digital Twin in smart cities and the massive deployment of IoT for building automation, in the building infrastructures and in the installed equipment. On the other hand, public administrations are driving the deployment to become more efficient in the construction approval process. In 2014 Singapore mandated BIM for building projects of more than 5000 square feet [5], and since January 1st, 2019, the Singapore municipality has used the Singapore Digital Twin for city planning and operation. Other municipalities around the world are following this path and we can expect that a significant portion of major municipalities in many countries will be using a Digital Twin to operate their city infrastructures. These Digital Twins will interact with each other to share "experiences" and come up with better operation strategies. Additionally, in a future perspective, we can imagine, as in the automotive domain, interaction between the building DT and the DTs of the people in the building, as described in Sect. 2.7.

3.2.3 Personal Digital Twins

The idea of extending the concept of the Digital Twin to mirror humans is just natural, and indeed we have already seen a few companies moving in this direction, particularly in the area of healthcare. As with any other entity, a Personal Digital Twin mirrors certain aspects of a person (a DT is never a perfect mirror; to be useful it has to mirror those parts, including behavior, that matter for a specific goal/application). Just as for the Digital Twin of a car one might decide not to model/mirror the color of the car, likewise for a Personal Digital Twin one might decide to model a certain number of physiological parameters (applications to healthcare or sport training), the set of relationships that person is engaged in (applications to social and business circles), the skills and knowledge acquired and owned by that person (applications to education and the job market), and so on. More recently the name Deep Twin20 has been associated with DTs modelling the internal organs of a person. The generic model of an organ, like the heart, is instantiated to create a specific Deep Twin using data derived from a variety of exams, allowing the creation of an exact model of that organ. During surgery this model is fed with data coming from the patient (like ECG, atrial/ventricular pressure and so on) and the surgeon can see its "workings" in real time through simulation based on the Deep Twin. In combination with additive manufacturing technologies, physical twins of the organ have in turn already been created as training objects for the real surgery [12].
20 https://www.sciencedirect.com/science/article/abs/pii/S0888327019308337
As with the data embedded in any DT, a PDT's data also have aspects of ownership and sharing control. Part of these aspects fall under the issue of privacy. Clearly physiological and health data may be more privacy sensitive than others, but in principle any handling of personal data raises issues of privacy. However, rather than looking at a PDT as a potential source of privacy concern, it is more useful to look at a PDT as a way to preserve and control privacy. In the coming years we can expect a PDT to have embedded processing and interaction capabilities that allow the control of privacy and the release of data under certain conditions, most of the time not releasing the actual data but rather answering a question based on the owned data. Of course, hosting the data for personal augmentation in any virtual or augmented environment will become a key element of the PDT. Today that is very much limited to actors, as a basis for non-real-time volumetric video productions [13]. Facebook has set up a research center in Pittsburgh to develop the database needed to create an avatar for virtual interactions indistinguishable from real interactions. That includes the data necessary to enrich the avatars with individual facial expressions and movement patterns; these lifelike avatars are called codec avatars [14]. ObEN intends to enrich the PDT by breathing life into a new generation of digital humans, with fully customized voice, appearance, and creative soul [15]. It is planned as a first step towards an AI-enriched avatar able, at this first stage, to represent humans in virtual spaces for low-level activities. We can hardly wait to see what kind of development this is going to trigger.

3.2.4 Healthcare

In the healthcare space digital twins are used to mirror hospitals as infrastructures on the one side and patients on the other side, fitting together for an optimum of treatment and efficiency. In the healthcare infrastructure they are supporting the digital transformation [16], enabling the monitoring of the medical infrastructure, including buildings, devices, care facilities and processes. The data "owned" by the DT can be processed locally, increasingly making use of AI, and part of the information generated can be shared and clustered to give rise to an emerging distributed intelligence. More recently they have been considered to support epidemic control [17]. In this case they can become a gateway between the corresponding person and society. On one side they help provide awareness of risk, by receiving data from the ambient (usually from a healthcare institution) and matching them against the situation of the corresponding person; on the other side they release anonymized information on the person's level of risk (of being contagious) towards the ambient (a healthcare organization that aggregates the data).
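Both the privacy-preserving PDT described in Sect. 3.2.3 and the epidemic-control gateway above follow the same pattern: the twin answers a pre-authorised question with a derived result instead of releasing the underlying data. The sketch below illustrates that pattern; the question names, data fields and policy rules are invented for the example.

from dataclasses import dataclass

@dataclass
class PersonalDigitalTwin:
    """Toy PDT that never releases raw data, only answers to pre-approved questions."""
    private_data: dict       # raw personal data, never exposed directly
    allowed_questions: set   # questions the owner has consented to answer

    def answer(self, question: str) -> bool:
        if question not in self.allowed_questions:
            raise PermissionError(f"question not authorised by the owner: {question!r}")
        # Hypothetical policy: each question maps to a derived, non-identifying answer.
        if question == "is_over_18":
            return self.private_data["age"] >= 18
        if question == "low_contagion_risk":
            return self.private_data["days_since_negative_test"] <= 2
        raise ValueError("no rule for this question")

# Usage: the ambient service learns only a yes/no, never the underlying values.
pdt = PersonalDigitalTwin(
    private_data={"age": 34, "days_since_negative_test": 1},
    allowed_questions={"is_over_18", "low_contagion_risk"},
)
print(pdt.answer("low_contagion_risk"))   # True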
3.2.5 Retail

In retail we will see a multidimensional development of digital twin deployment, concerning the retailer, the customer and the goods. Goods will be represented by their digital twins to give access to the full spectrum of product characteristics. That means the digital twin of a product can be tested in relation to the digital twin of the deployment environment: e.g. a digital twin of a dress can be tried out by the digital twin of the customer, or the digital twin of a component for a technical system can be tested against the digital twin of the targeted system. As a consequence, fewer potential sources of error will substantially improve the efficiency of the digital retail process. Further, shop assistants and the customers themselves are going to be represented by their digital twins for interaction in the digital store, thereby building a new interface for the online shop, which gives a completely new online shopping experience. Later we will see that the digital product, without a realization in the physical world, will also become an essential element of shopping for the future cyber society, and technologies as discussed in Chap. 2 are going to ensure a unique and individual offering of consumer goods [18]. There is a strong drive to create an effective interaction between the customer DT and the object DT in the apparel ecommerce space. Retailers are challenged today by a growing number of apparel items being returned by customers who buy several sizes of a dress, try them at home and return the ones that don't fit. This generates huge costs. Consider that some 30%21 of clothes bought online are returned (compare this to the 8% returned when bought in a brick-and-mortar shop). The adoption of AR/VR and Digital Twins can help in reducing this return rate.

3.2.6 Smart Cities

City management in the future cannot be imagined without the digital twin. The complexity of all relevant processes within a city and the control of all safety and security related aspects will require a digital twin framework, especially in the tremendously growing megacities of Asia. The introduction of the Building Information Model22 in Singapore as a mandatory tool to relieve the public administration of basic checks on construction applications has already been mentioned [5]. Munich has set up the Urban Data Platform as a central data hub for the digital city twin. The target is to digitally transform all administrative processes, including a simulation of the impact of planned modifications to the city infrastructure. A complete 3D city model is planned, including surface pattern information at a high level of detail obtained from biannual drone scanning [19]. The model is planned to be enriched by real-time sensor, geo and modelling data (e.g. traffic modelling) to support an increasing number of smart city solutions like automated traffic concepts.
21 https://www.invespcro.com/blog/ecommerce-product-return-rate-statistics/
22 https://www.autodesk.com/solutions/bim
The EU is going even one step further. With the Destination Earth23 initiative, a very high precision digital model of the Earth is targeted, to monitor and simulate human impact on the planet [20]. This digital twin is expected to provide background for any political decision with a potential impact on the Earth as a whole.

23 https://digital-strategy.ec.europa.eu/en/policies/destination-earth
3.3 A Holistic Perception of a Mixed Reality as the Future Web Interface

Today's human interaction with the digital web does not unleash the full potential of human senses and handling capabilities. While humans are used to manipulating, walking, and speaking, in the digital web they are limited to typing, selecting, and clicking. Intuitive behavior is limited by today's web technology. As an example, data storage and retrieval would be much easier with support for humans' photographic memory, as provided by conventional card-index cabinets or bookshelves. Virtual reality and augmented reality are going to overcome these limitations by diving into a mixed reality that offers an interface to all the natural control mechanisms of human beings. For sure, this interaction within the mixed reality space requires a solid, unobtrusive HMI, which from the current perspective is the biggest challenge for user acceptance. Despite Facebook's launch of Workrooms in 2021 [23], a virtual meeting space via Oculus, Facebook's XR chief scientist Michael Abrash stated during Oculus Connect in 2019 that he does not expect a quantum leap for the mixed reality HMI within 10 years [24]. The main challenge lies in the sophisticated human sensory system, where different contextual cues validate each other and, if they do not fit together, cause sickness and in consequence a rejection of this kind of technology. Linden Lab's Second Life [25] could be seen as the prototype for this kind of new web interface, providing a full virtual environment, though the missing link for broad acceptance after nearly 20 years of operation is, once again, a convenient HMI.
3.4 Smart Assets – Establishing a Digital Economy

So far, we have more or less seen the digital twin as a representative of a real object in the digital environment, as something that supports different stages over the life cycle of a product and is useful in reality, as indicated by the terms digital shadow or digital thread. The increasing interaction of humans within the mixed reality, anchored in the spatial web and in the future indistinguishable from real interaction, is going to develop a cyber-economy of its own. Within the cyber-economy digital assets will gain value without ever being materialized in the real world. The raw model is known from the cyber gaming community; yet while gaming is a passion for a limited group of people, we postulate that cyber-space communication and interaction will gain a similarly broad importance for our growth-driven economy as telephony did. Corresponding to the development of social behavior in virtual reality, derived from typical behavior in the real social environment, digital products will be developed to become smart digital assets. While Louis Vuitton was bridging reality with the cyber game "League of Legends" by offering a corresponding creation in real and virtual form for the game,24 a Dutch company, "The Fabricant" [18], sold a unique digital dress, protected by a blockchain, at a price of 8600€ in 2019. We have already said that a relevant part of life, especially of business interactions, is going to take place in the virtual space. Instead of physical meetings, accompanied by expensive trips and representative meeting rooms, personal avatars are going to meet in virtual meeting environments. Similarly to the real environment, digital furniture and unique digital paintings are expected to become a status symbol for the host, and a high-quality digital suit will be a matter of course in important meetings. These digital assets are going to be traded similarly to their real counterparts. The uptake of NFTs (which might be a fad; we will have to see in the coming years) clearly points to an increased value of digital assets.

24 https://eu.louisvuitton.com/eng-e1/magazine/articles/louis-vuitton-x-league-of-legends#
3.5 Augmentation in the Real Environment As mentioned before the digital information layer will differ between different context groups defining different verses. Correspondingly the visualization of smart assets or the augmentation of the environment will depend on users’ specific rights or needs. What is meant can become clear by the example of a traffic sign, respectively the DT of the traffic sign. Traffic signs do not have a universal meaning, but depend on the type of vehicle addressed. As an example a traffic sign pertaining to trucks only, does not need to be augmented in passenger cars; in case of autonomous cars, there is no need for any augmentation of traffic signs at all. In view of visualization technologies for AR, the technological challenges for the HMI are similar as in the case of VR, except the technology can be embedded into a system like cars. An impression of such kind of blurring boundaries between digital content and real space was given by Nintendo’s Pokémon Go on smart phones in 2016 [26]; the digital content is expected to become photorealistic and positioning of objects no longer related to a fixed network address but instead to GPS data in future (Fig. 4).
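A minimal sketch of the context-dependent augmentation just described: the traffic-sign DT carries the vehicle classes it addresses, and the augmentation layer filters what each viewer actually gets rendered. The class names and the filtering rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TrafficSignTwin:
    """DT of a traffic sign; the augmentation layer decides who actually sees it."""
    sign_id: str
    applies_to: set          # vehicle classes the sign addresses, e.g. {"truck"}

def augmentation_layer(signs: list, vehicle_class: str, autonomous: bool) -> list:
    # Autonomous vehicles read the signs' DTs directly, so no visual augmentation is needed.
    if autonomous:
        return []
    # Manually driven vehicles only see signs relevant to their class.
    return [s for s in signs if vehicle_class in s.applies_to]

signs = [
    TrafficSignTwin("no-overtaking-trucks", {"truck"}),
    TrafficSignTwin("speed-limit-80", {"truck", "car"}),
]
print([s.sign_id for s in augmentation_layer(signs, "car", autonomous=False)])
# ['speed-limit-80'] -- the truck-only sign is never rendered for a passenger car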
Fig. 4 A world of blurring boundaries between real and virtual experience
4 Future Applications of Digital Twins

In this section we explore how the evolution of Digital Twins opens up new and diverse areas of application and, in doing so, will change our landscape, bringing us into a Digital Reality dimension. Part of this content and these ideas were stimulated by the ongoing work of the Digital Reality Initiative25 at the IEEE Future Directions Committee. Specifically, we consider the rise of Cognitive Digital Twins, the application of Digital Twins in education (both in high schools and universities and in continuous education), the flanking of people by Digital Twins to augment humans and to empower their avatars, the use of Digital Twins as a bridge between humans and machines to improve cooperative working and, finally, the self-creation of digital twins giving rise to meta-twins (modelling complex systems) and swarms (modelling loosely connected entities).
4.1 Cognitive Digital Twins – Knowledge Management

The concept of Cognitive Digital Twins was first used by IBM26 in 2018 in relation to Digital Twins in the manufacturing area with the ability to learn from the accrued data (machine learning in the small) and, basically at the same time, by the Digital Reality Initiative27 applying Digital Twins to the cognitive capabilities of humans as a way to capture their knowledge base (and the way they acquire and expand that knowledge base). In this decade we can expect a significant "adoption" of intelligence in the Digital Twin. That will surely be required to support autonomy and to make sense of the context. Such "outward-looking" intelligence may also be applied internally, "inward-looking", to become aware of the meaning of the accrued data.

Suppose that I can create my Personal Digital Twin (see Sect. 3.2.3) and embed into it the knowledge that I have accrued (my CV might be a starting point…). As I gain experience in some areas my PDT will update my knowledge base, and likewise if I follow a course or attend a conference… Actually, I could end up using my CDT to create an updated CV once I need one! If my PDT has my knowledge it could be used as an avatar in some (specific and well-defined) interactions, in practice multiplying myself. This is not science fiction. It has already been done by UBS, which with the help of IBM has created a CDT of Daniel Kalt.28 The CDT is managed by Watson, the IBM AI system, and it is able to impersonate Daniel in videoconferences with UBS clients. It is so good an impersonator that his (its) clients feel they are interacting with the real Daniel.

CDTs will not be restricted to mirroring people's knowledge. A company may create its own CDT that in turn can aggregate the knowledge of the CDTs of the company's employees. The company CDT will have the knowledge of the processes making up the company, of the relations with suppliers and with resellers… It will have the historical knowledge of what was done, what worked and what failed… We can expect a significant business to emerge in this area as an extension of Enterprise Resource Management. Knowledge is an ever more important resource and CDTs can be a fitting technology for managing it. Imagine a company considering opening a new market or developing a new product:
• what kind of competences would be required?
• what is the leading-edge level for those competences existing in the world (and where are they)?
• what is the internal level of competences and where are they located?
• how could the gap in competences be filled (training, new hiring, getting consultants, outsourcing a task…)?
All of these are crucial questions for a company, and the CDT approach can provide timely and accurate answers (a minimal sketch of such a competence-gap analysis is given at the end of this section). We can expect CDTs to become more and more performant and ubiquitous. Companies may ask prospective hires to interact with their CDT, and there will be specific contractual stipulations on the company's right to use the CDT, as well as on who is going to own the additional knowledge that my CDT will be acquiring by working in that company. Clearly it will gain, by mirroring, the knowledge that I will be gaining through experience on the job in that company, but it might also gain far broader knowledge by interacting with the other CDTs of people working in that company and potentially with the CDTs of clients/suppliers of that company.

25 https://digitalreality.ieee.org
26 https://www.ibm.com/blogs/internet-of-things/iot-evolution-of-a-cognitive-digital-twin/
27 https://cmte.ieee.org/futuredirections/2019/05/16/applying-cognitive-digital-twins-toprofessional-education/
28 https://www.ubs.com/microsites/news-for-banks/en/global-trends/2018/can-one-be-availablefor-clients-24-7.html
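As a very rough sketch of the competence-gap questions above, the snippet below models a CDT's knowledge base as a set of skill-proficiency pairs and compares the aggregated company CDT against the requirements of a new product. All names, the 0-5 proficiency scale and the aggregation rule are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class CognitiveDigitalTwin:
    """Toy CDT: a knowledge base of skills with a 0-5 proficiency level."""
    owner: str
    skills: dict = field(default_factory=dict)

    def learn(self, skill: str, level: int) -> None:
        # Experience, courses, conferences... only ever raise the recorded level.
        self.skills[skill] = max(level, self.skills.get(skill, 0))

def company_cdt(employee_cdts: list) -> dict:
    """Aggregate employee CDTs: the company knows a skill at its best internal level."""
    aggregate = {}
    for cdt in employee_cdts:
        for skill, level in cdt.skills.items():
            aggregate[skill] = max(level, aggregate.get(skill, 0))
    return aggregate

def competence_gap(required: dict, available: dict) -> dict:
    """Which competences are missing or below the required level?"""
    return {s: lvl for s, lvl in required.items() if available.get(s, 0) < lvl}

alice = CognitiveDigitalTwin("alice")
alice.learn("CAD", 4)
alice.learn("simulation", 2)
bob = CognitiveDigitalTwin("bob")
bob.learn("simulation", 5)
needed = {"CAD": 3, "simulation": 4, "battery chemistry": 3}
print(competence_gap(needed, company_cdt([alice, bob])))
# {'battery chemistry': 3} -> fill by training, hiring, consultants or outsourcing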
4.2 Education

A direct extension of CDTs can be found in the education space. So far this is still a concept on the drawing board, although some examples are already available,29 but we foresee increased adoption, possibly driven by companies. As pointed out in Sect. 4.1, CDTs are likely to be adopted by companies as a way to manage their knowledge assets and to be more effective in leveraging them. The obvious next step is for companies to require the availability of a CDT to better exploit the knowledge of their employees and resources (the evolution of work, with more and more remote working, is also stimulating the adoption of CDTs). Those seeking employment, and even more those seeking to leverage their knowledge by offering it where it is needed, will be interested in creating their own CDT. This will create a strong motivation for students to create their CDT, growing it along their academic curricula. At that point we can expect new applications to be "invented" to further empower the CDT in its various aspects: creation, update and exploitation.

It is not going to stop there. The CDT is going to re-engineer the education process as well as the "goal" of education. Here we are not taking a stand one way or the other; we just want to point out the possible change. Just think about the way tools have changed the ways and goals of education in the past: the invention of writing changed the way of education and learning. More recently, how many people would know today how to calculate the square root of a number by hand? Yet many more people know today how to get the result, just by using the calculator on their smartphone. It has become more important to know what we can use to achieve a result than to know how to do it by ourselves alone. This has simply become inevitable. The world is so complex that in several (and a growing number of) situations we simply would not be able to solve our problems without a computer/software/application. We have to trust tools the same way we had to trust experts in the past. This complexity is spread everywhere; we have already experienced it in our education: we need to focus more and more on less and less, because that "less" is so complex that it requires all of our attention. Because of that we have to partner and share knowledge. We are used to doing that; what is new is that now we are starting to share knowledge with machines, with software. The growth of AI is shifting more and more knowledge to the machine side; we will need to share, and gather, knowledge with machines.
29 https://dl.acm.org/doi/abs/10.1145/3404983.3410007
CDTs may become an essential knowledge gateway between ourselves and machines: we need our CDTs to translate machine knowledge into a form that we can digest, and machines will be better off interacting with our CDTs since it will be faster and more effective. As knowledge becomes more and more distributed among people and machines, CDTs will become essential, like the ink and paper of the past, and education will have to take this into account. Education will require helping students create an effective CDT and having them learn how to use it to grow, share and leverage knowledge.
4.3 Human Augmentation – Avatars Humans/CDTs/DTs

CDTs are likely to become a cognitive extension of ourselves, providing an augmentation of our capabilities. And it is not just CDTs: the Digital Twins of objects and of ambients, even the CDTs of companies and of a city, can provide an augmentation of our capabilities. Today we may choose to go to work for a specific company because it provides a much better working environment, with advanced tools that come in handy in our work. This is quite obvious if you are a researcher: wouldn't you go to a company that can offer you the best lab equipment and that is best connected to other research centers, so that you can leverage their assets to multiply your effectiveness? I am sure you would. The same goes if you are a salesperson, or an engineer, or whatever. You would look for, and prefer, a company and an environment that fosters your career and supports you. The same holds if you look at yourself as a citizen. Think about the services provided by smart city digital twins (see Sect. 3.2.6) and how a citizen could benefit from them, resulting in an augmentation of her "quality of life". Also consider how a Personal Digital Twin may become an extension of yourself: it can be instantiated multiple times, enabling your digital person to be present at the same time in many places, as well as multiplying, and augmenting, your value. Avatar technology is progressing rapidly (the whole area of synthetic reality will see amazing progress in this decade) and the tuple avatar-personal/cognitive digital twin will become an important player in social and business environments.
4.4 Cooperative Working As already mentioned, digital twins go hand in hand with Industry 4.0 since they provide a way to link entities (like robots, products, supply chains…) to the cyberspace and monitor, simulate and analyze, with the possibility of interacting with each single entity and influencing the evolution of the overall context.
By including Personal Digital Twins (PDTs) in the picture we can foresee a facilitation of cooperative working among machines (this embraces software/applications) and humans. In line with the thoughts outlined in the previous sub-sections, we expect a strong uptake of digital twins becoming part of, and enablers of, collaborative working. Whilst collaboration among DTs on the shop floor is already a reality, with robots' DTs talking to and coordinating activities with each other, the use of PDTs still lies in the future. We foresee the adoption of PDTs as a way to ease the interaction with machines:
• a PDT can customize the interaction with a specific machine based on that person's experience/knowledge (as an example, interactions can be mapped onto previous experience with different machines);
• a PDT can convert the required interactions into AR prompts;
• a PDT can use the ongoing interaction with the machine as accrued knowledge, fine-tuning subsequent interactions and reinforcing the knowledge acquisition;
• a PDT can simulate the effect of an interaction before it takes place, highlighting the result to the person for better situational awareness and as an aid to decision making.
Once this stage is reached, the next step is to leverage DT and, partially, PDT autonomy to establish a mesh network of cooperating (P)DTs. This requires context awareness and an intelligence emerging out of the meshed network that feeds back on each (P)DT. In other words, each (P)DT, by acting, changes the overall context, and the change of the perceived context by each (P)DT creates a different awareness and influences the following behavior. A general framework is established, i.e. a shared purpose and a shared understanding of constraints (such as reduced resource utilization, decreased emissions, increased production output…) and of their ranking (in most situations pursuing one goal may be detrimental to another, so an overall ranking/weighting is required; a toy example of such a weighting is sketched at the end of this section). In the case of PDTs a crucial factor is awareness of both the context (which is shared with all DTs) and the transparency, that is, the understanding of the reasons why certain actions have been or will be taken. This is important to ensure that responsibility and accountability remain in the hands of the humans. In most situations where overall monitoring is feasible (a single ownership domain), the whole set of (P)DTs may be clustered under a DT representing/mirroring the whole process. This DT may embed the framework and general principles to which each (P)DT shall refer. Notice that this is not a controller: autonomy is preserved within the defined framework. The adoption of PDTs in industry, as a way to smooth interactions with robots and tools (through their DTs), is leading to the operation of the whole workshop through a Digital Twin that "contains" PDTs and DTs. We call this a Hybrid Digital Twin, since it contains PDTs (that may be characterized by non-deterministic behavior in certain situations, sometimes conditioned by emotional factors) and DTs (whose behavior is deterministic).
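The weighting of competing goals mentioned above could, in the simplest case, look like the following sketch, where each candidate action is scored against the shared constraints and the framework's weights decide the trade-off. The actions, effect estimates and weights are all invented for illustration.

# Candidate actions a (P)DT could take, each with estimated effects on the shared goals.
candidate_actions = {
    "slow_down_line":   {"resource_use": -0.2, "emissions": -0.3, "output": -0.4},
    "reroute_jobs":     {"resource_use": -0.1, "emissions": -0.1, "output": +0.1},
    "boost_throughput": {"resource_use": +0.3, "emissions": +0.2, "output": +0.5},
}

# Shared framework: how the competing goals are ranked/weighted for this plant.
# Negative deltas are desirable for resource use and emissions, positive for output.
weights = {"resource_use": -1.0, "emissions": -2.0, "output": +1.5}

def score(effects: dict) -> float:
    return sum(weights[goal] * delta for goal, delta in effects.items())

best = max(candidate_actions, key=lambda action: score(candidate_actions[action]))
print(best, round(score(candidate_actions[best]), 2))
# 'reroute_jobs' scores highest (0.45) under these illustrative weights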
4.5 Self Creation of Digital Twins – Meta Twins, Swarms

The whole software landscape is evolving towards "low-code" paradigms, basically aiming at the possibility of creating applications and software that require very little coding activity by a human. The "purpose" of the code is defined in a natural-language-like modality (usually involving prompts and enquiries from a coding development system that has access to a library) and an automatic system takes care of the required coding activity. This trend will also apply to the creation of Digital Twins. In several areas, like manufacturing, the design phase is already fully supported by computer applications, and these are already taking care of generating a digital model of the entity being designed. Commercial platforms, like MindSphere and FIWARE, provide support for DTs at all stages in terms of data accrual, data hosting, API management, and so on. Hence the expectation that the number of entities with a flanking DT will rapidly increase over this decade, with both a high volume of DTs and a high density. This latter point is important since it puts entities with an associated DT in close proximity. We expect that the trend towards DT autonomy (which implies some form of context awareness) will lead to many DTs mutually influencing one another in a given ambient. These DTs will act as a swarm, where the individual behavior is influenced by the behavior of the others and in turn, all together, through this loosely synchronized behavior, they give rise to an intelligent, semi-orchestrated behavior. In turn, this semi-orchestrated behavior generates data and will evolve according to some self-generated model that creates a meta-twin. Meta-twins are thus a side effect of the autonomous interplay of several DTs; they will exist only in the cyberspace, but they will model the behavior of a physical space. As such they will be used for simulation and, in perspective, they might be used to enforce a framework.
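To give a flavour of what "low-code" creation of a DT could look like, the snippet below builds a twin class automatically from a declarative description of the entity's observables. The spec format and the make_twin helper are invented for illustration; real platforms such as MindSphere or FIWARE have their own asset and entity models.

from dataclasses import make_dataclass, field

def make_twin(spec: dict):
    """Generate a DT class from a declarative description of the entity's observables."""
    fields = [(name, typ, field(default=typ())) for name, typ in spec["observables"].items()]
    return make_dataclass(spec["name"] + "Twin", fields)

# Declarative "low-code" description, e.g. exported by a design tool.
pump_spec = {
    "name": "Pump",
    "observables": {"flow_rate": float, "temperature": float, "running": bool},
}

PumpTwin = make_twin(pump_spec)
twin = PumpTwin(flow_rate=12.5, temperature=41.0, running=True)
print(twin)   # PumpTwin(flow_rate=12.5, temperature=41.0, running=True)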
5 Open Issues

In addition to the issues pointed out in the previous sections, like the composability (clustering) of DTs to create a meta-DT, the distribution of functions between the DT and its physical entity (stage 4 and beyond), the data ontology necessary for a smooth interaction among and with DTs, and the several ethical and societal issues arising from applying CDTs and PDTs to people, we would like to point out in the following sections the aspects of accountability/trust and of the framework (platforms, federation, standards).
5.1 Accountability and Trust

Technological development, especially in the field of artificial intelligence, will drive the role of digital twins from being the data model of a physical entity to being an actor in a corresponding business environment. As examples: being part of an operational environment, the digital twin of a machine could be enabled to order repair and spare-part services by itself, and a human's digital twin might execute legal transactions on behalf of that person. These developments need to be embedded into the global legal system, where liability needs to be clearly defined and there should always be a real person taking responsibility for what is going on. Sure, financial damage can also be covered by a legal entity or an insurance, though in the case of personal injuries it is always a natural person who is held liable under public law. That means that for every digital twin a documented and traceable ownership becomes mandatory. Therefore, those digital twins mediating and acting in business relations need a strict legal framework for accountability and trust. Regulations currently under development by international (e.g. IEC) or national organizations (e.g. the German DIN) for artificial intelligence algorithms will become a matter of course for digital twins, and a classification with respect to potential impact will follow. For the trusted deployment of digital twins in business environments, any such classification needs a certification of the digital twin. In comparison to plain program code, where the algorithm can be separated from the data, AI is self-adapting through deployment, and that means, in turn, that any certification needs regular validation and approval by a third party like the German TUEV. The legal handling of cars is quite a good blueprint for the handling of the future DT: type approval refers to the manufacturer, while operational safety has to be ensured by the owner. The degree of autonomy of personal digital twins represents a substantial risk for the user and for the business counterpart. While some of the financial risks can be covered by an insurance, others will require strict control and therefore attention and approval of any transaction under the clear-ownership principle. That means that autonomous personal digital twins acting on behalf of their owner need to be aware that the owner is in a condition to bear the consequences of the legal transactions. The transfer of ownership of DTs is another interesting legal problem, relatively easy to handle if the DTs are part of a working infrastructure like a production plant. However, in case the personal digital twin is part of an inheritance, the heir needs to be aware of the meaning of the digital twin and its functionality for the legator. Some of the legal transactions covered by the personal digital twin might need to be continued, while others must not continue without approval by the heir. In any case that means that the personal digital twin needs to be aware of the status of the related natural person. Depending on the degree of autonomy and the classification of the DT, continuous monitoring of the owner by the DT, through a corresponding technological solution, might become necessary. The implementation of such an interrelation will also require long-term ethical discussions and corresponding regulations.
5.2 Platforms, Federation, Standards

There are already several platforms supporting Digital Twins, such as:
• MindSphere: a Siemens platform supporting industrial processes, with a specific focus on Industry 4.0. Digital Twins are supported from the design stage, through manufacturing and up to the operation phase. The platform will evolve over time, supporting DT autonomy and process/operation management in the cyberspace. It is becoming a de-facto standard in manufacturing.
• FIWARE: an EU-funded platform supported by the FIWARE Foundation. It is being used for Smart Cities. It supports data aggregation and vertical applications. Digital Twins can be hosted on the platform and/or access the platform data. In addition, DTs can interact with vertical applications. Their support is going to increase to include autonomous DTs.
• Gaia-X: a new architectural framework being defined in Europe and having a growing worldwide membership (over 850 members as of December 2022). It focuses on data management, through the definition of data spaces and rules to access and share data. Digital Twins are users of these data spaces and in addition can be included as data space owners. It is expected that in the coming years Gaia-X will become a standard for DTs in several application fields. There are working groups active in manufacturing, automotive, energy, healthcare and agriculture.
The issue here is how to interoperate among the different platforms (Gaia-X might provide a framework) while at the same time allowing for innovation and competition. In perspective, Digital Twins will each become a platform of their own, meaning that there will be millions of platforms! A global framework to ensure interoperability is essential. There are already standards for DTs (one chapter in this book discusses them), though none yet for CDTs, and more will come. However, in perspective, we feel that the evolution will call for agreed frameworks more than for a standard. DTs, in a way, as they become intelligent entities, will be like humans. We cannot have standards for humans; we have to accept individuality and diversity. However, we have developed accepted communication frameworks (languages) and rules of behavior (laws), even though they may differ from place to place.
6 Epilog

The evolution of Digital Twins, as presented in this chapter, is based on the evolution of several technologies; among these, we feel that the major impulse will be given by the evolution of:
• Artificial Intelligence
• Sensors and actuators
• Communication fabric (5G and beyond, low-power communication, mesh networks)
• AR and VR devices providing seamless connectivity to the cyberspace data
We have taken for granted that the evolution of these technologies will happen at a steady pace, or even at an accelerated one. Along with the technology evolution that is going to play a major role in the adoption and evolution of DTs in the business/industry area, we expect a "cultural" evolution, with an even greater blurring of the boundaries between life in the physical world and in the cyberspace (metaverse). This cultural evolution will stimulate the adoption of Personal Digital Twins as copies of ourselves. It might also lead (and a few signs are already present) to the creation of a digital "immortality" achieved through our Personal Digital Twin. The concept of the digital twin, together with the conclusions about its future evolution, resulting one day in a revolution of business and social life, can be perceived as a bundle of opportunities. Others might see these developments as a threat to humankind, for some it might be disturbing, and some might not believe in the potential of this amazing development. Sure, the limits of our universe will one day also limit technological development, but from our current perspective there is still a long way to go until we run into that saturation. We might also see limits in the development of artificial intelligence that would falsify Frank Tipler's omega point theory [27], also known as "the big cosmic computer", but there is no evidence to assume that technological development will come to an end any time soon. It must have been difficult for ancient people to imagine technologies enabling travel to the Moon; likewise, the final report on the 9/11 attacks criticized a lack of imagination. Imagination is an important driver for innovation, and this chapter was intended to provide that kind of inspiration.
References

1. http://www.laiserin.com/features/bim/autodesk_bim.pdf
2. Eastman, C., Fisher, D., Lafue, G., Lividini, J., Stoker, D., & Yessios, C. (1974). An outline of the building description system. Carnegie-Mellon University Research Report No. 50.
3. https://www.youtube.com/watch?v=gCuPx9shWT0
4. https://leica-geosystems.com/industries/building/commercial/operation-and-maintenance/as-built-measurement
5. https://academy.archistar.ai/singapores-bim-mandate-raises-questions-about-similar-measures-in-australia
6. https://www.verses.io/
7. https://wiredelta.com/how-the-spatial-web-is-defining-web-3-0/
8. https://www.spatialweb.net/
9. https://medium.com/swlh/an-introduction-to-the-spatial-web-bb8127f9ac45
10. https://www.bmbf.de/bmbf/shareddocs/bekanntmachungen/de/2021/09/2021-09-13-Bekanntmachung-6G.html
11. https://digital-strategy.ec.europa.eu/en/news/europe-launches-first-large-scale-6g-research-and-innovation-programme
12. https://www.tctmagazine.com/additive-manufacturing-3d-printing-news/surgeons-separate-conjoined-twins-with-help-of-3d-printed-model/
13. https://volucap.de/
14. https://tech.fb.com/codec-avatars-facebook-reality-labs/
15. https://oben.me/
16. https://www.i-scoop.eu/digital-twin-hospitals-partnership-smart-hospital/
17. https://www.youtube.com/watch?v=VwknqjbGWco
18. https://www.thefabricant.com/
19. https://muenchen.digital/twin/
20. https://digital-strategy.ec.europa.eu/en/policies/destination-earth
21. https://cim40.com/en/
22. https://dlt.mobi/
23. https://about.fb.com/news/2021/08/introducing-horizon-workrooms-remote-collaboration-reimagined/
24. https://www.youtube.com/watch?v=7YIGT13bdXw
25. https://secondlife.com/?lang=en-US
26. https://pokemongolive.com/en/
27. Tipler, F. J. (1994). The physics of immortality: Modern cosmology, God and the resurrection of the dead. Doubleday.

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento and serves as Senior Advisor of the Reply Group. He is a senior member of IEEE, where he co-chaired the Digital Reality Initiative fostering the Digital Transformation 2020–2022. He is a COMSOC Distinguished Lecturer and Past Chair of the New Initiative Committee. He has published over 200 papers in journals and magazines and 30 books/ebooks. He writes a daily blog, https://cmte.ieee.org/futuredirections/category/blog/, with commentary on innovation in various technology and market areas.
Michael Lipka received a diploma in communication engineering from the Technical University of Darmstadt in 1991 and a PhD in semiconductor technologies for RF devices from the University of Ulm in 1996. He held different management positions in the communication industry from 1991 to 2007 with Alcatel, Siemens, and NOKIA, focused on technology management for narrowband switching and mobile communication systems, and also served as a manager in after-sales service responsible for system acceptance by pilot customers, specifically Deutsche Telekom and Singapore Telecom. From 2007 to 2016 he was project manager for strategic long-term technology planning activities within Siemens Corporate Technology, managing "Pictures-of-the-Future" foresighting projects for different Siemens businesses and establishing this toolset as a global benchmark for corporate foresighting. In 2017 he took over the technology management of Huawei's European Research, and since 2022 he has been acting as senior manager for technology strategy of Huawei Germany.
Societal Impacts: Legal, Regulatory and Ethical Considerations for the Digital Twin Martin M. Zoltick and Jennifer B. Maisel
Abstract A myriad of laws, rules, and regulations are worthy of consideration for any new and innovative technology, and even more so for one as broad-ranging and comprehensive as the Digital Twin ecosystem. A technology like this embodies the contradiction of open versus proprietary, and all the hybrids in between, because it is in the early stages of an evolution that, in many respects, relies on a combination of existing technologies and innovations. From a legal standpoint, we consider intellectual property rights, including patent, copyright, and trade secret protection, and balancing those rights with the benefits and protections available under contract law. The wide applicability of the Digital Twin to various technologies and fields, such as healthcare, finance, education, aviation, power plants, nuclear reactors, and many more, gives rise to regulatory considerations and ethical concerns. The Digital Twin ecosystem, as applied in these areas and more, requires the collection, processing, generation, and transmission of data subject to regulatory requirements involving privacy and cybersecurity issues, as well as ethical concerns requiring careful consideration of potential bias, trustworthiness, and transparency in the technology used.

Keywords Bias · Compliance · Cybersecurity · Digital twins · Digital twins and data · Digital twin ecosystem · Ethics · Innovation · Intellectual property · Laws · Legal aspects · Liability · Patents · Privacy · Protection · Regulations · Regulatory requirements · Rights · Security · Transparency · Trustworthiness
M. M. Zoltick (*) · J. B. Maisel Member, Rothwell, Figg, Ernst, and Manbeck, P.C., Washington, DC, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 N. Crespi et al. (eds.), The Digital Twin, https://doi.org/10.1007/978-3-031-21343-4_37
1 Introduction1

Innovations challenge the boundaries of our legal system, particularly for intellectual property (IP) rights and the availability of different forms of protection for such innovations. Software is a prime example, and even today, legislators, lawmakers, judicial bodies, judges, and practitioners around the globe struggle with fitting software into the different forms of IP and other protection potentially applicable. Now consider the evolution and advancements made in the areas of virtual/augmented/mixed reality, artificial intelligence, machine/deep learning, internet of things, blockchain, biotechnology, big data and analytics, and quantum computing. These technological innovations present new and continuing challenges from legal, regulatory, and ethical perspectives – often, applying or interpreting laws and regulations drafted and enacted years and, in some cases, decades before such technologies were conceived, let alone developed and deployed.

Building and implementing these types of technologies typically involves the collection, processing, generation, and transmission of massive amounts of data, giving rise to cybersecurity and privacy concerns. Introducing aspects of artificial intelligence and cognitive computing further invokes ethical concerns requiring careful consideration of potential bias, trustworthiness, and transparency in the technology. As detailed in the preceding chapters, the Digital Twin requires a wide array of technologies and applies to a myriad of use cases. Simply put, the Digital Twin will be a technological innovation that will test the bounds of our legal system in many ways.

The Digital Twin presents unique challenges, from legal, regulatory, and ethical standpoints, because of the confluence of so many different technologies that likely will be applied across many different subject matter areas, in different geographic locations, and by different business entities, enterprises, and individuals. The key, based on the authors' many years of experience navigating these issues for clients developing, implementing, deploying, and/or using large-scale complex systems like a Digital Twin, is to develop and implement a strategic, stepwise approach – often tracking the system development lifecycle – that includes consideration of applicable legal, regulatory, and ethical issues. This approach includes: (1) assessing the availability and different forms of IP protection for the Digital Twin technology, (2) conducting IP due diligence search and review in connection with freedom to operate and IP clearance opinions, (3) assessing and negotiating necessary contract rights and establishing a licensing regime for the Digital Twin technology, (4) identifying and assessing compliance with applicable US and International government regulations, and (5) assessing the Digital Twin technology and, particularly, the data used and algorithms and models applied, for potential bias, trustworthiness, and transparency, and developing a mitigation strategy (Fig. 1).
1. The information provided herein is for general informational purposes only and does not, and is not intended to, constitute legal advice.
Fig. 1 Implementing an IP strategy
2 Assessing the Availability and Different Forms of IP Protection for the Digital Twin

The first step in assessing the availability and different forms of IP protection for Digital Twin technology is to understand what IP is, the basic forms of IP protection, and what aspects of the technology each is designed to cover. Understanding what IP is starts with the concept of "property." "Property" refers to tangible things or assets (e.g., real property, such as a house, and personal property, such as a car) with rights/interests owned by a person or entity. "Intellectual property" refers to creations of the mind for which a set of rights are recognized under the applicable laws. IP rights are considered intangible assets. The basic forms of IP protection include: (1) patent rights, (2) trade secret rights, (3) copyright rights, and (4) trademark rights.

It is important to conduct a critical review of all the different technologies that are part of the Digital Twin and make an informed determination, for each identified technology (e.g., models, algorithms, data sets, source code, processing, inputs, outputs, images, graphics, interfaces, functions, architectures, etc.), which IP category that technology falls within (Fig. 2). Start building an IP schedule that lists the different technologies identified and provides a corresponding indication of the IP rights intended to protect that technology. Set a schedule for periodic review of the current technologies of the Digital Twin and update the IP schedule as appropriate. This IP schedule can serve as a useful roadmap for ensuring that appropriate IP protection is secured and that informed decisions are made about what IP protection to pursue.
Fig. 2 Types of intellectual property
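As a purely illustrative sketch of the kind of IP schedule entry described above (the field names and example values are assumptions for illustration, not a prescribed format), a single record might tie each identified technology to the form(s) of IP protection intended for it and to supporting evidence:

from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class IPScheduleEntry:
    """One illustrative row of an IP schedule; field names are assumptions."""
    technology: str                 # the identified technology or component
    description: str                # short technical description
    ip_categories: List[str]        # e.g., ["patent", "trade secret"]
    protection_status: str          # e.g., "provisional filed", "held as trade secret"
    contributors: List[str]         # employees/contractors who developed it
    evidence_links: List[str]       # links to repos, specifications, lab notebooks
    last_reviewed: date             # date of the last periodic review

# Hypothetical example entry
entry = IPScheduleEntry(
    technology="Analytic engine for corrosion prediction",
    description="Model combining visual inspection imagery and 3D scan data",
    ip_categories=["patent", "trade secret"],
    protection_status="provisional application in preparation",
    contributors=["employee A", "contractor B"],
    evidence_links=["https://example.com/repos/corrosion-engine"],
    last_reviewed=date(2023, 1, 15),
)

In practice the schedule can live in whatever document or database the organization already maintains; what matters is that each technology is catalogued with the IP rights intended to protect it and is revisited on the periodic review cycle described above.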
As a predicate to this process, it is also important for the company to have in place IP policies and procedures to ensure that, for example, any employees of the company or independent contractors engaged by the company working on development of the Digital Twin technology are obligated to: (1) keep company information confidential, (2) disclose to the company all IP developed, (3) assign to the company all rights in that IP, and (4) not use that IP or any company proprietary information except for the benefit of the company. All individuals working on development of the Digital Twin technology, whether employees or independent contractors, should be subject to an agreement including these types of provisions. Confidentiality and non-disclosure agreements should be used with any third parties that are provided with access to the company's proprietary information, and all third parties that are involved in the development of the Digital Twin technology should have an agreement with the company that addresses confidentiality, use, ownership, and assignment of IP rights, as well as the related provisions typical in such agreements regarding representations and warranties, limits of liability, indemnification, and disputes.

Company IP policies and procedures should also include a disclosure process for facilitating the identification and disclosure of innovations to company management, and a defined process for determining in a timely manner whether and, if so, how the company will protect the innovations disclosed. Also important, particularly in today's development environment, is to have in place guidelines regarding the use of open source and third party software, how to handle adoption and use of standards, and how to deal with data that may, for example, include personally identifiable information or other protected information. Decisions will need to be made early in the development cycle regarding whether
and to what extent aspects of the Digital Twin technology under development will be maintained as proprietary or made open source, and to what extent creation or adoption of protocols and standards will be adhered to. These are particularly important considerations given that the Digital Twin ecosystem relies on a combination of existing technologies and new innovations which, in many if not most applications, will need to be integrated and tightly coupled for data transfer, communications, input/output, etc.2

The creation of protocols and standards for the Digital Twin ecosystem to enable plug and play integration for software development and application tools (e.g., Digital Twin simulators, modelers, and viewers), devices (e.g., IoT sensing devices and AR/VR/XR headsets), and digital objects (e.g., cars, planes, boats, condominiums, houses, offices, equipment, clothing, and artwork) has taken on an even greater significance and a broader application with the current hype around the metaverse. If the metaverse realizes its anticipated potential, the value proposition for the digital objects (and their connected non-fungible tokens (NFTs)) created as part of the Digital Twin ecosystem could be enormous. Indeed, in some cases, those digital objects may realize more value than the physical objects they virtually represent. And important to that is the ability to utilize those digital objects across different types of digital platforms. Establishing protocols and standards for the Digital Twin ecosystem will be critical to enable cross-platform use and plug and play integration, which will likely have a substantial influence on the overall value of the digital objects, as well as of the software development and application tools and devices used to create and use them. Establishing guidelines for the adoption and use of protocols and standards to enable this type of functionality is something that should be considered and decided.

One other related legal aspect to mention here is the notion of cross-licensing and standard essential patents (SEPs). The protocols and standards that necessarily will be part of the evolution of the Digital Twin ecosystem, and the technology that enables their adoption and use, will also necessarily lead to cross-licensing considerations, including the identification of SEPs and the licensing of SEPs under fair, reasonable, and non-discriminatory (FRAND) terms. This consideration should also be part of the IP policies and procedures for the organization (Fig. 3).
2. The importance of these considerations was highlighted in the current draft publication from the National Institute of Standards and Technology (NIST), NISTIR 8356, entitled "Considerations for Digital Twin Technology and Emerging Standards" (April 2021), p. 4 ("Whether or not these developments catalyze Digital Twin technology into widespread use may depend upon work in standards development. Currently most IoT systems, simulation and modeling software, and VR and AR systems exist in stovepipe proprietary systems. It is possible to combine them, but it takes significant work to integrate them. Much of the work in the emerging Digital Twin area is in the creation of protocols and standards to enable plug and play integration. The idea is to mix and match and be able to use any viewer with any Digital Twin simulator and modeler along with any sensing device. The idea is to be able to load any Digital Twin computer file into a Digital Twin system and have it function regardless of what is being modeled. These are lofty goals for the emerging Digital Twin community; their success in standards may largely determine the extent to which the technology is used.").
Fig. 3 Implementing an IP Strategy: Step 1 – Develop/implement IP policies and procedures
2.1 Patent Rights

Patent rights protect ideas or "inventions." Patents are property rights granted to inventors or, if assigned, to the assignee in exchange for public disclosure of the invention. The categories of patent eligible subject matter include a process, machine, manufacture, composition of matter, and improvements thereof.3 Laws of nature, physical phenomena, and abstract ideas are not patent eligible.4 Computerized systems and software-implemented methods may be patent eligible even if directed to an abstract idea5 if they integrate the abstract idea into a practical application. And, even if the abstract idea is not integrated into a practical application, computerized systems and software-implemented methods may still be patent eligible if they involve significantly more than just the abstract idea.6
3. See 35 U.S.C. § 101.
4. See Bilski v. Kappos, 561 U.S. 593, 611 (2010); and Diamond v. Chakrabarty, 447 U.S. 303, 309 (1980) ("The Court's precedents provide three specific exceptions to § 101's broad patent-eligibility principles: 'laws of nature, physical phenomena, and abstract ideas.'").
5. There are three categories of subject matter that are considered abstract ideas: mathematical concepts, certain methods of organizing human activity, and mental processes. Only concepts that fall into those groupings can be rejected as "abstract ideas."
6. The U.S. Patent and Trademark Office provides several examples of inventions and corresponding claims that are illustrative of the analysis that is made to determine subject matter eligibility. These specific examples provide good guidance, at least under the current legal standard as applied. See https://www.uspto.gov/sites/default/files/documents/101_examples_37to42_20190107.pdf
Patent rights represent a broader, more powerful form of protection than copyright or trade secret in several respects, which will be addressed in more detail below.

2.1.1 Patent Protection – Digital Twin Examples (From Oil and Gas Projects and Operations)

• Process
  – Advanced analytic engine for a Digital Twin system for predicting corrosion issues using visual and 3D data
• Machine
  – Mobile device with data collection and processing module for inspection using mixed reality (MR) for use with a Digital Twin system
• Article of Manufacture
  – Corrosion detection monitor configured for communication with a Digital Twin system
• Composition of Matter
  – Antistatic agent and surface cleaner
• Improvement
  – Improved analytic engine for a Digital Twin system for predicting corrosion issues using machine learning (ML)
• Design
  – New design for smart glasses adapted for use with a Digital Twin system on an offshore production platform

The different types of patents that should be considered relative to securing patent protection for the Digital Twin include: (1) provisional and non-provisional U.S. utility patents, (2) U.S. design patents, (3) Patent Cooperation Treaty (PCT) applications, and (4) ex-U.S. regional and national stage applications and issued patents.

There are basic requirements that must be met to obtain patent protection. We will address the requirements under U.S. law, but the requirements in many ex-U.S. countries are very similar. The U.S. Patent and Trademark Office grants patents
("Example 37: Relocation of Icons on a Graphical User Interface; Example 38: Simulating an Analog Audio Mixer; Example 39: Method for Training a Neural Network for Facial Detection; Example 40: Adaptive Monitoring of Network Traffic Data; Example 41: Cryptographic Communications; and Example 42: Method for Transmission of Notifications When Medical Records Are Updated."). The European Patent Office has also provided recent guidance on these issues. See https://www.epo.org/law-practice/case-law-appeals/communications/2021/20210310.html (EPO G 1/19 decision addressing a simulation invention for movement through a building.).
for inventions that are new7 and non-obvious,8 and that are described in an application with sufficient detail to enable others to practice the invention.9 The term of the patent is, generally, twenty (20) years from the filing date (Fig. 4).10

Patent protection provides the right to "exclude" others from making, using, selling, and offering for sale in the U.S., and importing into the U.S., the invention covered by the patent.11 This right to exclude is limited to the U.S. If patent rights are secured outside the U.S., then this right to exclude can be exercised outside the U.S. as well, subject to the specific laws in those countries where patent protection has been obtained.

As mentioned previously, patent protection is broader and more powerful than the protection provided by copyright or trade secret. One reason is that, unlike copyrights and trade secrets, patents protect against independent development and reverse engineering. In other words, neither independent development nor reverse engineering is a defense to patent infringement, whereas these are defenses to copyright infringement and trade secret misappropriation. Also, both copyright infringement and trade secret misappropriation require a showing that the accused had access to the copyrighted work or trade secret. Access is irrelevant for establishing patent infringement.

Fig. 4 U.S. Requirements for patentability
7. See 35 U.S.C. § 102.
8. See 35 U.S.C. § 103.
9. See 35 U.S.C. § 112.
10. See 35 U.S.C. § 154(a)(2).
11. See 35 U.S.C. § 271(a).
Autonomous digital agents that can generate new inventions without supervision or input from humans are currently testing the bounds of legal precedent concerning who, or what, may be considered an "inventor" of a patentable invention. The creators of the Artificial Inventor Project12 are actively probing those bounds, having filed patent applications around the world naming as the inventor an artificial intelligence creativity machine, DABUS, that generated the inventions claimed in the subject patent applications. The United States Patent and Trademark Office and a federal district court recently issued the first decisions13 in the U.S. refusing to allow the patent applications to proceed because no human inventor was named.
2.2 Trade Secret Rights

Trade secret rights protect valuable secrets and confidential information against misappropriation and against those who improperly derive such information. In contrast to patent rights, trade secret rights do not protect against reverse engineering or independent development. In the U.S., both state and federal law govern trade secret rights. State laws vary, including with respect to the definitions of what constitutes a trade secret and what constitutes misappropriation of a trade secret; most, but not all, states have adopted the Uniform Trade Secrets Act ("UTSA"), a model trade secret protection framework. At the federal level, the Defend Trade Secrets Act ("DTSA") provides a federal civil remedy for misappropriation of trade secrets, and, in addition to civil remedies, theft or misappropriation of trade secrets is a federal crime under the Economic Espionage Act. Because of the differing state laws, the company policies and procedures as applied to trade secret protection will need to be tailored to the law of the applicable jurisdiction(s) for the company.

Examples of trade secret information can include software and source code, algorithms, machine learning models and weights, formulas, patterns, compilations of information, data sets (raw, query, training, extracted), technical data, processes, know-how, system architecture, research and development information, technology, designs, drawings, engineering, hardware configuration information, customer information, inventions, unpublished patent applications, marketing data, business plans and strategies, financial information, supplier information, and many other types of information. While an analysis of state and federal law must be done as applied to the company's information and activities to determine whether information would be regarded as a trade secret, a trade secret may be anything that has economic value and provides an advantage in the marketplace. The company must take reasonable steps to maintain the information as a trade secret, otherwise legal protections over the information will be lost (Fig. 5).

12. See Artificial Inventor Project, available at https://artificialinventor.com/dabus/
13. Thaler v. Hirshfeld, 20-cv-903 (E.D. Va., Sept. 9, 2021); In re Application No. 16/524,350, Decision on Petition (Apr. 22, 2020), available at https://www.uspto.gov/sites/default/files/documents/16524350_22apr2020.pdf
Fig. 5 Trade secret protection
2.2.1 Trade Secret Protection – Digital Twin Examples (From Oil and Gas Projects and Operations)

• Source code
• Machine learning models
• 3D data
• Architecture for integration of Digital Twin data sources
Trade secrets (whether in electronic or paper form) should be systematically inventoried and documented in a manner that considers the value that the information confers on the company. This is an important part of the process to ensure that the company's trade secrets are properly identified and substantiated, and, if necessary, to prove trade secret status if there has been a misappropriation requiring litigation. Another part of documenting the company's trade secrets is to ensure that all documents in either paper or electronic form containing company trade secret information include a notice of confidentiality. An example of such a notice, to be stamped or otherwise applied to the header, footer, or title page of all company trade secret documents, is as follows:

CONFIDENTIAL AND TRADE SECRET
This document contains highly confidential information, including trade secrets, owned by Company and this document and its contents are protected under state and federal law including, but not limited to, copyright, patent, and/or trade secret law. Access to and use of this information is strictly limited and controlled by Company. The receipt or possession of this document does not convey any license or right to copy, reproduce, distribute, disclose, or use its contents in any manner whatsoever without the express written authorization of Company. Copying, reproducing, distributing, disclosing, or otherwise using such information without the express written authorization of Company is strictly prohibited.
2.3 Copyright Rights

Copyright rights protect original expressions of ideas, not the underlying ideas (which, as discussed previously, are subject to patent and trade secret rights, as applicable). Copyrightable works include: (1) label designs; (2) pictorial, graphic, and sculptural works; (3) literary works; (4) motion pictures and audiovisual works; (5) sound recordings; and (6) derivative works.14 Copyright rights are granted to the author of the work or, if made by an employee within the scope of employment or under a contract designating the work as a "work made for hire," to the employer.

2.3.1 Copyright Protection – Digital Twin Examples (From Oil and Gas Projects and Operations)

• Source code
• 3D engineering models
• LiDAR scans

Copyright protection is automatic and established immediately from the time the work is created in fixed form – that is, upon "fixation" of an original work in a tangible medium. Copyright protection is not contingent upon registration, but registration is a prerequisite to suing for copyright infringement and to recovery of statutory damages, attorney fees, and equitable relief. A copyright endures for the life of the author plus 70 years or, in the case of works made for hire, the shorter of 95 years from the date of publication or 120 years from creation. Copyright protection confers upon the copyright owner the exclusive rights to reproduce the work, prepare derivative works, distribute copies of the work, display the work, and perform the work.15 Copyright does not preclude others from independently creating the same work or deriving it by reverse engineering.

As with patent rights, currently in the U.S., machines (and other non-humans) cannot be an "author" of a copyrighted work. This precedent came from a series of cases concerning the copyright status of "selfie" photos taken by a Celebes crested macaque (Fig. 6). The People for the Ethical Treatment of Animals (PETA) organization filed a lawsuit on behalf of the 6-year-old monkey, Naruto, asserting ownership of the photos. A federal judge dismissed the suit, ruling that animals cannot assert copyright protection, and an appellate court affirmed the decision.
14. See 17 U.S.C. § 102.
15. See 17 U.S.C. § 106.
Fig. 6 Self-portrait of a female Macaca Nigra (Celebes crested macaque) in North Sulawesi, Indonesia, who had picked up photographer David Slater’s camera and photographed herself with it
2.4 Trademark Rights

A trademark identifies the source or origin of a party's goods or services. Trademarks are property rights in marks (e.g., words, names, logos, designs, graphics, interfaces, brands, taglines, etc.) that are used to distinguish a party's products and/or services from those of others. Trademark rights confer upon their owners the right to prevent others from using confusingly similar marks. Under U.S. law, rights in a mark are created by using the mark in interstate commerce. Registration is not required, but registration gives the owner: (1) the right to sue in federal court for infringement, (2) treble damages and attorney fees, (3) a presumption that the mark is valid, (4) rights in a greater geographical area, and (5) a basis for ex-U.S. protection. A trademark's strength is proportional to the distinctiveness of the mark – i.e., most distinctive are fanciful and arbitrary marks; less distinctive are suggestive marks; even less distinctive are descriptive terms, which must have acquired secondary meaning to serve as a mark; and least distinctive are generic terms, which cannot serve as a trademark (Figs. 7 and 8). The rights in the trademark will endure so long as the mark is used and does not become generic (Figs. 9 and 10).
Fig. 7 Fanciful marks and arbitrary marks
Fig. 8 Suggestive marks and descriptive marks
Fig. 9 Generic words
Fig. 10 Trademarks that are now generic terms in the U.S.
With all these different forms of IP protection in mind, the next step is to develop a strategy for IP protection for the Digital Twin technology. A key first step in this process is to conduct a detailed review of all aspects of the Digital Twin technology. This audit should consider all the research conducted as part of the development effort and involve reviewing electronic and, if created, paper records of the employees, independent contractors, and other individuals involved in conceiving, developing, prototyping, modeling, and coding the various features, functions, operations, algorithms, interfaces, and all other aspects of the structure, function, and operation of the Digital Twin technology under development. From this audit process, the developments – i.e., ideas, inventions, processes, software, products, devices, equipment, designs, graphics, interfaces, compositions, names, logos, brands, taglines, etc. – can be identified and a schedule prepared cataloguing, with appropriate descriptions, the identified developments. In addition to descriptions, documentation supporting the development work should be included (or at least linked, if in electronic form) to provide evidence of the development work (e.g., metadata, names, creation dates, locations, specifications, prototypes, flow charts, pseudo-code, source code, screen shots, presentations, etc.). To the extent that electronic communication platforms (e.g., Slack, etc.), source code repositories (e.g., Github, etc.), or other systems are used, extracting development records, or linking to them in the schedule, can be very useful for documenting the development work and for later preparation of patent applications and other filings, if pursued. Another helpful aspect to include in the schedule is an indication of whether, for each identified development, the work involved a third party and/or was in connection with any work funded by the government or pursuant to a government contract.

With the development work properly identified and documented, company management can undertake a detailed review, typically in consultation with in-house or outside IP counsel, to consider the different forms of IP protection that may be available – i.e., patent, trade secret, copyright, and trademark – and make a strategic decision, based on legal and business considerations, as to what IP protection to pursue. The legal considerations will typically turn, at least in part, on the subject matter of the development and whether it is the type of technology protectable by a patent or trade secret – like ideas, inventions, processes, software, products, devices, equipment, designs, graphics, interfaces, and compositions – or by a copyright or trademark – like designs, graphics, interfaces, names, logos, brands, and taglines. These are not separate inquiries; with software, for example, protection may be available by patent for the process performed by the software, by trade secret or copyright for the source code, and by trademark for the graphics. Another legal consideration is how easy reverse engineering of the identified technology is expected to be, the likely timeline for independent development, and whether, ultimately, disclosure of the technology (as a quid pro quo for securing patent protection) or maintaining the technology as a secret is the better course.

From a business standpoint, cost is always an important consideration and, depending on the number of developments under consideration and whether global protection is required or at least desirable, the legal fees and costs can be substantial.
Evaluating the status of the company (e.g., early stage or mature, seeking funding, potential merger or acquisition target, etc.), the competitive landscape (e.g., crowded field, mature patent landscape, first mover/innovator, etc.), and the licensing strategy (e.g., does the company intend to license the IP or use it only internally, etc.) are all important business considerations that will, in part, drive the decisions for IP protection (Fig. 11). All of this is an ongoing process, and the company should have in place a periodic review procedure (e.g., weekly, monthly, quarterly, as appropriate) to ensure that the company's innovations are given due consideration and a strategic decision is made regarding IP protection.

Fig. 11 Implementing an IP Strategy: Step 2 – Develop strategy for IP protection
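Returning to the development-record extraction mentioned above, a purely illustrative sketch (the repository path, output format, and fields are assumptions, not a prescribed practice) shows how commit metadata can be exported from a Git repository so that each catalogued development links to contemporaneous evidence of who created it, when, and where:

import csv
import subprocess

def export_commit_records(repo_path: str, out_csv: str) -> None:
    """Dump commit hash, author, date, and subject line from a Git repository
    into a CSV that can be attached to, or linked from, the development schedule."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%H|%an|%ad|%s", "--date=iso"],
        capture_output=True, text=True, check=True,
    ).stdout
    with open(out_csv, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["commit", "author", "date", "subject"])
        for line in log.splitlines():
            writer.writerow(line.split("|", 3))

# Hypothetical usage:
# export_commit_records("/path/to/digital-twin-repo", "development_records.csv")

The same idea applies to exports from issue trackers or communication platforms; the audit goal is simply that each catalogued development points to contemporaneous records of its creation.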
3 Conducting IP Due Diligence Search and Review in Connection with Freedom to Operate and IP Clearance Opinions

Along with the strategic decisions about IP protection, the company must also consider the risks, from an IP standpoint, of utilizing the Digital Twin technology contemplated. At an appropriate point in the Digital Twin technology development cycle – typically, when aspects of the technology to be used have been tested and there is a plan to utilize that technology in the anticipated or commercial implementation – the current IP landscape should be searched to determine whether there are any enforceable IP rights that would potentially be infringed by using the contemplated Digital Twin technology. These types of searches are typically referred to as "freedom-to-operate" (FTO) or clearance searches and, along with the search
results, a legal opinion should be given so that the company can assess risk and make an informed decision regarding how to proceed. One other option to get a sense of the patent landscape earlier in the development cycle is to conduct a patent landscape search that will identify, for a specific technology (e.g., IoT, AR/VR/XR, blockchain, etc.) in a specific geographic region or country, the concentration of issued patents and published pending applications and the details for the patents and applications identified (e.g., patent/application number, title, date, inventor(s), assignee(s), etc.). This can be used to identify potential blocking patents and open areas, to assess whether licensing in may be necessary, and to inform whether a change in direction for the contemplated technology is warranted.

As part of FTO/clearance searching, due diligence can be conducted on commercially available technologies, products, software, hardware, devices, etc. that may be used as part of the contemplated Digital Twin technology, as well as on companies, to identify IP rights, licenses, and other legal protections that need to be considered. There are several available platforms and databases that can be used for these types of searches (e.g., Derwent Innovation, Espacenet, Relativity, InnovationQ Plus, Google Patents, USPTO/EPO/JPO and other patent office databases, etc.). The scope, in terms of subject matter and geography, and the search logic must be very carefully optimized to ensure that the search results are focused and useful. With the results of these searches, a detailed review should be conducted, typically by in-house or outside IP counsel, to determine whether any patents identified could potentially be infringed by practice of the contemplated Digital Twin technology and to identify any relevant published applications still in process that should be placed on watch to track prosecution. The opinion provided regarding FTO/clearance should be considered by management of the company in assessing the potential risk of going forward with the contemplated Digital Twin technology and determining the basis for proceeding. Clearance searches should also be conducted, and opinions rendered, for any trademarks that the company intends to use in connection with the contemplated Digital Twin technology.

With these opinions in mind, and the previous strategic decisions made regarding IP protection, preparation of patent applications, applications for trademark registrations, and applications for copyright registrations should be initiated. To the extent that protection is sought across multiple countries, counsel in multiple countries may need to be engaged and involved in the process to ensure that region- and country-specific requirements and practices are satisfied, and to assist with the filings. To the greatest extent possible, all filings should be made prior to any disclosure of the company's Digital Twin technology to any third party. If this is not possible, the company should ensure that, prior to any disclosure, an appropriate confidentiality and non-disclosure agreement is in place. After the filings have been made, the IP schedule should be updated to reflect the patent, trademark, and copyright filings, and to identify the developments that the company has decided to maintain as trade secrets.
Again, this is an ongoing process, and a periodic review procedure (e.g., weekly, monthly, quarterly, as appropriate) should be followed to ensure that the IP schedule is maintained and kept up to date (Fig. 12).
Fig. 12 Implementing an IP Strategy: Step 3 – Secure IP rights/FTO clearances/opinions
4 Assessing and Negotiating Necessary Contract Rights and Establishing a Licensing Regime for the Digital Twin Technology

It is expected that, with any large-scale complex system like the Digital Twin use cases described in the preceding chapters, there will be many different technologies and entities involved in the development process. As such, it is necessary to assess and negotiate the required contract rights to ensure that the company has the necessary rights to use the technologies as needed as part of the Digital Twin. Further, and as discussed earlier as part of the development process, it is important to ensure that confidentiality and non-disclosure agreements are in place with all third parties and that all employees and independent contractors have executed employment and IP agreements assigning all IP rights to the company.

Also, as part of the development process, it is likely that some aspects of technology development, testing, prototyping, evaluation, or other activities will be carried out as joint efforts in collaboration with third parties or utilizing third parties as vendors or service providers to carry out certain development-related activities. It is critical to establish the appropriate contract terms with these third parties to address, for example, ownership and assignment of any jointly developed IP, licensing and sub-licensing rights, privacy and data protection issues, limiting access as appropriate, timing and schedules, costs, and other necessary terms and conditions of the relationship.

Taking into consideration that the Digital Twin ecosystem will involve many different technologies, applied across many different subject matter areas, in different geographic locations, and by different business entities, enterprises, and individuals, the issue of who may ultimately be responsible should a problem, accident, or other issue arise must, at a minimum, be thought through and, in a best case scenario,
decided. While the applicable laws and regulations may determine who has liability, the better approach, at least when the parties involved have some kind of business or other relationship, is to negotiate and agree on who has responsibility and liability. A party's obligations regarding responsibilities for assessment and enforcement, and exposure for liabilities, should a problem, accident, or other issue arise, can be addressed by contract through provisions that specify which party will have responsibility and who will be liable for, for example, patent, copyright, or trademark infringement, trade secret misappropriation, product liability, and other claims. Contracts between joint developers, licensor/licensee, manufacturer/distributor/end-user, and general contracts between those involved in the Digital Twin ecosystem can include representations and warranties, indemnification provisions, limitations of liability, and enforcement obligations to ensure that there is certainty around who is and is not responsible and liable under the various situations that may arise. These types of contract provisions can provide some needed certainty for companies developing, implementing, deploying, and/or using large-scale complex systems like a Digital Twin.

The company must also consider, in connection with the Digital Twin technology and the IP secured to protect it, how the technology and IP will, if at all, be licensed. This requires an understanding of how the Digital Twin technology developed will be made and used, and whether and how it will be offered (e.g., distributed/sold, available through a subscription, software-as-a-service (SaaS), hosted, etc.). In addition to the legal aspects, business aspects must also be considered, such as licensing fees, the scope of any license or sublicense granted, the term, field of use and territory, whether the license should, or even can, be exclusive, how indemnification will be structured, and the limits of liability. There are, of course, many other legal and business factors that need to be weighed. If patents will be licensed, it is important to include a patent marking requirement as part of the license agreement to meet the requirements of "constructive notice" and maximize the potential recovery of damages if enforcement of the patent is necessary to address infringement. To the extent that the subject Digital Twin technology uses open source or third party software, the applicable open source and third party licenses need to be reviewed, and the requirements for distribution, which may require attribution and making the company's code available (e.g., under copyleft licenses like the General Public License (GPL)), must be carefully considered. In addition, if aspects of the Digital Twin technology need to be compliant with standards (e.g., 5G, LTE, 3GPP2, IEEE P2413, ISO/IEC/IEEE 42010, etc.), appropriate representations and warranties need to be included to ensure compliance (Fig. 13).

To round out the company's IP strategy, a plan for enforcement and defense should also be discussed.
As part of this plan, the company, either itself or using outside assistance (e.g., IP counsel, a private investigator, an IP search firm, etc.), should undertake periodic investigations to determine whether the company's patents, trademarks, and copyrights are being infringed, whether any of its competitors are competing unfairly or engaging in false advertising, whether its trade secrets have been misappropriated, or whether its technology is being used in violation of agreements that it has in place. If any of these violations or unauthorized uses are identified, the company should explore its
Fig. 13 Implementing an IP Strategy: Step 4 – Negotiate/execute agreements/licenses
options for potential enforcement, licensing, or another business arrangement to seek to resolve the dispute. If a resolution out of court is not feasible, the company should consider initiating litigation or a contested proceeding, which could potentially be brought in a court (e.g., a District Court or equivalent ex-U.S. tribunal), before a government agency (e.g., the International Trade Commission or the Patent and Trademark Office), or before another tribunal. Another possibility is to seek to resolve the dispute by engaging in alternative dispute resolution (ADR), which can be conducted before an organization (e.g., the American Arbitration Association (AAA), JAMS, the International Chamber of Commerce (ICC), etc.) or by using a private mediator or neutral.

To the extent that the company is accused of violating third party IP rights, a procedure should be established for the company's defense. This should involve a detailed assessment of the alleged infringement or other type of violation and consideration of the validity and enforceability of the IP and the claim. If the claim cannot be satisfactorily resolved through negotiation, the company should consider a proactive approach to challenge the infringement allegation and/or attack the validity and enforceability of the IP through available procedures in the Patent and Trademark Office (e.g., Inter Partes Review (IPR), opposition, cancellation, etc.) or in a court (e.g., Declaratory Judgment, invalidation proceeding, etc.) (Fig. 14).
Fig. 14 Implementing an IP Strategy: Step 5 – Develop strategy for enforcement/defense
5 Identifying and Assessing Compliance with Applicable US and International Government Regulations

Digital Twin (DT) technology is not immune from the centuries-old conflict between innovation and regulation. To date, DT technology has not been the subject of targeted laws or regulatory scrutiny; in the United States, for example, we currently do not find laws and regulations specific to Digital Twin technology per se. Many organizations build and apply DT technology in countless industries in effective, safe, and legally compliant ways, as described throughout the chapters of this book. At the same time, DT technology poses unique risks, particularly in view of the data ecosystems that power DT technology, the decentralization and connectivity of DT platforms, and the myriad applications of advanced DT technology – especially in highly regulated industries. The following section focuses on a key risk area for DT technology: compliance with global privacy and data protection regulations, including heightened security requirements for connected devices.
5.1 Privacy and Data Protection

Personal Digital Twins leverage actual, current, and continuous human data and life history in highly transformative ways. For example, personal Digital Twins are revolutionizing healthcare with digital tracking and advanced modeling of the human body to improve patient outcomes and medical processes. Personal Digital Twin
assistants are anticipating and acting upon a person's needs around the clock. Personal Digital Twins are building bridges between how a person looks and acts in the physical world and across digital worlds in the metaverse. Indeed, endless potential applications exist for personal Digital Twin technology that allow persons to experiment with different life choices and explore possible paths to inform everyday decisions. However, organizations must balance the benefits of personal DT technologies with the privacy rights of individuals, such as the right to be left alone and the right for personal information to be protected from public scrutiny.

In view of the explosive generation and utilization of digital data, an increasing number of jurisdictions around the world have imposed privacy and data protection regulations, and these regulations affect personal DT technology. Accordingly, consideration of the legal concepts of "information privacy" and "data protection" is important, especially where DT technology leverages data relating to an identifiable person. Information privacy concerns rules regarding an organization's collection, use, disclosure, retention, and disposal of personally identifiable information, as well as any rights an individual has with respect to those activities. Data protection concerns rules regarding the handling, storage, and management of personal information. Intuitively, one cannot have privacy without security, and a key component of privacy and data protection regulations is the requirement that an organization keep personal information and other sensitive data secure from unauthorized disclosure or use. We provide below an overview of the regulatory framework and of best practices for regulatory compliance.

5.1.1 Regulatory Framework

Numerous definitions of personal information (PI) or personally identifiable information (PII) exist and generally encompass a set of information that can distinguish or trace an individual's identity. Personal information may include information such as a name and biometric records, alone or when combined with other personal or identifying information that is linked or linkable to a specific individual. Regulators may apply heightened privacy and security requirements to certain categories of "sensitive" personal information, such as financial information, medical records, racial or ethnic origin, political opinions, religious or philosophical beliefs, genetic data, and biometric data.

In addition, privacy and data protection regulations fall into two principal categories: general regulations and industry-specific regulations. General regulations apply broadly to processing of any PI or PII, whereas industry-specific regulations target certain covered entities (e.g., banks, healthcare providers) and/or certain categories of PI/PII (e.g., financial records, health records). General regulations, such as Europe's General Data Protection Regulation (GDPR), define responsibilities for "controllers" and "processors" of personal
information, such as implementing "privacy by default" and "privacy by design," maintaining appropriate data security protections, obtaining appropriate consent for most personal data collection, and providing notification of personal data processing activities, among other responsibilities. Additionally, data subjects (consumers) are afforded certain rights such as erasure, removal (the right to be forgotten), access, data portability, and rectification of personal data. Notably, the GDPR also provides data subjects the right to object to automated decision-making processes, including profiling, that affect substantial rights. Many other countries have adopted similar GDPR-like general regulations, such as the Japan Act on the Protection of Personal Information, the Australia Privacy Act, the Brazil General Data Privacy Law, the South Korea Personal Information Protection Act, the China Personal Information Protection Law, and the Canada Personal Information Protection and Electronic Documents Act, among others.16

In the United States, a patchwork of federal and state laws and regulations governs information privacy and data protection. While there currently are no general federal privacy and data protection regulations, several states have adopted such regulations, including California,17 Colorado,18 and Virginia.19 In addition, federal laws (as well as state laws) target the collection of and access to certain types of personal information by entities in specific industries. Some of the most well-known include: the Children's Online Privacy Protection Act (COPPA), the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act, the Family Educational Rights and Privacy Act, the Fair Credit Reporting Act (FCRA), the Controlling the Assault of Non-Solicited Pornography and Marketing Act (CAN-SPAM Act), the Telephone Consumer Protection Act (TCPA), and the Electronic Communications Privacy Act (ECPA or Wiretap Act).

There are numerous avenues in the Digital Twin context for an entity to engage in controller or processor activities involving PI/PII and therefore be subject to privacy and data protection regulations. For example, application and gaming providers that collect PI/PII directly from consumers and determine the manner in which the collected PI/PII is used, such as building personal digital avatars in the metaverse, have obligations as controllers of that collected PI/PII. Similarly, an entity that acts on behalf of such a controller to process the collected PI/PII, such as by providing data storage or other services, has obligations as a processor of the collected PI/PII. In some instances, an entity may be engaging in both data controller and data processor activities. In distributed Digital Twin systems where multiple entities are processing PI/PII, a data processing agreement between a data controller and a data processor is key in regulating the processing of PI/PII and the obligations of the parties.

16. For a more complete listing, visit the International Association of Privacy Professionals' Global Comprehensive Privacy Law Mapping Chart, available at https://iapp.org/media/pdf/resource_center/global_comprehensive_privacy_law_mapping.pdf
17. The California Consumer Privacy Act and California Consumer Privacy Act Regulations.
18. Colorado Privacy Act.
19. Consumer Data Protection Act.
5.1.2 Security of Connected Technologies

Organizations often use DT technology as a platform for Internet of Things (IoT) applications, with uses in smart buildings, cities, transportation, logistics, agriculture, telecommunication infrastructure, and complex cyber-physical systems, among many others. As IoT systems powered by connected devices continue to grow, so do the DT platforms used to replicate, simulate, and manage these complex systems. As highlighted in recent years, IoT systems are subject to unique, and sometimes compounded, cybersecurity risks, such as vulnerabilities within IoT devices, lack of physical security over remote devices, weak passwords, insecure data communication and transfer, mismanagement or lack of visibility of remote devices, and insecure application program interfaces, among others. While many organizations use DT technology effectively to help ameliorate some of these cybersecurity risks, many of the same cybersecurity risks present in IoT systems will also proliferate in the DT platforms used for such systems.

In response to an escalating number of data breaches concerning systems containing personally identifiable information, all fifty states in the United States have enacted data breach notification laws. Additionally, most of the privacy and data protection laws listed above contain data breach notification provisions. Companies may face severe penalties for failing to implement and follow reasonable practices to protect and secure digital devices, software, and systems from data breaches, as well as for failing to properly report covered data breaches. In addition to fines and potential regulatory investigation, companies facing a data breach may be subject to litigation initiated on behalf of individuals – or classes of individuals – that are personally harmed by the breach. For example, members of a class action litigation in the United States against the credit reporting company Equifax were able to receive free credit monitoring or a cash payment of up to $125 from the company.20 The litigation stemmed from a 2017 data breach that impacted the personal information of approximately 147 million people.

While there is little "Digital Twin" focus to date, lawmakers and regulators are heavily scrutinizing the broader Internet of Things (IoT) industry. For example, the United States Federal Trade Commission (FTC) has been focusing on data privacy and data protection in the IoT industry and is becoming increasingly aggressive in launching investigations and initiating enforcement proceedings against IoT technology companies. A long line of FTC cases relates to IoT technology, and most recently, the FTC settled an investigation involving Tapplock. In Tapplock, the FTC alleged that an internet-connected smart lock provider deceived customers by falsely claiming that it designed its locks to be "unbreakable" and that it took reasonable steps to secure the data it collected from users.21 The FTC has issued guidelines and
20. https://www.equifaxbreachsettlement.com/
21. In the Matter of Tapplock, Inc., available at https://www.ftc.gov/enforcement/cases-proceedings/192-3011/tapplock-inc-matter
recommendations pertaining to IoT platforms, and the FTC's fifth and sixth "PrivacyCon" conferences included discussion of the privacy and security risks of IoT devices.22

Additionally, several laws took effect in the United States over the past few years concerning the security of connected devices. California Senate Bill 327, "Security of Connected Devices," specifies the security obligations of manufacturers of connected devices, including equipping devices with reasonable security features. Such security features must be (1) appropriate to the nature and function of the device, (2) appropriate to the information it may collect, contain, or transmit, and (3) designed to protect the device and any information contained thereon from unauthorized access, destruction, use, modification, or disclosure. Oregon House Bill 2395, "Security Measures Required for Devices that Connect to the Internet," is similar to California's law. At the federal level, the IoT Cybersecurity Improvement Act of 2020 requires government agencies to ensure the security of their IoT devices and requires NIST to develop and publish standards and guidelines for the federal government.23 The Act follows the promulgation of guidelines and recommendations for best practices from several U.S. government agencies (e.g., NIST, FTC, DHS, GAO, NTIA, and NHTSA) and industry groups (e.g., CTIA, GSMA, and ISO).

Security of distributed Digital Twin systems poses unique challenges from a legal perspective. For example, it can be difficult to ascertain which laws apply to a distributed Digital Twin system in cyberspace that crosses jurisdictional boundaries and processes data from consumers and systems across those boundaries. Similarly, laws can be difficult to enforce across jurisdictional boundaries and in distributed Digital Twin systems where it is difficult to identify a malicious actor or device. Contractual provisions may ameliorate some of these difficulties, such as provisions specifying choice of law, forums for dispute resolution, and permitted uses of a distributed Digital Twin system.

5.1.3 Compliance Strategy and Best Practices

Many countries and jurisdictions are trending towards increased privacy and data protection regulations, and non-compliance comes with tremendous financial and reputational risks. For example, non-compliance with the GDPR may result in a fine of up to 20 million euros or 4% of annual global turnover – whichever is higher. A data breach may result in class action litigation, as discussed above.
22. See https://www.ftc.gov/news-events/events-calendar/privacycon-2020 and https://www.ftc.gov/news-events/blogs/business-blog/2021/07/get-ready-privacycon-july-27th
23. NIST Cybersecurity for IoT program, available at https://www.nist.gov/itl/applied-cybersecurity/nist-cybersecurity-iot-program
their privacy and data protection practices – and often face the ire of the public for mishandling consumer data. No one-size-fits-all approach exists for organizations looking to establish a compliance strategy, but there are helpful tools to get started. For example, the U.S. National Institute of Standards and Technology (NIST) has set forth a cybersecurity framework24 and a privacy framework25 that are broadly applicable to most organizations. The NIST cybersecurity framework is organized into five core functions: (1) Identify, (2) Protect, (3) Detect, (4) Respond, and (5) Recover. NIST organized the privacy framework into three parts: (1) Core, (2) Profiles, and (3) Implementation Tiers. In our experience, several key considerations demonstrate that an organization has established and is practicing “reasonable” privacy and data protection measures. Entities should develop and implement a compliance strategy and framework for risk management to establish best practices for legal and regulatory compliance (Fig. 15). A typical compliance strategy and framework encompasses, inter alia: • Establishing a governance structure to define, document, communicate, and assign accountability for privacy policies and procedures. • Understanding personal data inventory, retention, and transfer. A key foundational step in establishing a privacy compliance strategy and developing a program is understanding what data (PI) is being managed. • Developing privacy notices applicable to each type of data subject and internal privacy policies for the organization. • Managing requests from individuals to provide the type of PI collected, sold, or disclosed, to provide a copy of the PI, and to maintain and honor consent preferences. • Understanding where PI is being shared with vendors, service providers, and other third parties, and establishing oversight. • Ensuring that everyone who handles PI, including decision-makers, receives training in the organization’s privacy programs and policies. In addition, data protection requires an organization to implement reasonable administrative, technical, and physical security safeguards to protect covered PI/PII. An organization should define an organized approach to managing the occurrence and aftermath of a data privacy incident, security breach, or cyberattack – preferably well in advance of any potential incident (Fig. 16). Increasing regulation – and penalties for non-compliance – are likely to favor certain implementation aspects of DT technology. Accordingly, we encourage organizations to engage in early and continuing discussions surrounding implementation of privacy and data protection measures in any new personal DT technology. For example, DT designs that use non-personalized information, or anonymized or
24 NIST Cybersecurity Framework, available at https://www.nist.gov/cyberframework
25 NIST Privacy Framework, available at https://www.nist.gov/privacy-framework
Fig. 15 Compliance strategy & framework for risk management (elements: establish governance structure; personal data inventory, retention, & transfers; data privacy policies & notices; individual rights/consumer preference management; vendor/third party management; training & awareness program; data privacy breach management program; continuous compliance monitoring)
pseudonymized information, may be favored over DT implementations that use personalized information. Organizations may design DT technology in accordance with privacy-by-design and security principles at the outset. In fact, some organizations view privacy and data security controls as a competitive advantage, and such organizations distinguish their technology goods and services in the market based on their adherence to privacy and security by design. To simplify compliance, providers may restrict access to and dissemination of data across different jurisdictions and entities. Additionally, DT technology providers should proceed with caution and assess any impact on individuals’ privacy rights when aggregating PI/PII across different applications and use cases. As a final note, compliance with an increasingly complex privacy and data protection regulatory and legal framework is a distinct issue from the broader digital ethical questions raised by building, using, and selling new personal DT applications. At minimum, however, compliance will help ensure that an organization’s practices align with consumers’ expectations for the security and confidentiality of their PI/PII.
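To make the distinction between personalized and pseudonymized data more concrete, the minimal Python sketch below illustrates one common pseudonymization pattern: replacing direct identifiers with a keyed hash before records enter a DT data pipeline. The field names, key handling, and hashing scheme are illustrative assumptions only, not a recommended or legally compliant implementation.

```python
# Minimal sketch (assumptions only): pseudonymizing direct identifiers before
# they are ingested by a Digital Twin data pipeline. Field names and the
# keyed-hash scheme are illustrative, not a compliance recommendation.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-key-kept-outside-the-DT-platform"  # hypothetical key management

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym)."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_record(record: dict) -> dict:
    """Pseudonymize personal fields, keep operational telemetry unchanged."""
    identifying_fields = {"name", "email", "device_owner"}  # assumed field names
    return {key: pseudonymize(str(value)) if key in identifying_fields else value
            for key, value in record.items()}

# Example: a telemetry record from a hypothetical "personal" Digital Twin source
raw = {"name": "Alice Example", "email": "alice@example.com",
       "heart_rate": 72, "timestamp": "2023-01-01T12:00:00Z"}
print(prepare_record(raw))
```

Whether keyed hashing of this kind counts as pseudonymization or anonymization under a given law depends, among other things, on who holds the key and whether re-identification remains practicable.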
Fig. 16 Data privacy breach management plan (elements: prepare IT & staff to handle potential incidents; process to identify whether an incident is a “data breach”; process for containment to limit damage and isolate affected systems; process for finding the cause, removing affected systems, and eradicating the threat; process for ensuring no further threat and recovering affected systems)
6 Assessing the Digital Twin Technology for Potential Bias, Trustworthiness and Transparency, and Developing a Mitigation Strategy Many organizations use DT applications for descriptive or informative purposes, for instance, to describe the current state of a system or asset and to present diagnostic information, with a more specific example being a current health or condition indicator. More recently, DT technology that leverages large datasets and artificial intelligence (AI) and machine learning (ML) capabilities has proved capable of predicting a system’s future state or performance and providing prescriptive or recommended actions based on the prediction. Yet, more advanced DT technology may soon have the capability to act autonomously without human input, e.g., close the control loop such that the DT technology makes decisions and executes actions based on those decisions. For example, Digital Twin technology integrated with blockchain networks can automatically execute smart contracts at certain project milestones or on other conditions. Indeed, smart automation, interconnectivity,
decentralization, and data exchange in manufacturing technologies and processes are at the forefront of what has been coined the “Fourth Industrial Revolution.”26 Advanced DT technology coupled with cutting-edge developments in IoT, AI/ML, and other digital technologies will fundamentally alter and transform nearly every aspect of society. World leaders and regulators have taken notice and are performing the delicate task of targeting the most harmful and dangerous uses of automated technology without stifling innovation. Lawyers and prosecutors are also applying existing laws in ways that test the outer bounds of legal precedent to address novel technological harms. The following sections address regulatory oversight and legal liability associated with the automated decision-making processes of advanced predictive, prescriptive, and autonomous DT technology, as well as best practices.
6.1 Regulatory Framework As with DT technology, the United States does not have legislation broadly directed to AI-enabled automation technology. However, several states and federal agencies have formed task forces to examine AI technologies and recommend how to use and regulate such technologies. Additionally, regulators have passed laws and regulations to address certain automation technology. Regulation of automated technology should come as no surprise in industries that are already highly regulated, such as the automobile industry. Autonomous or self-driving vehicles are subject to heightened regulation in the United States. Nevada was the first state to adopt legislation concerning the testing of autonomous vehicles in 2011, and the United States Department of Transportation developed the Automated Vehicles Comprehensive Plan to prioritize safety while preparing for the future of transportation.27 When accidents do arise, it can be difficult to ascertain who is at fault: the passenger, the automaker, the software developer, or someone else. Additionally, technology ethicists clash as to whether the infamous Trolley Problem – the ethical dilemma of choosing to sacrifice one person to save a larger number from an accident – is applicable to how autonomous vehicles operate, and if so, how decisions should be made. Regulators have also implemented more narrow regulations of autonomous technology. For example, facial recognition technology that law enforcement uses to automatically identify potential targets has come under intense scrutiny in the United States due to concerns of privacy erosion, reinforcement of bias against Black people, and misuse. As a result, a handful of jurisdictions have banned or
26 Klaus Schwab, “The Fourth Industrial Revolution: what it means, how to respond,” World Economic Forum (Jan. 14, 2016), available at https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/
27 Automated Vehicles Comprehensive Plan, available at https://www.transportation.gov/av/avcp
restricted law enforcement from using facial recognition software.28 In addition, companies and researchers are pushing back on what they view as unethical uses of facial recognition technology.29 Remote tracking and surveillance more generally have come under scrutiny. For example, in Carpenter v. U.S., 138 S. Ct. 2206 (2018), the United States Supreme Court upheld protection of privacy interests and expectations of privacy in time-stamped cell-site location information. A handful of jurisdictions have also passed legislation related to drone privacy, which prohibits the use of drones to commit video voyeurism in violation of a party’s reasonable expectation of privacy. AI/ML is also increasingly used to automate aspects of the hiring process, including recruiting, screening, and predicting the success of potential applicants. Unfortunately, studies show that several automated hiring applications promote biased hiring due to reliance on faulty data or unconsciously prejudiced selection patterns such as demography.30 Illinois passed the Artificial Intelligence Video Interview Act (820 ILCS 42), which requires employers to disclose the use of artificial intelligence analysis of applicant-submitted videos and to obtain consent from the applicant to be evaluated by the disclosed artificial intelligence program. Chat bots that autonomously engage with consumers have also come under scrutiny. For example, California passed the Bolstering Online Transparency (BOT) Act, which regulates online chat bots. The BOT Act prohibits certain public-facing sites or applications from using a bot to communicate or interact online with a person in California to incentivize a sale or transaction of goods or services or to influence a vote in an election without first disclosing that the communication or interaction is via a bot. The statute defines a “bot” as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” While comparable legislation governing automated decision-making does not (yet) exist at the federal level in the United States, it is important to note that Europe’s GDPR already expressly provides that a data subject “shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her” (Art. 22).
6.2 Developing a Mitigation Strategy As with other autonomous technology, organizations providing advanced DT technology will have to carefully consider what decisions, if any, are being made about the underlying information, and by what or whom. For example, will a human
28 For example, Virginia and Vermont have passed outright bans applicable to law enforcement use.
29 See, e.g., Richard Van Noorden, “The Ethical questions that haunt facial-recognition research,” Nature (Nov. 18, 2020), available at https://www.nature.com/articles/d41586-020-03187-3
30 See, e.g., Dawn Zapata, “New Study Finds AI-enabled Anti-Black bias in Recruiting,” Thomson Reuters (Jun. 18, 2021), available at https://www.thomsonreuters.com/en-us/posts/legal/ai-enabled-anti-black-bias/
approve a proposed recommendation from a DT system, or will the DT system have control processing that can automatically implement the proposed recommendation? There are understandably several safety considerations at stake, and when something does not go as planned, ascribing fault for the resulting harm will test the boundaries of our existing legal system. For example, new fact patterns concerning autonomous systems will raise questions such as how harm and liability will be defined, what the standard of care is for an automated system, who or what is responsible when an accident occurs, whether a record needs to be maintained of the automated decision, how the technology will be examined in a court of law, and what the ethical, legal, and social implications are. Accordingly, organizations making, using, and selling automated DT technology must consider the foreseeable potential harms of the technology to ensure that the technology is developed and used in a safe, effective, and legally compliant manner. This consideration is especially true for technologies that affect substantial rights of individuals, such as the right to privacy and equal treatment under the law. To increase certainty, entities entering into an agreement should specify contractual liabilities and responsibilities when these foreseeable potential harms arise in the Digital Twin context. For example, an indemnity provision can specify under what conditions one party agrees to take responsibility and pay for any losses or damages caused by another party. A limitation of liability provision may excuse a party from liability under certain circumstances or place a financial cap on such a liability. One foreseeable harm is bias in autonomous technology, which at this point is a well-documented problem. For example, developers may introduce bias into systems through prejudiced assumptions made in algorithm development or in the training data. Indeed, several of the regulations discussed above are the result of unjustifiable racial bias present in automated systems, such as facial recognition and hiring tools. Racial bias has also arisen in healthcare risk algorithms, sentencing tools, and advertising targeting tools, among many others. Thankfully, numerous researchers, industry groups, organizations, and governments are tackling the problem of how to manage bias in automated systems. NIST has also outlined a Proposal for Identifying and Managing Bias in Artificial Intelligence.31 Organizations should stay abreast of best practices and developments to identify and manage harmful bias in automated decisions more effectively. In our experience, trust built through transparency is of utmost importance in limiting the legal risks associated with autonomous technology. Many of the regulations discussed above address transparency and require express, knowing, and voluntary consent from consumers before they engage with autonomous systems. As discussed in the preceding section, organizations can divulge certain details of their DT technology and systems to the discerning public without having to forfeit important intellectual property rights, such as patent rights.
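As one concrete illustration of the kind of screening such best practices contemplate, the short Python sketch below computes per-group selection rates and a disparate-impact ratio for the outputs of an automated decision system. The group labels, data, and the commonly cited four-fifths threshold are purely hypothetical illustrations, not legal guidance, and a real bias audit involves far more than a single metric.

```python
# Illustrative sketch only: per-group selection rates and a disparate-impact
# ratio for an automated decision system, grouped by a protected attribute.
# Data, labels, and threshold are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, selected_bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / tot for g, (sel, tot) in counts.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes from an automated screening tool
outcomes = [("group_a", True), ("group_a", True), ("group_a", False),
            ("group_b", True), ("group_b", False), ("group_b", False)]
rates = selection_rates(outcomes)
print(rates, disparate_impact_ratio(rates))  # flag for human review if the ratio is low (e.g., < 0.8)
```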
31 NIST Proposes Approach for Reducing Risk of Bias in Artificial Intelligence (June 22, 2021), available at https://www.nist.gov/news-events/news/2021/06/nist-proposes-approach-reducing-risk-bias-artificial-intelligence
7 Summary and Conclusions Advancements in Digital Twin technology test the bounds of our legal system in many ways. A myriad of laws, rules, and regulations are worthy of consideration for any new and innovative technology, and even more so for one as broad-ranging and comprehensive as the Digital Twin ecosystem. The foregoing sections describe a strategic, stepwise approach for organizations developing, implementing, deploying, and/or using large-scale complex systems like a Digital Twin. This approach includes: (1) assessing the availability and different forms of IP protection for the Digital Twin technology, (2) conducting an IP due diligence search and review in connection with freedom-to-operate and IP clearance opinions, (3) assessing and negotiating necessary contract rights and establishing a licensing regime for the Digital Twin technology, (4) identifying and assessing compliance with applicable US and international government regulations, and (5) assessing the Digital Twin technology and, particularly, the data used and the algorithms and models applied, for potential bias, trustworthiness, and transparency, and developing a mitigation strategy. Currently, organizations may choose to seek a wide variety of IP rights for Digital Twin technology, including patent rights, trade secret rights, copyright rights, and trademark rights. These IP rights – and the legal issues and potential disputes that arise from securing and enforcing those rights – will continue to shape the expectations and decisions of investors and industry participants who seek to use or build upon Digital Twin technology for innovation. Indeed, the United States Patent and Trademark Office has noted that one “hallmark of valuable new technologies is an increase in patent applications,” and such “applications reflect the expectations and decisions of investors and innovators who seek to use or build on the new technologies for innovation.”32 On the flipside, an IP due diligence search and review, often referred to as a “freedom-to-operate” or clearance search, coupled with a legal opinion, will help an organization mitigate the risks of its Digital Twin technology violating the IP rights of third parties. Moreover, organizations should assess, negotiate, and secure any necessary contract rights and establish an IP licensing and enforcement strategy for their Digital Twin technology. Continuing advancements in virtual/augmented/mixed reality, artificial intelligence, machine/deep learning, the internet of things, blockchain, biotechnology, big data and analytics, and quantum computing present new and continuing challenges from legal, regulatory, and ethical perspectives. Organizations that leverage large datasets containing personally identifying information are subject to global regulations that address the privacy rights of individuals. In addition to managing the collection, use, disclosure, retention, and distribution of individuals’ personally identifiable information in “personal” Digital Twin systems, organizations must ensure that those systems adequately protect such information from unauthorized
32 “Inventing AI: Tracing the diffusion of artificial intelligence with U.S. Patents,” Office of the Chief Economist in Data Highlights (Oct. 2020, United States Patent and Trademark Office), p. 4, available at https://www.uspto.gov/sites/default/files/documents/OCE-DH-AI.pdf
disclosure and use. Privacy and data protection risks only further compound when organizations use the Digital Twin as a platform for IoT technology distributed across different jurisdictions and entities. Moreover, regulators will continue to evaluate potential societal abuses of the automated decision-making processes of advanced predictive, prescriptive, and autonomous Digital Twin technology. As a concluding remark, as Digital Twin technology continues to evolve, so too will regulations and laws that surround its use. Accordingly, organizations should implement and revisit the foregoing approach at regular intervals throughout the lifecycle of the Digital Twin technology. Martin M. Zoltick is a technology lawyer with more than 30 years of experience representing inventors, innovators, entrepreneurs, and investors. Marty regularly works with early-stage, emerging, middle market, and mature companies, and with venture firms. For years, Marty has advised tech startups on developing IP strategies and implementing those strategies to secure IP rights, building a portfolio of IP assets, and monetizing those assets through strategic investments, licensing, enforcement, and acquisition. He is a shareholder at Rothwell Figg in Washington, DC, and is recognized as one of the World’s leading patent professionals (IAM Patent 1000), an IP Star (Managing Intellectual Property), and has been selected as a Washington, DC Super Lawyer (2013–2021). Marty has a degree in computer science and, prior to attending law school, he worked for several years as a software developer. His practice is focused primarily on IP, transactions, and privacy law issues, with the majority of matters that he has handled over the past 30 years involving software technologies, including operating systems, networking, telecommunications, client/server, P2P, real-time systems, virtual networking, IOT, Big Data, AI, ML, neural networks, and quantum computing. He has developed a particular expertise with handling the legal aspects of open source software (OSS), including licensing, due diligence, compliance programs, and IP protection. He is a registered patent attorney, and a substantial part of his practice involves drafting and prosecuting patent applications. Marty also has significant experience handling contested cases and disputes on behalf of his clients. He regularly serves as trial counsel in major patent disputes in the U.S. federal district courts and as lead counsel in post-grant proceedings before the U.S. Patent and Trademark Office Patent Trial and Appeal Board. A Certified Information Privacy Professional in the United States (CIPP/US), he also helps clients understand and navigate the rapidly evolving area of privacy and data protection law. Marty is a competitive Masters swimmer and regularly competes in U.S. Masters Swimming Meets, as well as competing in open water swims in the U.S. and abroad.
Jennifer B. Maisel is an emerging thought leader on the intersection of AI and the law. Jen is a partner at Rothwell Figg in Washington, DC, and focuses on IP and privacy law issues involving cutting-edge technology. Her practice encompasses all aspects of IP law, including litigation, patent prosecution, transactions, opinions, and counselling. Jen is also a Certified Information Privacy Professional in the United States (CIPP/US) and counsels clients on privacy and data security matters. She serves a diverse range of clients – from solo inventors and start-ups to Fortune 100 companies – in matters concerning AI and machine learning, telecoms systems, the Internet of Things, Big Data technologies, blockchain, mobile and website applications, and other digital technology. She has been selected to the Washington, DC Super Lawyers ‘Rising Star’ list (2018–2021) and is included in the inaugural edition of Best Lawyers: Ones to Watch (2021), which recognizes extraordinary lawyers who have been in private practice for less than 10 years.
The Digital Twin in Action and Directions for the Future Noel Crespi, Adam T. Drobot, and Roberto Minerva
Abstract The Digital Twin is crucial and timely for positively affecting how we work, live, and play. It eliminates the gap between experimentation and learning by bridging real and virtual worlds in a powerful methodology, making significant headway in conquering previously unsolvable problems and challenges. Digital Twins are made possible by four widely deployed infrastructures for connectivity and communications, computing, digital storage, and sources of digital data. The Digital Twin provides insights, paths to innovation, efficient production of goods, improved delivery of services, better experiences and entertainment, and new business models. Investing in Digital Twins is one of the most valuable ways to create sustainable paths to the future. While Digital Twins bring concrete value today for a wide assortment of applications in different vertical domains, they are still in the early stages of evolution. The key considerations in the future of Digital Twins are the business cases and models that accompany Digital Twins, the mindset of the management, engineers, and technologists, the development of ecosystems in individual industries, and finally the trust of end-use beneficiaries. Digital Twins can be simple, but more than likely, where they bring transformative value they will be complex technologically and will require significant changes in organizational structures to be successful. From what we have learned, it is consequently important to understand that the commitment of an enterprise or a business to adopt Digital Twins as a methodology is a journey. It includes incentives for those involved, the development of human capabilities, significant acquisition of tools and infrastructure, the maturation of processes across the enterprise or business, and integration with a supporting ecosystem. The Digital Twin book is the most comprehensive work on the subject to date. It has brought together top practitioners, technical experts, analysts, and academics to explore and discuss the concept of the Digital Twin, its history, evolution, and the
profound impact across sectors of the global economy. The book addresses the business value, technological underpinnings, lessons learned from implementations, resources for success, practical approaches for implementation, and illustrative use cases. It makes the case for why we believe that Digital Twins will fundamentally transform major industries and enable us to fulfill important societal goals. Keywords Applications · Bibliography · Common aspects · Complexity · Definition · Deployment · Digital Twin · Diversity · Human factors · Implementation · Methodology · Multidomain · Organization · Technique · Value
1 Introduction Digital Twins are an important progression for ideas and methods that have been around for millennia. These are ideas about how we innovate, how we produce and operate goods and services, how we do so efficiently and reliably, and how we avoid unwelcome surprises in the way they serve us. Whenever a complex task to either do something or build something is undertaken, even if it is by one individual, it helps if we can capture the intent of the task and resolve and foresee the difficulties we will encounter. When more than one person is involved, it becomes equally important to share and communicate the intent with the team or organization performing the task and to partition the effort in an efficient way. Collective tasking also brings in the element of monitoring and managing activity to see that it leads to the intended outcome. This is even more crucial if the intent is not clearly defined or it is not obvious how the intent can be realized. Communications and discussion are essential to arrive at a consensus for both the intent and the methods used to perform the task. If a task is likely to be repeated, there is great value in capturing and recording the know-how of what worked and what did not. Once a task has been completed, the outcome creates either an experience that may be shared with others, or a physical entity that takes on a life of its own and may affect a significant number of people. In the case where the outcome is an experience, there is business and societal value in closing the loop by capturing information and knowledge, so that the next time the task is performed the result is more dependable and better than before. When the outcome is a physical entity, the value comes from closing the loop to improve the design by capturing how it performs in actual use, how it can be better operated, maintained, and upgraded, and how we can eliminate unwanted features and behaviours. Digital Twins fit the setting of taking on tasks as a general methodology for capturing the past, understanding the present, and predicting the future. What makes Digital Twins timely are four widely available infrastructures that make it possible to solve the hard problems we have not been able to address in the past. These are: Computing, Digital Storage, Connectivity and Communications, and lastly sources of Real Time Data and Information.
In the past, experimentation, physical scale models, empirical formulas, drawings, and texts were the principal tools to help us deal with the complexity of “industrial”, “societal”, and “personal” tasks. With the advent of the first wave of Information and Communication Technology beginning in the late 1950s, we found a new set of tools that led to significant improvement. The emphasis was on gains in productivity through automation of routine processes and routine tasks, and the introduction of digital simulation and modeling technologies in the early stages of a product’s or an experience’s lifecycle. Developments in technology have significantly changed what is possible today. The continued improvements in communication speeds, access to compute power, digital storage capacity, and the ability to ingest real-time data from multiple sources make it possible to take the next step. Digital Twins play a key role in harnessing the technology to solve the complex and difficult problems of efficiency and optimization, but most of all they give us the ability to innovate by shifting functionality to the digital domain – fundamentally changing the design and lifecycle of goods and services. They are the bridge between the real and virtual worlds. Digital Twins are not a stand-alone idea but are very closely tied to large-scale movements and supporting technologies. The movements include Digital Transformation, IT/OT Convergence, the Internet of Things, aspects of Cyber-Physical Systems, Industry 4.0, Smart Cities, and even the newest entry called the Metaverse. Some of the important technologies include Artificial Intelligence and Machine Learning, location-based services, Virtual and Augmented Reality, Photogrammetry, 3D Printing, Robotics, Broadband Communications, Cloud and Edge Computing, Automation and Autonomy, and many more. Digital Twins are very much a multi-disciplinary game that depends on the context, as well as on intuition and knowledge that comes from a wide range of application verticals. The material in the book addresses the business aspects of Digital Twins, important enabling technologies, the management and operation of Digital Twins, and concrete examples of their use and their promise in major industry segments and in use-case applications.
2 The Digital Twin as a Technique Driven by Context Perhaps the most mature adoption of the concept of Digital Twins comes from the industrial manufacturing community. Digital Twins were thought of as a methodology to supercharge the capabilities of Product Lifecycle Management (PLM) across the processes of major industrial enterprises. The concepts used there have been adapted to many uses, and Digital Twins today can be found in almost any sector. For simplicity we can characterise five high-level areas where there is significant adoption: (1) Manufacturing, (2) Products and Goods, (3) Services, (4) Processes, and (5) Abstract Concepts. The “nature” of what a Digital Twin is remains elusive to a clearly stated definition. Many researchers have attempted to provide a common and shared definition
of the Digital Twin concept by analysing scores of different definitions [1–4]. However, there is still no consensus on a complete and shared characterization. At a very simplistic level, a Digital Twin can be considered a set of data that describes and represents a physical object. While this may serve for an intuitive interpretation of the concept, it is not possible to build on top of it a consistent set of general propositions, commonly regarded as correct, that can be used as principles and lead to explanations or predictions for representing a complex physical object and its behavior. Even if the contextualization of the Digital Twin definition based on specific problem domains can help to understand and represent principles, rules, and properties, there is a lack of generality that may affect assumptions or the full applicability of the model. In spite of these difficulties, the increasing computing capabilities, the availability of real-time data, and the increasing precision of modelling techniques (either mathematical or based on Artificial Intelligence) make the realization and usage of Digital Twins a reality in several problem domains. An entire section of this book is devoted to the application of the Digital Twin to specific vertical problem domains. The definition, construction, and usage of the Digital Twin have followed a “bottom-up” approach, i.e., the need to represent and reflect the behavior of physical objects in specific contexts has led to the incremental refinement of effective models addressing and covering stringent issues in application domains. Digital Twin representations have been built in several areas and they are successfully used by many actors. These Digital Twins can support the entire Product Life Cycle or only a part of it. This introduces another level of uncertainty with respect to a full definition of the concept and the approach. A fully-fledged Digital Twin is certainly complex to build, and an incremental approach to its construction can be a good option for evaluating and assessing its merits. Once the Digital Twin construction has started, its extension to a bigger problem domain or to a larger coverage of the life cycle can be considered. This bottom-up and incremental approach leads to the intertwining of the DT properties with those of the problem domain and, sometimes, with the specific needs of the enterprise. The Digital Twin definition is then seen as the outcome of a long process of adjusting and tuning the representation to the problem domain issues as well as to the needs and specific attitudes of the organization building and using the concept.
3 The Expanding Range of Digital Twins Across Domains and Verticals Digital Twins have been applied in many fields. They have brought innovation and novelty as well as efficiency and optimization.
Type of Digital Twin – Examples of applications:
1. Manufacturing – Aerospace, agriculture, automotive, consumer, construction, defense, electronics, food, mining, shipbuilding, …
2. Products and goods – Aircraft, appliances, autonomous vehicles, computers, drones, trains, ships
3. Services – Education, entertainment, healthcare, personal care, recreation, retailing, smart homes, …
4. Processes – Airport security, billing and invoicing, pharmaceuticals, project management, refining, supply chains, warehousing, …
5. Abstract concepts – Art, economics, NFTs, aspects of research in science, the Metaverse, the social sciences, …
The use of replicas of physical objects for monitoring and anticipating or solving issues has a very long history. One of the first uses of the concept was at NASA, where different replicas of their aircraft were used to identify and solve possible problems that could occur during missions. The concept has since been applied to a range of projects and realizations in manufacturing. The Digital Twin applications are many and they cover different vertical industries. The landscape is huge and new applications are emerging. These applications can be grouped into a small set of categories: –– Construction, where the Digital Twin is used as an enabler for the design, testing, and operation of products and physical objects of different types and with many different functions. This group is the largest and encompasses manufacturing, building, and other activities characterized by the need to design, aggregate, operate, and control complex artefacts made of thousands of well-defined or standardized components and parts. –– Monitoring and Orchestration, where the Digital Twin collects the data of physical objects and represents their current status, aiming to fully understand the condition and context. Sometimes the applications falling in this category also have the ability to govern and orchestrate the Digital Twin by means of programmability in order to optimize the use of resources in complex domains. Smart City applications exemplify this type of application. There is an initial trend to apply the Digital Twin approach to large and complex networks. The so-called Digital Twin Networks aim at moving network management systems a step further. This application of the Digital Twin demonstrates the ability to represent and control processes. –– Servitization, where the Digital Twin is used as a means to provide services associated with a specific product. For instance, car sharing capabilities use a Digital Twin approach in order to temporarily associate a vehicle with a customer while still allowing the Service Provider to control and intervene in the usage of the car. It is expected that the use of forms of twinning will be common in the move from owning to renting products and objects. –– Simulation, where the Digital Twin is used to represent a physical object or an aggregation of objects under different conditions in order to evaluate how it or
they will react to various situations, some of which could be extreme or even faulty. In this case, the Digital Twin is used to assess the robustness and the limits of a product or solution. Two major applications fall in this category: the education Digital Twins and the Life Science Digital Twins. These are emerging applications designed for training people in an environment and with systems that are very close to reality. The main value of these applications is to qualify people to understand and predict the reactions of particularly complex systems. Obviously, some applications can show characteristics of different categories. For instance, e-health applications do not necessarily pursue educational goals, but they can achieve them while targeting the monitoring and safeguarding of people. The increasing interest in Extended Reality will most likely represent an additional push towards a wider use of the Digital Twin approach in several application domains, from gaming to medicine to further levels of automation. An interesting field of application of the Digital Twin is Cultural Heritage, in which physical objects can be represented, but in principle a Digital Twin could represent an idea or an artistic style. This leads to the representation of concepts and ideas as Digital Twins, which could open up a vast application domain.
4 The Value of the Digital Twin The value that Digital Twins bring has three major axes. The first is innovation, where the Digital Twin, fed by data of all forms (historical, operational, contextual, environmental, …), leads to insights and understanding and creates the pathway to novel designs, better products and services, and new functionality. It permits rapid tradeoffs and what-ifs that would be prohibitive to conduct in the real world. The second is to use the Digital Twin as part of the control loop for conducting operations, where the financial return comes from greater efficiency and optimization of complex flows and processes. The third is the avoidance of the cost of hazards/perils, driven by the need for predictability and safety, and the elimination of unwanted surprises. The Digital Twin is an obvious case of the dematerialization of objects. This approach has the potential of disrupting several markets. For instance, the ability to duplicate artifacts and their functions by software can lead to a loss of value of the original physical object. On the other hand, the properties of programmability and customization of the Digital Twin can promote a higher level of personalization and satisfaction of customer requirements. A powerful car can in fact accommodate the driving skills of the current user, or accommodate the preferred setup and tuning of the vehicle and its behavior. The Digital Twin approach will bring a multitude of possibilities and features to the final customer.
For service providers, the Digital Twin approach is extremely valuable, as the customer requirements can be better evaluated and the objects and related services can be tuned to the customers’ real needs. The Digital Twin will create a permanent link between manufacturers and their customers. The usage of the physical products can be continuously monitored by means of the Digital Twin, which can lead to moving from fault management to preventive management. From the perspective of a manufacturer, the Digital Twin will bring in the ability to move many activities (from design to testing) from the expensive “hardware” infrastructure to the cheaper software platform. With so many tests that can be performed by software, design errors can be avoided in new versions or new products. A part of the design and testing processes can be highly automated. This may result in quicker time to market and better satisfaction of customer requirements. The production costs and processes can be optimized and made more efficient and effective.
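A toy illustration of the shift from fault management to preventive management: the sketch below, with purely illustrative thresholds and field names, extrapolates the recent temperature trend mirrored into a twin and flags maintenance before a limit is reached, rather than reacting after a failure.

```python
# Minimal sketch (assumptions only): using twin telemetry to schedule
# maintenance before a fault occurs. Threshold, horizon, and readings
# are illustrative, not derived from any real asset model.
def needs_preventive_maintenance(temperature_history, limit=80.0, horizon=5):
    """Extrapolate the recent linear trend and flag if the limit may be crossed soon."""
    if len(temperature_history) < 2:
        return False
    recent = temperature_history[-horizon:]
    slope = (recent[-1] - recent[0]) / max(len(recent) - 1, 1)
    projected = recent[-1] + slope * horizon
    return projected >= limit

# Example: readings mirrored from the physical asset into its Digital Twin
readings = [61.0, 63.5, 66.2, 69.0, 72.1]
if needs_preventive_maintenance(readings):
    print("Schedule maintenance before the projected limit is reached.")
```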
5 The Great Diversity in Digital Twin Approaches Digital Twins have been used in a wide range of vertical sectors and application domains. Each of those in turn has very different requirements, unique needs, constraints, and considerations that drive outcomes, and belongs to a unique ecosystem. This drives the diversity that we see in Digital Twins and explains why we consider Digital Twins a methodology as opposed to a monolithic, well-defined framework. The actual approach in the implementation of a Digital Twin solution is strongly related to the business needs of several stakeholders. In certain cases, the Digital Twin representation has such proprietary content and value that it must be concealed and safeguarded from competitors and even users (e.g., the optimization in manufacturing processes). In other cases, the Digital Twin is an integral part of the product and it is fully presented and disclosed to the customers (e.g., in the case of servitization). In the case of representation of ideas and concepts, the related Digital Twin needs to be open and widely accessible, including its behavioral characteristics. Between these extremes, black box vs. fully open system, many different approaches are possible. The Digital Twin representation could be a good means of agreement and cooperation between a manufacturer and some component providers. Sharing the expected capabilities and requirements allows the providers to fine-tune their offerings and optimize them in terms of cost and quality. The Digital Twin can also be a good means for cooperation between manufacturers and service providers, as the market needs and the requests of end users can quickly be passed to the manufacturer. Internally to an enterprise, the Digital Twin can be used to synchronize the processes and reach the intended level of quality during the different phases of a product’s life cycle. In addition, a Digital Twin approach will more easily show the
issues and the inconsistencies among different groups and processes within the enterprise. On the other hand, adherence to the Digital Twin concept within an enterprise needs to be complete and be endorsed and pursued at all the hierarchical levels of the enterprise. If this is not the case, either some phases of the product life cycle will not benefit from this approach or the entire product will suffer from inconsistency and inefficiencies caused by bad processes.
6 The Common Aspects of Digital Twins Digital Twins typically exist at different levels of complexity and different levels of composition. They can be very simple stand-alone Digital Twins dedicated to a single objective for a stand-alone physical object or device. Even in this context they can be applied to any of the five types we mentioned previously, including abstract concepts, services, or processes. They can also have varying degrees of intelligence (smarts) in terms of autonomous behaviors. At the next level we can have compound/aggregate Digital Twins that are composed of smaller single-purpose Digital Twins. In this case there can be a significant variation in the complexity of the Digital Twins in the aggregation and added complexity from how the Digital Twins interact with each other. This level also implies coordination and orchestration of how the Digital Twins act in the time and space allocated to their physical counterpart. Beyond that, for complicated manufacturing regimes or sophisticated products, of which there may be a large number, the complexity can come from the distribution, sheer volume, and velocity of information necessary to feed the Digital Twin. As indicated earlier, the Digital Twin has been defined (and implemented) in different ways, motivated by the contingent issues of the applications’ domain. In spite of this prevalent approach, it is worthwhile to identify the aspects of the Digital Twin that are common throughout the different realizations and implementations. The basic features of a Digital Twin can be summarized as: –– Data availability – In order to be effective, a Digital Twin should operate on data that represent, in a timely manner, the status, the behavior, and the context of the physical object. Without meaningful and updated data, a Digital Twin is useless. This data should be well-formed, curated, and representative of the current status. In addition, historical data representing the past behavior of the physical object should be available and usable in order to be used for prediction and to identify previous issues and their solutions. –– Modelling – Data should be supported by models (AI and mathematical) that fully describe the characteristics and the expected behavior of the physical object. This is a fundamental requirement, because data availability alone cannot guarantee the correct interpretation of the status and behavior of an object. It should also be noted that the modelling should cover the representations of the
context and environments in which the physical object will operate. Good modelling could make use of the data, and in some cases it could help in uncertain cases or with outlier data. –– Communication – Data availability and modelling require that the physical object can send timely data regarding its status and behavior. In many cases, real-time communication is required for the correct usage of a Digital Twin. In other cases, communication can serve the goal of collecting data for improving the modelling and the representation of the physical object. These three features are those needed for building a Digital Twin. However, due to the increasing application of the Digital Twin to many domains, other features are emerging as especially valuable: –– Aggregation – A Digital Twin may represent a complex object or a simple one. In both cases, its usability and value increase if the Digital Twin can be “decomposed” or “aggregated”. The Digital Twin should be built in order to represent a specific physical object, and so that it can be integrated into a larger Digital Twin (still maintaining consistency with its physical object). On the other hand, a large Digital Twin should be easily decomposed into parts (simpler Digital Twins), each with its effective modelling and data. While aggregation poses many technical issues, it can be an important quality for multiple reasons. –– Programmability – The Digital Twin is a software representation; as such, it should come with interfaces and Application Programming Interfaces (APIs), in order to allow its functional integration into large software systems and the exploitation of its data and representation. In addition, it should offer the ability to create new software functions and capabilities in order to flexibly accommodate new requests and needs. A definition of a Digital Twin can then be formulated as a programmable software representation of a physical object, based on actual and updated data, whose behavior is determined by accurate models of the physical object tuned by the actual data. A Digital Twin should be designed in such a way that it can be aggregated into a more complex system while keeping the consistency of its behavior, or decomposed into smaller parts that interact in such a way that the global behavior of the object is consistent with that of the physical object.
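To make this working definition more tangible, the following Python sketch shows, under our own simplifying assumptions, how the features listed above can appear in code: a twin that ingests timely data, applies a (here deliberately trivial) model of expected behavior, exposes a programmable interface, and can be aggregated into a composite twin. All class and field names are illustrative; production Digital Twins are, as discussed below, far more complex distributed systems.

```python
# Minimal sketch of the common Digital Twin features discussed above.
# Names, state fields, and the toy model are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class DigitalTwin:
    name: str
    state: Dict[str, float] = field(default_factory=dict)        # data availability
    history: List[Dict[str, float]] = field(default_factory=list)
    model: Optional[Callable[[Dict[str, float]], Dict[str, float]]] = None  # modelling

    def ingest(self, measurement: Dict[str, float]) -> None:
        """Communication: receive timely data from the physical object."""
        self.history.append(dict(self.state))
        self.state.update(measurement)

    def predict(self) -> Dict[str, float]:
        """Programmability: expose the expected behavior through a software interface."""
        return self.model(self.state) if self.model else dict(self.state)

@dataclass
class CompositeTwin:
    """Aggregation: a larger twin composed of simpler, mutually consistent twins."""
    parts: Dict[str, DigitalTwin] = field(default_factory=dict)

    def predict(self) -> Dict[str, Dict[str, float]]:
        return {name: twin.predict() for name, twin in self.parts.items()}

# Illustrative usage: a pump twin whose toy model assumes a small temperature drift
pump = DigitalTwin("pump", model=lambda s: {**s, "temp": s.get("temp", 0.0) + 0.5})
pump.ingest({"temp": 61.2, "rpm": 1480.0})
plant = CompositeTwin(parts={"pump": pump})
print(plant.predict())
```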
7 Technical Complexity of Digital Twins The Digital Twin can be seen as a relatively simple software representation of a physical object. However, its usage for simple cases is of limited interest. The benefits of the Digital Twin representation are evident when the Digital Twin represents complex physical objects and their related environments. In addition, the Digital Twin may be used during different phases of an object’s lifecycle, and different actors may need
to have a specific view of it (i.e., a software interface). From this perspective, a DT is an aggregation point of different views and technologies. Data representation and modelling, simulation, programmability, AI, distribution and replication of parts in large networks, sensing and actuation capabilities, security and privacy: these and other aspects make the Digital Twin a complex distributed system that can be executed in different administrative domains, with various sorts of data and types of communications, producing and consuming a large quantity of data. The issues related to the computational capabilities associated with the Digital Twin can thus be exacerbated by the real-time requirements of many application domains. The design of a Digital Twin is also strongly associated with the representation and control of complex systems. These vary tremendously across applications. For instance, a Smart City Digital Twin represents a complex system such as a city, which is in itself a complex distributed system. This aspect can help to explain why the development of effective Digital Twin systems is a long-term investment and of necessity takes a long time to actually develop, and why each Digital Twin is often considered a technological achievement to be safeguarded from competitors.
8 The Impact of Digital Twins on Organizational Structure The diversity of Digital Twin applications also extends to the diversity of geographic and political settings where they may be used. This means that the implications for organizations must also follow unique requirements driven by the type of business that they are in, the markets they address, and where they operate their facilities or services. It is one thing to be a large organization like Amazon, Apple, Google, or Microsoft with a worldwide footprint and another to be a regional manufacturer that only operates within a single country. The Digital Twin represents and encompasses the relevant knowledge of one or more enterprises (the ecosystem). Its construction and tuning require a considerable amount of time, and thus it represents a considerable software development effort. Initial developments have taken the approach of building the computing and communication infrastructure on top of which to incrementally build a Digital Twin solution. The solution can then grow additively by extending the functions and covering more and more phases of the life cycle of the physical object. The enterprises that have taken this path have addressed the computing and communications issues for supporting the DT (for instance the creation or usage of cloud infrastructure), the design and implementation of the Digital Twin (a significant software effort), and the functional extension of the system to cover different lifecycle phases. The effort needed is huge, and without the right skills and outlook the risk of failure is high. In fact, the implementations are realized step by step in order to maintain focus and to progressively cover all the aims of the adoption of the Digital
Twin approach. For certain companies, the objective is a complete adoption of the DT in the lifecycle, for others it is only to use a DT to monitor certain aspects of important processes, and for still others it is an attempt to understand the applicability and the real benefits within a specific business domain. Recently, some major software actors (e.g., Amazon, Microsoft, Google) have been offering middleware and platforms supporting the Digital Twin approach, while simultaneously, open source solutions are emerging for creating Digital Twin applications (e.g., Eclipse’s Ditto, Fiware). This is a sign that, progressively, a mature software infrastructure (based on the edge-cloud continuum) can be used. Enterprises will be focusing more on the design of the Digital Twin functions related to a specific problem domain, while more general features will be supported by the middleware. The issue in this case is the creation of a dependency on the middleware provider and the loss of some flexibility in the timely adoption of advances in technology supporting the specific needs of the enterprise. The platform vendors will try to keep up with the need, but will follow their own development plans, which may not necessarily satisfy the enterprise’s needs. In any case, the rise of a market of platforms and tools supporting the development of Digital Twin solutions is a great indication of the interest of several industries, and a great boost for the definition and progress of the Digital Twin market.
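To illustrate what relying on such middleware can look like in practice, the hedged Python sketch below pushes a state update to a hypothetical twin-management platform over HTTP. The endpoint layout is only loosely modeled on the style of open “things” APIs such as Eclipse Ditto’s; every URL, identifier, credential, and payload shape in it is an assumption, not the documented interface of any specific product.

```python
# Illustrative sketch only: reporting a twin's state to a generic Digital Twin
# middleware over HTTP. The deployment URL, thing identifier, credentials, and
# payload structure are assumptions for illustration purposes.
import json
import urllib.request

BASE_URL = "https://dt-platform.example.com/api/2/things"   # hypothetical deployment
THING_ID = "org.example:pump-42"

def report_state(features: dict) -> int:
    payload = json.dumps({"features": features}).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/{THING_ID}",
        data=payload,
        method="PUT",
        headers={"Content-Type": "application/json",
                 "Authorization": "Basic ..."},  # placeholder credentials
    )
    # Network call: this will only succeed against a real, reachable platform.
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print("platform responded with HTTP",
          report_state({"temperature": {"properties": {"value": 61.2}}}))
```

The design concern raised above is visible here: once an enterprise’s twins speak a given platform’s API, moving to another middleware means rewriting exactly this integration layer.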
9 Adoption, Implementation, and Deployment of Digital Twins – The Importance of Human Factors The Digital Twin approach necessitates a change of attitude in an enterprise. It requires the precise and formal definition of models and accuracy in data acquisition. The approach is potentially pervasive in all the aspects, processes, and activities of an enterprise. As such it requires, at least, three drivers: –– An enhancement of software skills. As we have shown, the realization of a Digital Twin solution is a significant software endeavour. This requires knowledge of the latest technologies and techniques in distributed systems, data management, middleware platforms, and Artificial Intelligence tools, among other areas. Not all enterprises have the right set of skills. Consultants and external teams can help, but the Digital Twin is often a package that must be tailored for each company. A good approach is to adopt a new attitude towards the softwarization of the enterprise by hiring and educating the right staff. –– The constant support of the hierarchy of the enterprise. The approach needs to be fully supported within an enterprise, even when the implementation is step by step. The adoption of the Digital Twin approach is actually a move towards the complete softwarization of the company, and it should be supported at all the hierarchical levels. Continuous support from the executives and the top management is needed, even in cases of partial failures or issues. The approach is
really one of the highest levels of digital transformation, and as such, it is difficult and needs to be promoted and supported. –– The participation of the workers. This is probably the toughest issue to tackle. Many employees whose skills are central to an enterprise’s activities can perceive the transition to the Digital Twin as a loss of power. This is due to the need to “softwaritize” the skills and to transfer the knowledge of the problem domain into the modelling of the Digital Twin. This effort should be supported by motivating the people who are part of this transformation. It is very likely that some of the attempts to adopt a Digital Twin approach may fail or encounter huge problems. However, it is important to consider what could happen if a competitor succeeds in this transformation. The benefits of this change could be such that the enterprises that are slower to adopt it will suffer and lag behind the competition.
10 What We Learned from Writing This Book

Our objective in writing this book was to collect the important aspects of what it takes to convince an organization and its executives to adopt and internalize Digital Twins as a way of doing business. The book was meant to contain enough high-level material to serve as the starting point for a roadmap that could be followed, and as an exposure to the major issues. We feel that we partially achieved this objective. The book contains major sections that: (1) describe what a Digital Twin is, its history as a concept and its evolution, the business value, and the market for Digital Twins; (2) present the technical underpinnings of Digital Twins, addressing architectures, approaches for development, and the technologies that matter for maturing an organization's capability and capacity to adopt Digital Twins; (3) cover the operational aspects of Digital Twins, concentrating on how they are actually used in an enterprise setting, the requirements for organizational competency, and the impact on organizational structure; (4) share a diverse set of experiences from actual Digital Twin deployments as well as aspirational uses of Digital Twins; and (5) offer a future view of how Digital Twins are likely to develop and the constraints they face from legal and regulatory considerations.
The book began as a small number of issues and related chapters. With time and discussion among the authors, the needs and the extent of the subjects related to the Digital Twin led to a much larger effort. The major lesson learned is that Digital Twin adoption is complex and deserves appropriate preparation and an appropriate collection of skills. From a technological perspective, the Digital Twin poses a double challenge: to adopt, integrate, and develop the right tools and functions to support a robust platform; and to develop specific tools, solutions, and functions that reflect the needs and requirements of the problem domain. For both of these challenges, many current technologies are not yet adequate, and
there is a need to adopt state-of-the-art solutions in data management, distributed computing, AI, and security. The Digital Twin is a great challenge, and the benefits of its adoption could be huge. This book presents the basis for a strong technological evolution in support of the Digital Twin. Another important aspect is the applicability of the Digital Twin. We started with a few meaningful use cases and then found ourselves with extensive descriptions of them; many did not find their way into publication. This serves to reinforce how the Digital Twin concept, beyond the hype around it, has the potential to change the way software is implemented to solve demanding problems in several vertical markets. We learned that education and even cultural heritage can benefit from this approach. A final remark is that the Digital Twin concept is capable of integrating different areas and cross-fertilizing them. The most fruitful discussions during the preparation of the book were those in which experts from different problem domains found commonalities in their usage of the Digital Twin.
11 Getting the Most Out of Digital Twins

Like all technologies, the Digital Twin should serve a purpose. Before embarking on a Digital Twin solution, it is important to understand why it is needed and how to realize it. Finding convincing answers to the "why" question is fundamental for the successful implementation of this approach. The book has presented a set of technologies and a large number of use cases. Readers may find use cases that are interesting and relevant to their business, and the technology chapters can then help them identify the right tools and steps to take. Many potential use cases are not presented in the book; nevertheless, readers may still find technologies, and suggestions on how to use them, for implementing a convincing approach by means of the Digital Twin. We have also included in this chapter an extensive bibliography [5–46] of other books on Digital Twins and on closely related subjects. As a final remark, it is important to say that the Digital Twin is a very powerful instrument for fully exploiting the available data and for modelling them to represent the intended behaviour of physical objects and products. The possibilities for exploiting this combination are huge, but they must be framed in a business perspective to offer advantages to adopters.
References

1. Negri, E., Fumagalli, L., & Macchi, M. (2017). A review of the roles of Digital Twin in CPS-based production systems. Procedia Manufacturing, 11, 939–948.
2. VanDerHorn, E., & Mahadevan, S. (2021). Digital Twin: “Generalization, characterization and implementation”. Decision Support Systems, 145, 113524. 3. Wright, L., & Davidson, S. (2020). How to tell the difference between a model and a Digital Twin. Advanced Modeling and Simulation in Engineering Sciences, 7(1), 1–3. 4. Liu, M., Fang, S., Dong, H., & Xu, C. (2021). Review of Digital Twin about concepts, technologies, and industrial applications. Journal of Manufacturing Systems, 58, 346–361. 5. Adra, H. (2018). Success with simulation: A definitive guide to process improvement success using simulation for healthcare, manufacturing, and warehousing, Self-Published, 290 Pages, (December 23, 2018). ISBN-13: 978-1732987807. 6. Ambra, T. (2020). Synchromodal transport and the physical internet: The cornerstones of future long-distance digital twin applications, Crazy Copy Centre Productions, 197 pages (April 21, 2020). ISBN-13: 978-9493079397. 7. Armendia, M., Guipuzcoa, M., Ozturk, E., & Peysson, F. (Eds.). (2019). Twin-control: A digital twin approach to improve machine lifecycle (p. 312). Springer. ISBN-13: 978-3030022020. 8. Armstrong, E. T. (2021). Productize: The Ultimate Guide to Turning Professional Services into Scalable Products (p. 172). Vectris. ISBN-13: 978-1736929612. 9. Auer, M. E., & Ram, B. K. (Eds.). (2020). Cyber-physical systems and digital twins. In Proceedings of the 16th international conference on remote engineering and virtual instrumentation (Bangalore, India 3–6 February 2019). Lecture Notes in Networks and Systems Book 80; Springer. 862 Pages. ISBN-13: 978-3030231613. 10. Barachini, F., & Stary, C. (2022). From digital twins to digital selves and beyond: Engineering and social models for a trans-humanist world. Springer. 148 pages, March 31, 2022. [ISBN-13: 978–3030964115]. 11. Bascur, O., O’Rourke, J., & Contributing Editor (Eds.). (2020). Digital Transformation for the Process Industries: A Roadmap. CRC Press. 320 pages, October 28, 2020. [ISBN-13: 978-0367222376]. 12. Blokdyk, G. (2018). Digital Twin: A complete guide. 5StarCooks – The Art of Service. 296 Pages. ISBN-13: 978-0655519218. 13. Blokdyk, G. (2020). Digital twin oil and gas a complete guide. The Art of Service. 312pages (November 17, 2020). ISBN-13: 978-1867436737. 14. Bornet, P., Barkin, I., & Wirtz, J. (2020). Intelligent automation: Learn how to harness artificial intelligence to boost business and make our world more human. Independently Published, Kindle Edition. 26068KB (Print length 460pages), October 14,2020. ASIN: B08KFLY51Y. 15. Chaudhary, G., Khari, M., & Elhoseny, M. (Eds.). (2021). Digital Twin Technology (1st ed.). CRC Press. 252pages (October 6, 2021). [ISBN-13: 978–0367677954]. 16. Chen, K. (2021). Digital Twin. Royal Collins Publishing Company. 324pages (June 30, 2021). ISBN-13: 978–9811200731. 17. Del Giudice, M., & Osello, A. (Eds.). (2021). Handbook of research on developing smart cities based on digital twins. IGI Global, Advances in Civil and Industrial Engineering. 674pages (January 15, 2021) ISBN-13: 978-1799870913. 18. Dieck, M. C. T., Jung, T. H., & Loureiro, S. M. C. (Eds.). (2021). Augmented reality and virtual reality: New trends in immersive technology (1st ed.). Springer. 325pages (May 5, 2021). [ISBN-13: 978–3030680855]. 19. Dungey, D. (2022). Digital twin technology: Twins digital technology and industries. Independently Published. 44 pages. January 4, 2022. [ISBN-13: 979-8795505428]. 20. Dziosa, P. (2018). 
Digital Twins for Innovative Service and Business Models: Upcoming opportunities for products, services and entrepreneurs. AV Akademikerverlag. (October 16, 2018) ISBN-13: 978-6202218467. 21. Elangovan, U. (2021). Industry 5.0: The future of the industrial economy. CRC Press. 127 pages, December 28, 2021. [ISBN-13: 978-1032041278]. 22. Farsi, M., Daneshkhah, A., Hosseinian-Far, A., & Jahankhani, H. (Eds.). (2020). Digital twin technologies and smart cities. Springer Series on Internet of Things. 212 Pages (2020). ISBN-13: 978-3030187316.
23. Fortin, C., Rivest, L., Bernard, A., & Bouras, A. (Eds.). (2020). Product lifecycle management in the digital twin era. Springer, IFIP Advances in Information and Communication Technology (Book 56) Press. 426 Pages (2020). ISBN-13: 978-3030422493. 24. Herwig, C., Pörtner, R., & Möller, J. (Eds.). (2021). Digital twins: Applications to the design and optimization of bioprocesses. Springer, Advances in Biochemical Engineering/ Biotechnology, Book 177 1st ed. 261pages (May 27, 2021). ISBN-13: 978-3030716554. 25. Iyer, A. (2017). Digital twin: Possibilities of the new digital twin technology. Academic. 35 Pages (2017). ASIN: B077LN1LD7. 26. Jardine, A. K. S., & Tsang, A. H. C. (2021). Maintenance, replacement, and reliability: Theory and applications (3rd ed.). CRC Press. 412 pages, September 16, 2021. [ISBN-13: 978–0367076054]. 27. Kamilaris, A., Wohlgemuth, V., Karatzas, K., & Athanasiadis, I. N. (Eds.). (2020). Advances and new trends in environmental informatics: Digital twins for sustainability. Springer; Progress in Information Science, 1st ed. 279 pages (December 17, 2020) ISBN-13: 978-3030619688. 28. Khaled, N., Pattel, B., & Siddiqui, A. (2020). Digital twin development and deployment on the cloud. Academic. 592 Pages (2020). ISBN-13: 978-0128216316. 29. Kravets, A. G., Bolshakov, A. A., & Shcherbakov, M. V. (Eds.). (2021). Cyber-physical systems: Digital technologies and applications. Springer, Studies in Systems, Decision and Control, Book 350 1st ed. 404pages (April 14, 2021) ISBN-13: 978-3030678913. 30. Kühn, W. (2019). Handbook of digital enterprise systems: Digital twins, simulation and AI. World Scientific Press. 248 Pages. ISBN-13: 978-9811200731. 31. Laudon, K. C., & Laudon, J. P. (2019). Management information systems: Managing the digital firm. Pearson, 648 Pages. ISBN-13: 978-0135191798. 32. Masrour, T., El Hassani, I., & Cherrafi, A. (Eds.). (2020). Artificial intelligence and industrial applications: Artificial intelligence techniques for cyber-physical, digital twin systems and engineering applications. Springer, Lecture Notes in Networks and Systems, Book 144, 1st Edition, 344pages. (July 19, 2020). ISBN-13: 978-3030539696. 33. McCarthy, H. (2021). Digital twin use cases: Application of digital twins. Kindle Edition. 7587KB, (July 11, 2021) ASIN: B0997JB4T4. 34. Nath, S. V., Dunkin, A., Chowdhary, M., & Patel, N. (2020). Industrial digital transformation: accelerate digital transformation with business optimization, AI, and industry 4.0. Packt Publishing. 426pages (November 27, 2020) ISBN-13: 978-1800207677. 35. Nath, S. V., & van Schalkwyk, P. (2021). Building industrial digital twins: Design, develop, and deploy digital twin solutions for real-world industries using Azure digital twins. Packt Publishing. 286 Pages (November 2, 2021). ISBN-13: 978-1839219078. 36. Ong, S. K., & Nee, A. Y. C. (Eds.). (2021). Digital twins in industry. MDPI AG. 244 pages (November 12, 2021) [ISBN-13: 978-3036518008]. 37. Pal, S. K., Mishra, B., Pal, A., Dutta, S., Chakravarty, D., & Pal, S. (2021). Digital twin – Fundamental concepts to application in advanced manufacturing (1st ed.). Springer. 501pages (August 14, 2021). [ISBN-13: 9783030818142]. 38. Raghunathan, V., & Barma, S. D. (2020). Digital twin: A complete guide for the complete beginner. ASIN: B081CKGR5Q. 56 Pages. 39. Raj, P., & Evangeline, P. (Eds.). (2020). The digital twin paradigm for smarter systems and environments: The industry use cases. Advances in Computers, Academic Press, Science Direct, 117(1), 368. 
ISBN-13: 978-0128187562. 40. Rehak, A. (2021). Additive knowledge: Everything you need to know about 3D printing, 3D scanning, and 3D modeling. Kindle Edition. 485 pages, June 20, 2021. [ASIN: B097NP1DHC]. 41. Shanmuganathan, V., Kadry, S., Vijayalakshmi, K., Subbulakshmi, P., & Julie, G. (Eds.). (2022). Deep learning for video analytics using digital twin. River Publishers. 350 pages (January 30, 2022). [ISBN-13: 9788770226622]. 42. Stjepandić, J., Sommer, M., & Denkena, B. (Eds.). (2021). DigiTwin: An approach for production process optimization in a built environment (1st ed.). Springer. 271 pages (August 24, 2021). [ISBN-13: 978-3030775384].
43. Tao, F., Liu, A., Hu, T., & Nee, A. Y. C. (2020). Digital twin driven smart design. Academic. 358 pages. ISBN-13: 978-0128189184. 44. Tao, F., Zhang, M., & Nee, A. Y. C. (2019). Digital twin driven smart manufacturing (p. 282). Academic. ISBN-13: 978-0128176306. 45. Tao, F., Qi, Q., & Nee, A. Y. C. (Eds.). (2022). Digital twin driven service. Academic Press. 326 pages, March 31, 2022. [ISBN-13: 978-0323913003]. 46. Tarkhov, D., & Vasilyev, A. N. (2019). Semi-empirical neural network modeling and digital twins development (1st ed., p. 288). Academic Press. (December 6, 2019) ISBN-13: 978-0128156513.

Prof. Noel Crespi holds Master's degrees from the Universities of Orsay (Paris 11) and Kent (UK), a diplôme d'ingénieur from Telecom Paris, and a Ph.D. and an Habilitation from Sorbonne University. From 1993 he worked at CLIP and Bouygues Telecom, and then at Orange Labs from 1995. In 1999, he joined Nortel Networks as telephony program manager, architecting core network products for the EMEA region. He joined Institut Mines-Telecom, Telecom SudParis in 2002 and is currently Professor and Program Director at Institut Polytechnique de Paris, leading the Data Intelligence and Communication Engineering Lab. He coordinates the standardization activities for Institut Mines-Telecom at ITU-T and ETSI. He is also an adjunct professor at KAIST (South Korea), a guest researcher at the University of Goettingen (Germany), and an affiliate professor at Concordia University (Canada). His current research interests are in Softwarization, Artificial Intelligence, and the Internet of Things. http://noelcrespi.wp.tem-tsp.eu/.

Adam Drobot is an experienced technologist and manager. His activities include strategic consulting, start-ups, and industry associations. He is the Chairman of the Board of OpenTechWorks, Inc. and serves on the boards of multiple companies and non-profit organizations. These include Avlino Inc., Stealth Software Technologies Inc., Advanced Green Computing Machines Ltd., Fames USA, and the University of Texas Department of Physics Advisory Council. In the past he was the Managing Director and CTO of 2M Companies, the President of Applied Technology Solutions, and the CTO of Telcordia Technologies (Bellcore). Prior to that, he managed the Advanced Technology Group at Science Applications International (SAIC/Leidos) and was the SAIC Senior Vice President for Science and Technology. Adam is a member of the FCC Technological Advisory Council, where he recently co-chaired the Working Group on Artificial Intelligence. In the past he served on the Board of the Telecommunications Industry Association (TIA), where he chaired the Technology Committee; the Association for Telecommunications Industry Solutions (ATIS); the US Department of Transportation Intelligent Transportation Systems Program Advisory Committee; and the University of Michigan Transportation Research Institute (UMTRI) External Advisory Board. He has served in multiple capacities within IEEE, including as Chair of the IEEE Employee Benefits and
Compensation Committee, as a member of the IEEE Awards Board, and as a member of the IEEE Industry Engagement Committee. In 2017 and 2018 he chaired the IEEE Internet of Things Initiative Activities Board, and he has been a General Co-Chair of the IEEE World Forum on the Internet of Things since 2018. He has published over 150 journal articles and holds 27 patents. In his professional career he was responsible for the development of several major multi-disciplinary scientific modeling codes and also specialized in developing tools and techniques for the design, management, and operation of complex scientific facilities, discrete manufacturing systems, and large-scale industrial platforms, for both government and industry. His degrees include a BA in Engineering Physics from Cornell University and a Ph.D. in Plasma Physics from the University of Texas at Austin.

Roberto Minerva, associate professor at Telecom SudParis, Institut Polytechnique de Paris, holds a Ph.D. in Computer Science and Telecommunications and a Master's degree in Computer Science. He currently conducts research in the areas of Edge Computing, the Digital Twin, the Internet of Things, and Artificial Intelligence applications. During 2014–16, he was the Chairman of the IEEE IoT Initiative. For several years, Roberto was a research manager for Advanced Software Architectures at TIMLab. He is the author of several papers published in international journals, conferences, and books.
Index
A Accountability, 200, 307, 311, 937, 1160–1162, 1192 Acquisition and sensing in vivo, 1051, 1055 synesthesia, 1053 Activities community development, 536–539 projects, 536 Actuation, 7, 130, 205, 206, 314, 315, 406, 442, 1210 Actuators, 184, 209, 212, 219, 220, 301, 315, 338, 339, 341, 342, 351, 352, 378, 379, 425, 777, 1009, 1141, 1163 Adaptive authoring, 453–455 Additive manufacturing, 27, 600, 620, 635, 741, 746, 784, 788, 988, 989, 1009, 1151 Adoption, 16, 17, 24, 25, 27, 34, 44–53, 58, 65–79, 86, 91, 156, 200, 211, 216, 218, 232, 246, 253, 254, 258, 260, 261, 263, 268, 272, 274, 323, 327, 330, 332, 372, 373, 379, 383, 409, 411, 412, 421, 454, 455, 460–462, 464, 466, 468, 474, 499, 517, 601, 607, 609, 610, 622, 647, 648, 653, 704, 775, 787, 796, 802, 826, 827, 840, 909, 920, 921, 926, 936, 960, 972, 1012–1014, 1016, 1038, 1116, 1120, 1140, 1141, 1148, 1151, 1153, 1157, 1158, 1160, 1164, 1170, 1171, 1203, 1210–1213 Aerospace, 67, 71, 103, 127, 206, 241–243, 578, 581, 585, 586, 597, 602, 603, 607, 618, 620, 635, 645, 646, 649, 652, 680, 744, 750, 755–756, 769, 852, 863, 885,
890, 892, 895, 897, 953, 1001, 1093, 1148, 1205 Aggregation, 10, 32, 33, 41, 42, 46, 105, 106, 132, 187, 188, 195, 220, 261, 270, 271, 301, 302, 310, 401, 403, 407, 418, 640, 666, 668, 708, 710, 1050, 1052, 1055, 1056, 1076, 1108, 1163, 1205, 1208–1210 Agile development, 171, 177 Agility, 158, 161, 295, 342, 419, 615, 648, 736, 756, 764, 765, 768, 770, 782 Algorithms, 15, 161, 176, 178, 183, 185, 248, 249, 256, 259, 302–306, 310–312, 314, 315, 320, 323, 326, 327, 330, 386, 436, 444, 450, 573, 581, 588, 590, 594, 595, 625, 626, 628, 638, 640, 642, 649, 666, 686, 699, 739, 791, 793, 821, 829, 838, 945, 1010, 1011, 1029, 1064, 1065, 1069, 1072, 1119, 1123–1125, 1142, 1162, 1168, 1169, 1175, 1181, 1197, 1198 Analytics, 26, 27, 45, 75, 78, 79, 86, 87, 90, 158–160, 163, 164, 173, 177, 201, 207, 208, 211, 214–220, 230, 244, 259, 262, 264, 266, 269, 304, 327, 398, 403–404, 411, 412, 418, 424, 425, 498, 502, 516, 565, 578, 592, 594–596, 602, 628, 630, 631, 635, 637–640, 644, 650, 652, 678–680, 682, 683, 685, 694, 700, 715, 716, 721, 722, 724, 740, 747, 769, 782, 794–796, 923, 934, 935, 942, 945, 950, 951, 959, 977, 989, 990, 993, 994, 1010, 1011, 1016, 1017, 1063–1065, 1069–1072, 1140, 1168, 1173, 1198
1220 Analytics (cont.) data, 208, 211, 218, 220, 230, 304, 327, 418, 578, 592, 594–596, 631, 635, 637, 640, 644, 721, 782, 794, 796, 945, 950, 977, 989, 990, 993, 994, 1010, 1011, 1016, 1017, 1063–1065, 1069–1072, 1140 Apache Software Foundation (ASF), 546, 547 Application domains, 4–6, 9, 11, 13, 17, 18, 214, 259, 274, 275, 301, 311, 312, 357, 397, 405–406, 517, 1114, 1204, 1206, 1207, 1209, 1210 Application Programming Interface (API), 137, 138, 140, 143, 147, 208, 212, 216, 217, 263, 354, 357, 359, 513, 514, 518, 523, 716, 1161, 1209 Architecture buildings, 126–129, 133, 146–148 component, 126, 129 deployment, 127, 129 functional, 126, 129, 130, 356–357 interaction, 127 Art digital, 44, 1098 Artifact physical, 31, 32, 49, 53, 54, 114, 118, 300, 301 virtual, 44 Artificial Intelligence (AI) explainable, 650 trustworthy, 200 Artificial Neural Network, 304 Asset management, 87, 194, 201, 628, 759, 976 Asset utilization, 71, 73, 736, 753 Assimilation data, 290, 293 Attack direct access, 376 eavesdropping, 376 malware, 376, 377 multi-vector, 376 phishing, 376 polymorphic, 376 reverse engineering, 376 spoofing, 377 tampering, 377 Attitude, 16, 17, 327, 332, 398, 1204, 1211 Augmentation, 135, 218, 307, 311, 312, 465, 481, 482, 1143, 1146, 1152, 1155–1156, 1159 Augmented reality (AR)
Index adoption, 454, 455, 460–462, 464, 466 content authoring, 448–455, 478, 480 non-industrial, 458–460 spatial (SAR), 456, 459, 463, 465, 475 training, 447–486 use cases in manufacturing, 457–458 Authoring tools classification, 448–449 Automation data, 681–682 manufacturing, 127, 680 Automotive, 18, 67, 88–90, 100, 240, 245, 338, 403, 406, 410–415, 417, 614, 635, 649, 714, 750, 752–754, 769, 773–796, 852, 860–862, 886, 895, 897, 1001, 1138, 1147, 1148, 1150, 1151, 1163, 1205 Autonomy, 30, 212, 302, 406, 412, 415, 602, 607, 628, 638, 639, 651, 653, 774, 778–781, 784, 1035, 1037, 1140, 1141, 1145–1146, 1157, 1160–1163, 1203 Availability, 5, 14, 17, 18, 44, 115, 173, 190, 191, 209, 212, 250, 251, 258, 262, 263, 269–271, 295, 312, 318, 322, 323, 326, 327, 343, 344, 366, 375, 376, 383, 400, 402, 404, 419, 421, 423, 473, 508, 511, 516, 594, 597, 602, 611, 629, 638, 640, 647, 650, 652, 662, 667, 673, 686, 754, 795, 804, 806, 810, 829, 830, 863, 972, 976, 992, 994, 1010, 1055, 1148, 1158, 1168–1182, 1198, 1204, 1208, 1209 data, 322, 323, 1208, 1209 Awareness context, 212, 449, 453–455, 704, 1141, 1143, 1160, 1161 situation, 11, 220, 300, 302, 312, 323, 327, 328, 603, 604, 653, 690, 711, 721, 767, 811, 1160 B Backdoor, 376 Bandwidth variable, 442 Behaviors, 7, 49, 89, 108, 234, 284, 300, 399, 516, 567, 579, 614, 678, 718, 801, 935, 1017, 1031, 1208 Benefits, 4, 40, 65, 126, 154, 184, 207, 231, 254, 294, 308, 342, 369, 408, 447, 502, 533, 565, 585, 605, 661, 678, 704, 748, 774, 800, 856, 904, 972, 1028, 1083, 1115, 1159, 1170, 1208 Berger, H., 1067
Index Best practices, 4, 12, 14, 261, 327, 366, 382, 497–528 Biometrics, 1041, 1049, 1053, 1138 Blockchain, 27, 28, 30, 35, 40, 43, 54, 55, 93, 215, 218, 273, 572, 919, 1038, 1099–1101, 1168, 1183, 1194, 1198 Boeing 777, 892, 896 Brain-Computer Interface (BCI), 1047, 1053, 1055, 1058, 1059, 1066–1076 Brand, 41, 99, 306, 338, 542, 544, 777, 785, 1178, 1181 Building blocks, 90, 126, 146–148, 158–160, 207, 214, 219, 397, 537, 684, 755, 789, 829, 862, 904, 909–911, 935, 947, 948 Building Information Modelling (BIM), 35, 127, 161, 339–341, 343, 680, 940, 963, 964, 999, 1014, 1015, 1017, 1096, 1147, 1150, 1151 Business metrics, 154, 170 models, 11, 18, 23, 24, 26, 28, 29, 32–34, 39–44, 52, 57, 58, 60, 157, 167, 169, 182, 183, 186, 199, 221, 231, 234, 295, 338, 391, 396, 516, 542, 543, 600, 610, 618, 663, 664, 666, 779, 783, 795, 800, 834–835, 855, 860, 863, 882, 895, 898, 932, 957, 960, 1016, 1038–1039 performance, 81, 645, 741, 850–852, 854–857, 859, 882 value, 154, 161, 170, 178, 258, 498, 522, 541, 737, 763, 764, 766, 769, 1212 value in the enterprise, 34, 602, 647, 648, 650 value in manufacturing, 763–766 C Caution(s), 572–573, 906, 920, 936–938, 1193 Chevrolet Volt, 862 Christensen, C., 26, 27, 35, 59, 543 Cities smart, 6, 7, 9–11, 18, 86, 103, 128, 145, 211, 274, 280, 306, 314, 338, 339, 347–348, 353, 388, 390, 405, 498, 512, 516, 901–965, 1037, 1084, 1143, 1145, 1151, 1153–1154, 1159, 1163, 1203, 1205, 1210 City Challenges, 908–910, 933 Civil Engineering Research Foundation (CERF), 853, 854, 882, 884–886, 897 Clean Energy and Smart Manufacturing Innovation Institute (CESMII), 522 Clinical decisions, 1026–1027
1221 Cloud analytics, 244, 782, 794–796 computing, 143, 228, 280, 294, 360, 406, 408, 413, 419, 426, 602, 652, 940, 945, 1203 storage, 27, 47, 273, 413 Cognition and control, 1050 Cognitive Intelligence, 305 Collaboration, 30, 52, 87, 231, 234, 239–241, 243, 255, 338, 343, 345, 403, 410, 417, 430, 458, 505, 515, 517, 522, 532, 533, 541, 542, 546–554, 602, 604, 610, 615, 634, 648, 781, 795–796, 801, 802, 804, 806, 809, 811, 819, 822, 826, 829, 832, 835, 838, 841–843, 845, 854–857, 859, 884, 891, 892, 899, 927, 933, 934, 936, 942, 943, 995, 1008, 1032, 1036, 1083, 1086, 1088, 1160, 1184 Comanche RAH66, 619 Command, 130, 138, 145, 198, 210, 217, 255, 314, 315, 371, 373, 374, 379, 381, 391, 400, 417, 434, 457, 478, 479, 483, 485, 516, 606, 665, 808, 829, 1076 Communications, 8, 27, 101, 125, 183, 205, 234, 254, 314, 373, 397, 434, 500, 538, 568, 603, 665, 683, 706, 785, 829, 911, 997, 1046, 1084, 1117, 1143, 1171, 1202 Communities, 5, 12, 52, 84, 99, 128, 200, 213, 257, 260, 261, 274, 371, 375, 381, 419, 483, 499, 507, 515, 517, 520, 527, 532, 534–544, 547–554, 628, 634, 679, 912, 916, 917, 919, 920, 922, 926, 927, 930, 934–936, 939–942, 945, 946, 949, 952, 953, 957, 958, 1002, 1008, 1024, 1025, 1027, 1031, 1033, 1085, 1154, 1171, 1203 Community development projects, 536–538 Complement projects, 541–542 Complexity, 5, 9, 12, 24, 36–38, 44, 45, 52, 58–60, 105, 112, 162, 163, 173, 178, 207, 220, 222, 227–251, 280–282, 285, 287, 291, 292, 300, 305, 306, 309, 311, 312, 323, 328, 331, 332, 345, 358, 372, 389, 398, 400, 404, 437–440, 445, 453–456, 471, 472, 483, 508, 539, 562–566, 607, 695, 698, 740–742, 749, 777–779, 781, 785, 790, 791, 793, 794, 796, 800, 801, 804, 823, 845, 861, 876, 879, 880, 904, 907–909, 921, 932, 935, 940, 949, 972, 1001, 1038, 1055, 1125, 1153, 1158, 1203, 1208–1210
1222 Complex systems age, 559–573 Compliance, 184, 235, 341, 388, 608, 611, 614, 629, 719, 736, 737, 741, 751, 765, 784, 988, 993, 1014, 1015, 1017, 1168, 1185, 1187–1194, 1198 Composition, 10, 18, 37, 55, 92, 131, 145, 158, 160–167, 213, 285, 308, 341, 342, 348, 388, 390, 396, 397, 403, 470, 514, 594, 777, 810–812, 816, 819, 941, 1033, 1065, 1103, 1172, 1173, 1182, 1208 Computational capabilities, 600, 602, 603, 1210 divide, 560–562 Computational fluid dynamics (CFD), 236, 241, 616, 620–622, 784, 792, 979, 1004, 1006 Computer-Aided Design (CAD), 113, 602, 800, 802, 812, 821, 826, 858, 891, 892, 986, 1148 Computing embedded, 402, 408 Concerns physiological, 467–468 psychological, 467–468 Concorde, 860, 891 Condition-based maintenance, 641, 992, 1013, 1119, 1120, 1123–1124, 1129 Confidentiality, 211, 366, 375, 376, 645, 647, 648, 655, 1170, 1176, 1183, 1184, 1193 Configuration data, 415, 416, 434, 781, 795 Connectedness, 37, 437 Connectivity intelligent, 1046 pervasive, 1046 Construction industry, 86, 850–852, 856, 865, 875, 878–880, 882, 883, 886–889, 892, 897 manufacturing, 849–898 nuclear power, 972, 973, 975–977, 980–989, 999, 1001, 1014–1016 processes, 810, 856, 858, 862, 865, 870, 877, 879, 880, 890, 893, 897 productivity, 850, 887 Consumer goods, 67, 68, 171, 1153 Context-aware, 212, 449, 453–455, 704, 1141, 1143, 1160, 1161 Continuous improvement, 328, 330, 403, 415, 637, 640, 645, 653–655, 741, 1116 Contracts, 30, 35, 43, 54, 200, 535, 545, 647, 746, 761, 806, 808–810, 820, 835, 836, 869, 990, 1014, 1100, 1168, 1177, 1181, 1184–1187, 1194, 1198
Index Convergence, 206, 406, 411, 418, 551, 588, 603, 737, 781, 800, 809, 817, 838, 849–898, 953, 1046, 1203 Copyright, 532, 533, 542, 1099, 1169, 1173, 1174, 1176–1178, 1181, 1183, 1185, 1198 Core value proposition projects, 542–544 Costs, 8, 27, 67, 104, 126, 162, 182, 210, 237, 254, 280, 326, 358, 405, 442, 457, 502, 533, 560, 578, 601, 662, 679, 704, 736, 775, 803, 850, 907, 972, 1024, 1058, 1096, 1115, 1141, 1181, 1206 Cross-domain, 230, 232, 338–341, 344, 514, 565, 604, 652, 781, 791, 796 Cultural heritage definition, 1082 Cultural tourism, 1107 Curation data, 273, 397, 398, 608 Customer relationship management (CRM), 637 Cyber-physical security nuclear power, 981, 990, 997 Cyber-Physical Systems (CPS), 126, 130, 137, 138, 140, 205, 210, 255, 337–360, 405, 419, 501, 514, 600, 603–605, 607, 614, 617, 618, 628, 649, 650, 1119, 1190, 1203 Cybersecurity, 220, 365–391, 718, 937, 997, 1168, 1190–1192 D Damage accumulation, 582, 584, 588, 591, 593, 596 analysis, 994 component, 994 propagation, 590, 591 Dashboards, 212, 213, 219, 345, 664, 666, 740, 747, 789, 949 Dassault Systems, 85, 128, 737, 854, 870, 871, 874, 882, 884–886, 888, 891, 892, 897, 924, 927, 941, 942, 1025, 1026, 1029, 1030, 1032, 1033, 1041 Data brokering, 274 collective, 17, 254, 256, 261, 262, 264, 265, 269, 275, 341, 375, 378, 408, 502–506, 516, 522, 523, 581, 637, 640, 649, 664, 708, 729, 768, 938, 988, 989, 1035–1036, 1038, 1051, 1065, 1076, 1173, 1189
Index distributed, 274, 411, 415, 417–419, 421, 423–424 driven, 67, 74, 182, 186, 255, 268, 281, 283–286, 322, 327, 402, 614, 622, 630, 632, 637, 663, 783, 953, 1011, 1034 environmental, 312 functional specification, 500 geometric design, 500 historical, 7, 15, 32, 130, 137, 196, 206, 259, 264, 304, 310, 313, 318, 321, 581, 592, 748, 944, 945, 992, 1013, 1208 inferred, 313 measured, 126, 283, 312, 978, 993, 995, 1012 patient, 521, 1033, 1034, 1036, 1076 personal, 500, 917, 1035, 1152, 1189, 1192 product, 133, 134, 238, 241, 500, 752, 781, 987, 988 product production, 500 project, 500, 682, 699 quality, 222, 254, 266, 268–271, 273, 500, 597, 707, 1056, 1057, 1124 real-time, 15, 86, 91, 130, 161, 163, 170, 177, 216, 221, 254, 261, 264, 293, 310, 313, 415, 498, 526, 561–564, 568, 573, 603, 644, 647, 665, 716, 722, 738, 743, 749, 765, 766, 826, 842, 917, 918, 920, 947, 949, 989, 992, 1115, 1125, 1126, 1203, 1204 sharing, 187–188, 258, 271, 272, 423, 515, 795, 796, 840, 916, 1036, 1038 sources, 114, 161, 163, 177, 178, 231, 254, 261, 263–264, 266, 269, 270, 275, 346, 354, 519, 664, 710, 711, 716, 717, 719, 720, 768, 769, 914, 915, 920, 941, 976, 1026, 1036, 1050, 1055, 1056, 1176 static, 704, 710, 915, 949, 1054, 1095 synthetic, 43, 44, 51, 213, 255, 409 types, 143, 161, 259, 263, 483, 699 urban, 913–919, 935 Database collections, 1086, 1087 object oriented, 425 Decision-making, 78, 82, 88, 140, 148, 164, 168, 212, 236, 238, 248–249, 264, 275, 280, 281, 283, 284, 289, 304, 401, 404, 407, 498, 524, 526, 562, 580, 597, 604, 610, 612, 648–650, 654, 655, 707, 721, 723, 724, 729, 753, 755, 779, 800, 801, 845, 903, 904, 907–909, 932–934, 939, 941–943, 946, 956, 976, 992, 993, 1011, 1012, 1037, 1055, 1057, 1062, 1117, 1122, 1160, 1189, 1195, 1199
1223 Decision support system (DSS), 15, 318, 637, 639, 909, 934, 935, 946, 1039, 1096 Decision trees, 304, 320, 321, 568, 638, 1011 Deep learning (DL), 211, 604, 621, 1011, 1033, 1047, 1168, 1198 Dependability, 365–391, 611 Deployment architectures, 127, 129, 374–375 environments, 374–375, 1153 Design, 5, 22, 65, 106, 127, 158, 182, 207, 229, 255, 280, 300, 338, 366, 399, 448, 499, 542, 562, 578, 602, 662, 678, 704, 738, 774, 800, 851, 920, 972, 1025, 1070, 1104, 1116, 1139, 1173, 1202 Detectable fault, 583–585, 593, 594 Development agile, 171, 177 cyclic, 171, 251, 523, 604, 652, 781, 782, 785, 1170, 1182, 1183 lean, 155, 170–171 product, 87–89, 92, 102, 127, 167, 201, 228, 232, 234, 238, 239, 251, 523, 600, 602, 603, 605, 607, 608, 610, 611, 614, 615, 617–631, 633, 644, 646, 647, 652, 654, 695, 782, 794, 1008 waterfall, 171 Diagnostics, 55, 588–591, 603, 607, 625, 627, 628, 638, 640, 651, 722, 747, 764, 990–994, 1011, 1026, 1036, 1194 Differences across manufacturing industries, 748–754 Digital clone, 586–591 cultural heritage, 1083–1085, 1105–1108 design, 111, 791 divide, 562–564, 939 progress, 896–898 Digital-age education, 888–890 Digitalization, 91, 93, 133, 229, 231, 233–236, 238, 240, 243–249, 251, 778, 782, 785, 786, 789, 794, 796, 959, 960, 1039, 1086, 1087, 1089, 1098, 1106, 1116, 1151 Digital Twin(s) 5G, 126, 199, 433–445, 920 adoption, 16, 17, 24, 25, 27, 34, 44–53, 58, 65–79, 200, 218, 232, 254, 260, 263, 268, 274, 323, 330, 332, 372, 373, 383, 517, 601, 607, 609, 610, 647, 648, 653, 704, 796, 840, 902, 960, 1016, 1120, 1140, 1141, 1151, 1164, 1203, 1210–1213
1224 Digital Twin(s) (cont.) advantages, 9, 12, 227–251, 308, 314, 341, 609, 630, 673, 805, 819–825, 834, 845, 922–923, 939, 940, 1115 architecture, 24, 25, 125–149, 211, 214, 231, 256, 332, 479, 501, 503–506, 519, 604, 634, 654, 1144, 1176, 1212 autonomic, 302–304, 315, 322, 332 barriers to implementation, 831–834 biometric information based, 1049 business model implications, 834–835 cities, 346–348, 904–909, 911, 912, 914, 916, 919–923, 925–965, 1141, 1145, 1151, 1153, 1210 classes, 302, 355, 770 cognitive, 206, 212, 216, 1142, 1156–1158 common aspects, 1208–1209 complex projects, 11, 677–700 composable, 17, 153–179 composite, 18, 131, 162, 266, 301, 302, 402, 634 consortium (DTC), 154, 179, 208, 515–517, 547, 551–554 continuity, 134, 136–137 definition, 3, 5, 9, 132, 210, 216, 229, 233, 304, 307, 312, 316, 321, 329, 367, 370, 384, 388–390, 499, 514, 579, 605, 681, 737, 740, 742, 757, 976–980, 1083, 1204, 1209, 1211 deployment, 66, 85, 407–409, 427, 704, 718, 750, 766, 770, 809, 972, 1017, 1149, 1153, 1162, 1211–1212 design, 34, 36–39, 136, 163, 220, 256, 264, 300, 304, 308, 316–322, 331, 356–359, 406, 434, 628–629, 787, 788, 804–805, 824, 988–989, 1001–1008, 1014, 1193, 1210, 1211 distributed, 34, 143–145, 270–271, 409, 413–418, 421, 1144, 1147, 1189, 1191, 1210 ecosystem, 23, 24, 26, 28, 30, 46, 52–54, 56, 60, 126, 127, 129, 136, 137, 254, 258–265, 272–274, 324, 367, 380, 382, 387, 514, 515, 578, 728, 730, 926, 946, 1171, 1185, 1198 enabling technologies, 208, 800, 825–831, 840, 1008–1011, 1017, 1203 of everything, 56, 1148–1154 functionality, 135, 143, 207, 208, 213, 220, 221, 331, 401, 716, 1144, 1147 future, 22–34, 36, 53, 57, 58, 60, 97–118, 140, 148, 232, 233, 243, 273–274, 580, 663, 721–724, 726–729, 972, 978,
Index 1012–1016, 1081–1109, 1137–1164, 1201–1213 graph-based, 337–360 healthcare, 103, 508–509, 1023–1042, 1152 history, 127–129, 1081–1109, 1212 human body, 1187 hybrid, 279–295, 596, 597, 1160 impact, 58, 172, 218, 602, 624, 648, 652, 738, 834, 1167–1199, 1210–1211 implementation, 9, 11, 14, 38, 42, 132, 312, 314, 331, 356–359, 531–555, 559, 564–565, 567, 568, 607, 612, 634, 648, 653, 655, 738, 832, 838, 840, 842, 845, 1192, 1193, 1207, 1210–1212 instance, 5, 7, 9, 105, 106, 111, 118, 127, 128, 132, 136, 137, 141, 147, 236, 389, 390, 745, 757, 836, 840, 1140, 1210 intellectual property protection, 1169, 1197 intelligent, 30, 57, 115, 116, 118, 206, 212, 823, 1163 lifecycle, 13, 16, 106–108, 111, 118, 131–133, 136, 141, 172, 232, 256, 329, 399, 403, 408, 768, 1017, 1211 lifescience, 1023–1042 maintenance, 250–251, 594, 637, 661–673, 704, 723, 737, 738, 824, 990–998, 1013, 1017, 1125 manufacturing, 3, 22, 65, 97, 126, 154, 182, 206, 229, 253, 281, 300, 337, 366, 396, 434, 479, 498, 560, 578, 600, 661, 678, 704, 736, 800, 904, 972, 1024, 1046, 1083, 1114, 1137, 1168, 1202 maturity model, 651, 652, 926–932, 940–965 methodology, 30, 154, 166, 177, 444, 570, 600–613, 616, 619, 620, 629–633, 635, 637, 644–654, 1207 Microsoft Description Language (DTDL), 516 multi-actor, 140–145, 148, 338, 342, 346, 360 multi-level, 338–339, 342, 349, 355, 514 multi-sided, 40–41, 338–340 near real-time, 259, 373–374, 630, 743, 961 nuclear energy systems, 972, 976–980 oil & gas operations, 708–721, 724 oil & gas projects, 703–730, 1173, 1176–1178 operations, 9–11, 17, 23, 28, 47–49, 51, 58, 86–87, 135, 139, 154, 163, 301, 302, 323–331, 403–404, 628–630, 634,
Index 649, 653, 654, 661–673, 703–730, 753, 757, 799–845, 992, 1009, 1012, 1150, 1181, 1203 organizational structure, 601, 645–648, 652, 1210–1212 overhaul, 637, 661–673 passive, 8, 15, 302–305, 322, 331, 1140 personal (PDT), 1045–1076, 1152, 1157, 1160, 1161 platform, 9, 11, 13, 14, 115, 163–165, 167, 213–219, 321, 323, 327, 329–331, 337–360, 413, 552, 553, 614, 615, 618, 685, 935, 956, 989, 1147, 1163, 1187, 1190, 1199 predictive, 46, 262, 264, 302–304, 306, 322, 404, 651, 723, 724, 730, 747, 801 process, 171–178, 425, 600, 604, 607, 613, 632, 634, 646, 649, 653, 655 prototype, 104–105, 107, 108, 111, 114, 118, 127–129, 131–133, 135–137, 141 reactive, 238, 265, 302–305, 315, 322, 843, 1123, 1129 real-time, 15, 147, 163, 164, 175, 177, 221, 373–374, 415, 417, 498, 502, 563, 573, 629, 723, 836, 976, 1011, 1210 repair, 594, 630, 661–673, 699 requirements, 11, 15, 99, 129, 176, 241, 305, 313, 324, 415, 513, 517, 604, 1013, 1083 simple models, 613 structuring, 136, 266, 356–359 supply chain, 24, 26–28, 43, 45, 59, 568, 737, 745–746, 753–756, 761 technical complexity, 508, 1209–1210 technique driven by context, 1203–1204 technology, 24–26, 28, 57, 84, 86–92, 126, 233, 254, 258, 275, 338, 339, 366–368, 371–375, 381, 385, 387–389, 391, 498, 499, 501, 509, 512, 516, 517, 523–525, 527, 554, 565–567, 569, 578, 631, 767, 802, 816, 834, 955, 1168–1171, 1178, 1181–1188, 1194–1199 urban, 919–932, 940, 945, 948, 949, 953, 954, 956, 961, 963–965 value, 23, 24, 26, 34, 36–38, 45, 58, 60, 79, 87, 110, 111, 118, 155–165, 207, 229–233, 310, 349–350, 386, 396, 397, 401, 406, 580–581, 601, 609, 610, 736, 739, 763–766, 769, 802, 817, 818, 843, 845, 1083–1084, 1206–1207, 1209, 1212 versions, 37, 39, 117, 118, 264, 390, 742, 757
1225 visualization, 133, 842, 1145 world, 398, 519, 577–597 Digitization, 31, 59, 71, 343, 404, 679, 851, 856–860, 870–880, 887, 998, 1150 Discrete manufacturing, 567, 570, 737, 740, 741, 750–754, 890 Design projects, 679, 684, 686, 688, 690, 694–698, 700, 891, 948 Disney Concert Hall, 872–875 Display desktop, 463–465 Handheld (HHD), 463–464 Head Mounted (HMD), 463, 464 monitor based, 464–465 screen-based video see-through, 465 Disruption, 13, 80, 83, 327, 543, 570, 572, 609, 706, 736, 737, 743, 745, 746, 760, 761, 764, 765, 769, 770, 778, 781–783, 836, 842, 866, 884, 940, 957, 975 Distributed control, 437–438 Diversity, 43, 44, 435, 826, 1031, 1055, 1163, 1207–1208, 1210 Dynamics, 23, 42, 43, 51, 58, 116, 135–137, 145, 146, 187, 196, 198, 209, 231, 236, 241, 284, 294, 302, 303, 306, 307, 312, 313, 369–373, 385, 386, 396, 399, 402, 412, 415, 416, 427, 435, 436, 439, 442, 470, 474, 501, 561, 564, 568, 587, 607, 609, 613, 614, 619, 620, 622, 624, 631, 632, 647–649, 668–670, 672, 678, 679, 682, 683, 693, 695, 704, 710, 729, 756, 774, 781, 784, 787, 789, 791, 801, 813, 815, 825, 829, 842, 890, 920, 936, 941, 942, 949, 950, 959, 979, 982, 993, 994, 1024, 1026, 1031, 1034, 1049, 1095, 1105, 1118, 1119 E Eagan Report, 882 Eclipse Ditto, 137, 138, 215, 217, 518–519 Economics, 10, 23, 24, 39, 57, 58, 60, 70, 71, 85, 86, 103, 108–110, 114, 161, 173, 184, 221, 270, 271, 273, 360, 405, 532–534, 539–547, 554, 561, 562, 571, 602, 603, 606, 611, 649, 652, 654, 775, 795, 806, 830, 863, 908, 918, 922, 923, 934, 936–938, 940, 945, 950, 972, 981, 982, 990, 1026, 1027, 1038, 1098, 1149, 1175, 1195, 1205 open source, 532, 534
1226 Economy digital, 272, 851, 852, 856, 875, 1154 Ecosystem, 17, 23, 24, 26–30, 32, 34, 37, 38, 41, 42, 44–47, 52–54, 56, 60, 83, 93, 126, 127, 129, 136, 137, 146, 183, 187, 188, 195, 196, 200, 201, 206, 207, 216, 219, 220, 240, 243, 249, 254, 255, 258–265, 267, 272–274, 323, 324, 340, 345–347, 366, 367, 369, 375, 379–382, 387–389, 396–398, 403, 405, 499, 514, 515, 517, 521, 526, 528, 540, 542, 559, 578, 592, 651, 700, 728, 730, 787, 790, 794, 796, 908, 920, 926, 933–935, 940, 945–947, 949, 956, 959, 999, 1039–1041, 1074–1076, 1086, 1108, 1171 Edge computing, 190, 294, 314, 406, 408, 417–427, 507, 512, 604, 743, 1009, 1069, 1203 Education, 5, 17, 33, 498, 565, 648, 822, 888–890, 902, 904, 919, 933–935, 939, 959, 1035, 1037, 1075, 1076, 1085, 1105, 1106, 1113–1130, 1147, 1148, 1151, 1156–1159, 1205, 1206, 1213 Efficiency, 10, 23, 30, 32, 35, 48, 51, 55, 65, 68, 73, 81, 83, 86–87, 91, 109, 157, 187, 188, 194, 208, 220, 221, 230, 234, 238, 241, 265, 287, 291, 339–342, 405, 409, 417, 427, 436, 443, 453, 457, 461, 468, 498, 515, 523–525, 571, 622, 630, 652, 667, 679, 708, 711, 720, 736, 740, 742, 745, 751, 755, 760–762, 764–766, 768, 782, 794, 795, 800, 801, 809, 826, 830, 836, 838–840, 910, 912, 936, 945, 950, 954, 957, 960, 976, 983, 987, 1000, 1120, 1152, 1153, 1203, 1204, 1206, 1208 manufacturing, 736, 764, 766 Electric vehicles (EVs), 65, 66, 84, 88, 89, 144, 244, 516, 563, 653, 693–695, 774–779 Electrification, 406, 774–776, 783, 784, 788 Electronic Data Systems (EDS), 99 Electronics, 32, 66, 91–93, 113, 206, 228, 230, 245–249, 367, 375, 385, 391, 412, 413, 464, 499–501, 507, 508, 520, 523, 607, 619, 628, 636, 686, 738, 742, 750, 777–779, 781, 783, 784, 786, 790–793, 808, 810, 826, 829, 833, 1009, 1036, 1055, 1176, 1181, 1189, 1205 Emerging ecosystems, 1039–1041 Emulation, 402–403, 629, 630
Index Encryption, 210, 380, 442 End-to-end system integration, 769 Energy efficiency, 86, 341, 342, 436, 443, 954 Engineering economics open source, 533–534, 539–544 Entanglement data, 322 Enterprise processes, 599–655 resource planning (ERP), 158, 161, 198, 206, 218, 600, 610, 618, 739–741, 785, 795, 800, 825–827 value proposition (EVP), 652–653 Entities, 8, 11, 23, 25, 28, 30, 32, 38, 42, 43, 49, 52, 55–57, 59, 79, 114, 128, 143, 198, 208, 255, 260, 264, 314, 325, 339, 347, 350–352, 354, 357, 367–368, 371, 375, 386, 387, 396, 399, 404, 423, 441, 443, 444, 454, 503, 505, 514, 517, 526, 527, 549, 604, 651, 711, 712, 826, 896, 906, 928, 1031, 1083, 1140, 1142, 1144, 1145, 1148, 1156, 1159, 1161, 1163, 1168, 1184, 1188, 1189, 1192, 1193, 1197, 1199 Epilepsy research, 1029 Ethics, 103, 262, 540, 1026, 1027, 1030, 1075, 1147, 1161, 1162, 1167–1199 Event processing, 147 Evidence, 113, 267, 273, 404, 475, 578, 615, 693, 923, 988, 1024, 1034, 1038, 1088, 1164, 1181 Evolution, 6, 7, 9, 10, 12, 16, 17, 23–25, 32, 34, 36, 41, 55, 107, 112–116, 126, 128–146, 148, 182, 185, 205, 206, 253–255, 264, 265, 270, 271, 273, 281, 287, 292, 295, 303, 304, 306, 313, 328, 331, 344, 345, 378, 406, 410, 412, 413, 421, 423, 427, 437, 532, 545, 546, 572, 603, 627, 653, 672, 681, 737, 739, 782, 796, 810, 826, 831, 852, 858, 891, 903, 952, 985, 990, 1016, 1024, 1031, 1032, 1046, 1096, 1108, 1137–1164, 1168, 1171, 1212, 1213 projects, 810 Evolving ecosystems, 27, 32, 44, 56, 908 Exchange data, 47, 130, 240, 274, 417, 423, 442, 445, 506, 521, 603, 664, 949, 1117, 1195 Execution, 16, 65, 78, 127, 133, 137, 143, 145, 148, 198, 240, 271, 301, 305, 306, 308, 310–312, 314, 315, 326, 373, 397, 401–403, 406, 415, 434, 436, 467, 472,
Index 521, 564, 571, 610, 627, 629, 638, 679, 700, 743, 749, 751, 759, 763, 799, 809, 811, 823, 826, 827, 855, 937, 963, 998, 1015, 1090, 1122, 1146 Experience 3D, 1026, 1027 customer, 23, 154, 600, 605, 638, 652 end-user, 704, 719 immersive, 714, 1094 Experimental PDT platforms, 1057–1074 Experimentation, 84, 170, 338, 341–344, 348, 532, 533, 940, 1126, 1203 Expert knowledge, 217, 454, 466, 468 F Factory smart, 183, 190, 196, 199, 206, 255, 338, 342, 499, 509, 510, 512, 513, 523 Failure component, 106, 441, 579, 582, 585, 588, 630, 994 prediction, 578, 579, 581, 582, 584–586, 588, 593, 620, 824 threshold, 436, 582–584, 589 Fatigue, 239, 385, 387, 578, 586–588, 590, 591, 593, 594, 619, 632, 636 Feature fusion, 666, 1057, 1063, 1064, 1069, 1072 Federated learning, 1011 Fidelity, 22, 36–38, 49, 116, 117, 168, 209, 229, 232, 238, 256, 282, 286, 369, 370, 384, 385, 389–391, 448, 502, 587, 588, 592, 608, 621, 630, 719, 738, 757, 791, 793, 800, 825, 826, 828, 830, 950, 977, 978, 995, 1010, 1011, 1017, 1029, 1041 Finite element method (FEM), 133, 280, 281, 1118 Fish sculpture, 872, 873 FIWARE, 128, 349, 354, 517–518, 526, 1143, 1163 Fiware4Water, 526 Floating hotel, 855, 862, 863 Framework regulatory, 573, 1147, 1188–1189, 1195–1196 Frisk, P., 156, 157 Full autonomy, 778–781 Fusion data, 208, 211, 258, 270, 310, 313, 651, 666, 917, 1047, 1056, 1058, 1060, 1062, 1069 sensor, 42, 666, 791 Fusion reactor, 871, 1001–1008
1227 G GAIA-X, 183, 185, 186, 200, 274, 521, 1143, 1163 Gehry, F., 872–875, 882, 884, 885, 889 Gemini Principles, 272–273, 957 General Electric (GE), 98, 101, 102, 167, 215, 217, 498, 523–524, 563, 600, 630–631, 652, 842 Governance, 34, 60, 74–76, 78, 199, 254, 262, 264, 270, 358, 401, 442–445, 537, 538, 545, 547, 551, 572, 757, 768, 837, 838, 842, 912, 920, 933–936, 945, 946, 949, 1037, 1192, 1193 Graphics Processing Unit (GPU), 600, 602, 620–623, 880 Graphs database, 350, 357–358, 360 knowledge, 143, 211, 260, 342, 349, 350, 354–356, 710–713 H Health digital, 1024, 1034 Healthcare Innovation, 1026, 1037, 1039 Health state awareness, 578, 583 Healthy life, 579, 919, 1035–1039 Heilmeier, G., 4 Helicopter, 578, 580, 585–586, 592, 595, 600, 618, 632, 634, 640, 646 History, 7, 15, 16, 22, 23, 25, 28, 32, 46, 88, 101, 108, 127–130, 137, 143, 144, 147, 155, 161, 183, 196, 206, 212, 213, 218, 240, 258, 259, 264, 267, 271, 304, 309, 310, 313, 318, 321, 338, 343, 346, 357, 360, 367, 399, 402, 474, 514, 518, 519, 543, 554, 571, 579, 581, 585, 592, 595, 596, 606, 608, 611, 618, 619, 634–636, 640–642, 644, 645, 647, 716, 722, 738, 748–750, 753, 762, 763, 777, 833, 836, 842, 872, 879, 883, 884, 903, 904, 913, 915, 937, 944, 945, 948, 954, 955, 964, 973, 977, 981, 983, 985, 991, 992, 996, 1013, 1017, 1024, 1025, 1034, 1081–1109, 1142, 1157, 1187, 1205, 1206, 1208, 1212 Hosting, 132, 219, 358, 375, 409, 417, 419, 421, 425, 427, 539, 716–718, 991, 1152, 1161 Human augmentation, 1159 factors, 396, 397, 707, 730, 880, 995, 1211–1212
1228 Hybrid framework, 283, 285, 286, 594–597 Hybrid Twin (HT), 279–295 Hyperconnected, 148, 562, 565, 571, 912 I Implementation, 9, 11, 14, 17, 18, 24, 25, 32, 38, 42, 57, 58, 130–133, 154, 158, 186, 209, 210, 221, 256, 259, 262, 265, 270, 283, 294–295, 304, 312, 314, 316, 323, 329–331, 338, 356–360, 372, 377, 381, 389, 402, 420, 427, 448, 452, 457, 461, 467, 469, 478, 509, 514, 517, 531–554, 564–568, 572, 607, 612, 634, 635, 648, 653, 655, 664, 679, 738, 748, 766, 789, 802, 819, 827, 828, 831–834, 837–840, 842, 844, 845, 857, 872, 880, 897, 936, 938, 946, 950, 951, 954, 979, 998, 1123, 1126, 1129, 1162, 1182, 1192, 1193, 1207, 1208, 1210–1213 Industrial Digital Twin Association (IDTA), 185, 515, 520 Industrial Internet of Things (IIoT), 184, 208–211, 213–216, 218, 228, 311, 603, 650, 652, 743 Industry 4.0, 17, 55, 58, 68, 78, 79, 126, 141, 185, 186, 188, 196, 198, 217, 254–256, 274, 294, 342, 447, 499, 509, 510, 515, 520, 549, 652, 801, 825, 920, 1115, 1142, 1159, 1163, 1203 smart, 342–344 Inference engine, 402, 565 Information conveyance methodology, 481, 484 flow, 211, 261, 401, 600, 608–610, 637, 639, 640, 644–646, 680, 830, 1053 instructions, 28, 451–453, 464, 470, 472, 477, 481, 482, 484, 608, 839, 1051 representation, 453, 456, 462, 472, 480–481 technology, 11, 99, 102, 109, 116, 184, 201, 233, 349, 404, 549, 740, 800, 826, 845, 1075, 1076, 1175 Infrastructure, 10–14, 26–28, 31–33, 40, 49, 60, 69, 70, 76, 93, 103, 114, 127, 137, 144, 148, 173, 182–187, 191, 194–196, 199–201, 213, 217, 219, 229, 250, 261, 268, 272–274, 305, 306, 314, 326, 327, 337–341, 343–347, 353, 354, 357, 358,
Index 360, 370, 377, 382, 395–427, 436, 517, 519, 521, 536, 545, 549, 565–567, 570, 581, 601, 602, 605, 621, 638, 644, 652, 653, 665, 679, 693, 709, 716–718, 739, 752, 753, 775, 776, 783, 794–796, 826, 830, 840, 842, 849, 850, 866, 888, 909, 922, 931, 932, 934, 935, 940, 941, 944, 949, 950, 953, 956–959, 964, 977, 1015, 1037, 1141, 1151–1153, 1162, 1190, 1202, 1207, 1210, 1211 Innovation, 4, 14, 21–23, 25, 29, 33, 35, 36, 45, 46, 52, 54–57, 60, 79, 84, 157, 167, 169, 170, 185, 212, 218, 228, 231, 234, 236, 238, 240, 245, 251, 267, 273, 280, 294, 412, 413, 423–425, 468, 507, 515, 522, 532, 540, 542–545, 547, 631, 635, 678, 736, 738, 752, 754, 756, 763, 767, 774, 776, 777, 785, 800, 812, 834, 932, 937, 938, 942, 948, 951, 955, 959, 960, 1003, 1004, 1008, 1026, 1030, 1033, 1034, 1036, 1037, 1039, 1041, 1138, 1145, 1163, 1164, 1168, 1170, 1171, 1182, 1183, 1187, 1195, 1198, 1204, 1206 Inspection, 32, 82, 87, 92, 93, 168, 387, 456, 457, 500, 524, 553, 600, 601, 608, 630, 631, 634–636, 646, 652, 662, 663, 671, 704, 706–708, 712–714, 716, 719, 724–726, 728, 788, 820, 824, 865, 877, 893, 959, 977, 981, 986–989, 1013, 1015, 1124, 1173 digital, 646, 1124 Instruction, 28, 190, 245, 447–462, 464–466, 468–486, 608, 640, 653, 721, 727, 740, 785, 802, 806, 809, 811, 820, 822, 824, 827, 831, 836, 838–841, 877, 895, 896, 988, 1051 Integration, 5, 7, 28, 42, 87, 113, 128, 159–161, 163, 164, 175, 186, 188, 190, 196, 198, 201, 205–222, 232, 234, 238, 241, 254, 258, 270, 275, 300, 304, 305, 308, 341, 369, 389, 400, 408, 409, 418–420, 427, 466, 468, 500, 502, 510, 516, 520–522, 527, 533–535, 548, 549, 560, 570, 583, 586–591, 600, 607, 614, 615, 624, 628–631, 633, 640, 641, 643, 648, 651, 653, 665, 666, 669, 679, 716, 717, 719–722, 755, 769, 777, 781, 783, 785–787, 803–807, 825–827, 842, 845, 850, 854, 855, 861, 863, 864, 876, 887, 889, 896, 912, 948, 951–953, 977, 979, 982, 985, 993, 1010, 1011, 1036, 1056,
Index 1075, 1076, 1084, 1096, 1115–1119, 1123, 1124, 1129, 1171, 1176, 1209 Integrity, 211, 219, 241, 242, 273, 366, 375, 376, 383, 387, 504, 585, 607, 622, 655, 682, 699, 710, 713, 716, 721, 723, 730, 784, 787, 805, 843, 997, 1005, 1141, 1147 Intellectual property (IP), 240, 534, 540, 546, 647, 755, 762–763, 832, 1168–1170, 1197 Intelligence data, 1035–1036, 1038 Interface, 12, 28, 42, 56, 59, 126, 130, 131, 133, 138–140, 143, 146, 154, 158, 160, 196, 198, 210, 212, 216–220, 238, 250, 300, 351, 357, 378, 388, 389, 398, 400–401, 403, 413, 436, 437, 449, 451, 454, 461–463, 465–467, 476, 478, 479, 482–485, 504–506, 511, 516, 517, 519, 526, 548, 550, 551, 566, 606, 607, 613, 614, 628, 651, 654, 664, 665, 668–670, 672, 680, 682, 709, 716, 718, 801, 948, 949, 989, 994, 995, 999, 1047, 1053, 1059, 1065, 1066, 1072, 1073, 1108, 1125–1128, 1153, 1154, 1169, 1173, 1178, 1181, 1190, 1209, 1210 International Experimental Reactor (ITER), 870–872, 1001, 1002 International regulations, 186, 608, 832, 1162, 1168, 1187–1194, 1198 Internet of Things (IoT), 7, 26, 78, 90, 132, 157, 184, 205, 228, 254, 294, 321, 337, 366, 454, 498, 549, 603, 769, 794, 830, 913, 993, 1083, 1119, 1140, 1171, 1203 Interoperability, 9, 11, 14, 18, 24, 30, 139, 185, 196, 199, 210, 211, 216, 220, 221, 254, 256, 260, 263, 272–274, 338, 349, 387–389, 421, 423, 499, 500, 502, 507, 508, 510, 512, 513, 517, 521, 522, 526, 527, 548, 550, 604, 610, 649, 650, 653–655, 826, 827, 842, 946, 947, 949, 1012, 1108, 1109, 1139, 1163 Investment, 14, 18, 34, 67, 69–72, 78, 79, 83, 93, 115, 170, 190, 194, 232–235, 247, 248, 294, 327, 369, 533, 534, 541, 568, 569, 572, 580, 581, 606, 635, 654, 760, 767, 769, 777, 819, 830, 833, 836–838, 840, 842, 844, 857, 858, 872, 874, 887, 888, 892, 904, 906, 936, 950, 973, 1010, 1016, 1017, 1115, 1123, 1210
1229 K Key performance indicator (KPI), 170, 190, 198, 217, 343, 468 L Lake data, 42, 201, 217, 220, 768, 795 Latency variable, 441 Laws, 4, 5, 27, 59, 98, 99, 113, 116, 118, 200, 245, 284, 286, 300, 328, 532, 544, 560, 590, 620, 629, 745, 1162, 1163, 1168, 1169, 1172–1174, 1178, 1185, 1187, 1189–1191, 1195–1199 Layer data, 143, 158, 219, 261, 268, 445 hidden, 1071 input, 452, 1071 output, 1071 Leadership, 18, 74–76, 78, 249, 562–564, 601, 606, 618, 652, 653, 655, 776, 780, 782, 833, 834, 840, 917, 934, 935 Learning factories (LFs), 1114–1116, 1129, 1130 Legal aspects, 1172, 1185 Liability, 184, 539–541, 545–547, 937, 973, 1162, 1170, 1185, 1195, 1197 License open source, 532, 533, 535–538, 542, 546 Lidar, 34, 49, 190, 709–710, 716, 719–721, 724–726, 729, 730, 823, 827–829, 963, 964, 1177 Life component, 582, 586, 620, 636, 978, 981 Lifecycle, 7, 71, 99, 127, 163, 193, 211, 229, 254, 300, 385, 498, 562, 578, 600, 661, 680, 715, 737, 781, 800, 858, 972, 1096, 1117, 1150, 1168, 1203 Lifecycle phases, 13, 106, 230, 231, 267, 600, 605, 607, 617, 646, 654, 661, 782, 1012, 1014, 1129, 1210 Life sciences, 7, 66–68, 90–91, 338, 517, 1037, 1053, 1206 Link, E., 25 Living Brain, 1028–1030 cell, 1031–1032 Heart Project (LHP), 1025–1028, 1032, 1040 Microbiota, 1032–1033 Load balancing, 215, 219, 434, 436, 443 Location Based Services (LBS), 1203
1230 M Machine Learning (ML), 7, 27, 90, 108, 130, 154, 176, 183, 211, 213, 215, 217, 218, 228, 259, 268, 283, 293, 294, 303, 304, 306, 308–311, 314, 326–328, 402, 404, 452–453, 455, 498, 516, 572, 594, 595, 601, 604, 606, 628, 638, 649, 650, 652, 724–726, 730, 742, 743, 747, 754, 825, 826, 838, 920, 934, 935, 945, 959, 977, 978, 989, 992, 998, 1010, 1011, 1034, 1041, 1047, 1096, 1124, 1125, 1156, 1173, 1175, 1194–1196, 1203 Machines smart, 801 Maintainability, 366, 383, 628, 804, 806, 858, 986, 987 Manageability, 307, 383, 426 Management asset, 87, 194, 201, 628, 759, 976 Context (CM), 146, 453 data, 138, 145, 201, 253–275, 397, 407, 409, 415, 419, 516, 522, 526, 628, 662, 785, 978, 1015, 1141–1143, 1163, 1211, 1213 knowledge (KM), 1139, 1147, 1156–1158 network management, 434–436, 441, 526, 527, 650, 1205 oil & gas projects, 738, 743, 758, 865 risk, 502, 601, 602, 652, 800, 845, 883, 897, 992, 1192, 1193 system health (SHM), 600, 608, 612, 614, 623, 624, 628, 644, 649, 654 Manual assembly, 455, 457, 459, 460, 469–486 Manufacturing, 23, 68, 99, 127, 166, 182, 208, 228, 254, 282, 300, 338, 372, 399, 447, 498, 552, 564, 586, 600, 673, 679, 736, 776, 800, 850, 920, 972, 1024, 1074, 1114, 1138, 1203 digital, 141, 738–739, 787, 895 discrete, 567, 570, 737, 740, 741, 750–753, 758, 760, 890 modeling and simulation, 87, 826–827, 840, 978, 981 process, 32, 52, 103, 127, 217, 228, 229, 238–240, 244, 287–289, 294, 403, 447, 501, 502, 505, 515, 586, 587, 607, 609, 635, 673, 740–742, 745, 764–766, 786, 788, 790, 820, 824, 896, 989, 1115–1118, 1120, 1129, 1207 Marketplace, 22, 31, 163, 218, 522, 606, 947, 1175, 1191
Massively Multiplayer Games (MMGs), 44, 52
Material Resource Planning (MRP), 739, 740, 742
Materials, 5, 16, 22, 23, 41, 45, 53, 80, 84, 86–93, 109, 111, 116, 133, 171, 174, 195, 212, 230, 233, 240, 242, 244, 250, 251, 255, 279, 284–287, 292, 295, 308, 316, 320, 321, 342, 343, 369, 382, 388, 399, 403, 457, 458, 500, 502, 522, 561, 568, 570, 571, 578, 581–584, 587–589, 591, 593, 594, 596, 608, 611, 618–620, 634, 635, 662, 671, 708, 721, 728, 738, 739, 741, 743, 745, 751, 752, 754, 755, 757, 760, 764, 770, 775, 784, 785, 787, 788, 790, 802, 803, 805, 806, 808, 811, 822, 823, 825, 836, 858, 861, 889, 896, 941, 958, 973, 975, 981, 986, 989, 990, 994–997, 1001, 1003–1005, 1008, 1010, 1012–1015, 1017, 1053, 1090, 1094, 1098, 1099, 1103–1105, 1154, 1203, 1212
  data, 285, 338, 399, 594, 728, 743, 751
Material stresses, 581, 583, 584, 587
Maturity, 13, 66–79, 93, 112, 173, 231, 232, 236, 267, 284, 418, 463, 468, 537, 538, 547, 597, 651, 652, 721, 722, 747, 789, 825, 826, 838, 909, 926–932, 940–965, 1055
Measurability, 16
Mechanical systems, 206, 242, 578, 580–584, 590, 777, 778, 783, 1052, 1053
Medical practice, 1025, 1027, 1039, 1042
Metadata, 138, 143, 145, 210, 216, 217, 260, 349, 518, 1086, 1181
Meta-twins, 1156, 1161
Metaverse, 7, 21, 44, 54–60, 126, 148, 564, 571, 1083, 1101, 1149, 1164, 1171, 1188, 1189, 1203, 1205
Michelangelo’s David, 1091–1094
Microstructure stresses, 582, 583
Mindset, 60, 396, 397, 1037
Minimum viable product (MVP), 167–170, 947
Mining industry, 749–750
Mirrors
  digital, 23, 367, 1152
Mitigation strategy
  regulatory, 1194–1197
Mixed Reality (MR), 148, 726, 727, 730, 829, 1085, 1154, 1168, 1173, 1198
Mobility, 88–90, 146, 245, 274, 280, 339, 435, 463, 465, 513, 566, 637, 638, 779, 781, 842, 915–919, 925, 926, 932, 946, 955, 963, 964, 1030, 1037, 1065, 1147, 1150
Model(s)
  2D, 339
  3D, 87, 88, 113, 128, 130, 131, 133, 136, 339, 458, 459, 472, 616–618, 634, 704, 707, 708, 710, 711, 719–721, 727, 746, 749, 751–754, 760, 768, 783, 811, 820, 822, 833, 834, 858–860, 862, 871, 872, 876, 877, 879, 880, 892, 893, 895–897, 924, 940–942, 948, 964, 998, 1025, 1027, 1037, 1084, 1095, 1097, 1105, 1119
  digital, 22, 110, 111, 300, 396, 502, 620, 629, 665, 678, 680–690, 700, 761, 805, 831, 845, 911, 949, 950, 963, 1083, 1085, 1096, 1115, 1117, 1129, 1140, 1141, 1148, 1154, 1161
  disease, 1033–1034
  dynamic, 369–371, 386, 813, 815, 920
  graph, 338, 349, 350, 354, 360
  holistic, 434, 756, 1038
  logical, 240, 262
  multi-domain, 618–620, 628
  multi-functional causal, 600, 624, 625, 627, 628, 637, 650
  order reduction (MOR), 281–283, 288, 291
  physics-based, 161, 279, 281–283, 286, 287, 292, 578, 581, 584, 586, 591–596, 604, 606, 607, 612, 613, 625, 630–632, 649, 652, 654, 655, 812, 959, 978, 990, 1017
  population, 1033–1034
  reference, 141, 143, 158–160, 256, 503, 509, 510, 527, 1026, 1139
  static, 37, 369, 600
  visualization, 161, 812, 827, 840, 842
Model-based design (MBD), 607, 612–615, 782, 826, 836
Model-based systems engineering (MBSE), 237, 600, 604, 607, 612–615, 618, 653, 654, 1017
Modeling
  data, 259, 1055
  lifecycle, 129, 1118–1119
MongoDB, 42
Monitoring, 18, 29, 46, 51, 163, 190, 196–198, 201, 213, 215, 217, 248, 250, 251, 259, 265, 281, 293, 295, 301, 302, 306, 314, 329, 341, 370–372, 374, 384, 386, 391, 401, 408, 410, 419, 420, 422, 426, 427, 434, 436, 440–442, 502, 506–508, 513, 516, 524, 525, 578, 581, 582, 590, 596, 600, 605, 608, 628, 638, 640, 654, 662, 664, 668–670, 716, 722, 726, 757, 765, 794, 795, 824–826, 887, 896, 928, 934, 936, 945, 946, 958, 959, 973, 978, 981, 989–995, 997, 1009, 1010, 1034, 1035, 1052, 1054, 1062, 1065, 1076, 1083, 1086, 1090, 1096, 1138, 1150, 1152, 1160, 1173, 1190, 1193, 1202, 1205, 1206
Motion capture, 453, 1054, 1058, 1062
Multi-decision governance, 442–444
Multiscale, 292, 514, 578, 592–594, 620, 621, 1002, 1030, 1042, 1075
Multi-system fusion, 1055–1074
Multiverse, 55, 57, 572–573
Museums, 872, 1082, 1085–1098, 1103

N
NASA, 25, 101, 127, 498, 562, 567, 592, 625, 1205
National regulations, 608, 644, 1162
Negotiations, 182, 198, 444, 1186
Network
  5G, 126, 183, 186–200, 206, 294, 411, 412, 415, 417–419, 421–423, 433–445, 507, 920, 955, 1010, 1046, 1076, 1099, 1144, 1149, 1164, 1185
  6G, 126, 148, 194, 211, 419, 435, 445, 1046, 1074, 1076, 1144, 1149
  cellular, 211, 1063
  dynamic topology, 442
  multi-administrative domains, 442
  transport, 195, 353
  value, 26
  wireless, 148, 209, 421, 829, 831, 832, 842
  wireless body, 1047, 1052–1054
New product introduction (NPI)
  accelerated, 736, 763
  successful, 769
Non-Fungible Token (NFT)
  art, 44, 1099–1102
Non-light water reactor (LWR)
  Design, 988–989
Notre-Dame, 22, 23, 36
Nuclear power
  advanced reactors, 972, 976, 988, 995, 1009, 1014–1017
  aging management, 975–976, 990, 993
  commissioning, 976, 977, 980, 985–987, 1015
  condition monitoring, 981, 990–995
  decommissioning, 897, 973, 976, 977, 980, 981, 997–1000, 1014–1016
  design, 973, 976, 977, 980–989
  dismantling, 976, 981, 997–1000
  enabling technologies, 1008–1011, 1016, 1017
  facilities, 971–1017
  fuel, 972, 973, 975–979, 981, 984, 988, 990, 991, 995, 1001
  future, 972, 976, 982, 1012–1016
  licensing, 972, 973, 976, 978–990, 1001
  materials tracking, 981, 990, 996, 997
  plants, 572, 876, 877, 879, 880, 890, 893, 895, 899, 971–1017
  reactor core, 972, 977, 978, 980, 981, 990–991, 993–996, 1010, 1013
  safety and risk, 973–974
  training, 994–995

O
Object
  conceptual, 4
  logical, 308, 309, 311
  physical, 4, 23, 99, 209, 229, 259, 300, 339, 368, 434, 464, 498, 946, 1083, 1116, 1171, 1204
  real-world, 4, 129, 140, 206, 369–372, 384, 387, 390, 391, 451, 473, 1097
  virtual, 99, 113, 474, 829, 1149
Oil & gas, 67, 71, 87, 103, 168, 171, 662, 703–730, 738, 743, 748, 758, 760, 865, 1173–1178
Ontology, 254, 258, 260, 262, 265–268, 326, 355, 357, 454, 679, 700, 933–936, 941, 999, 1143, 1161
Open Manufacturing Platform (OMP), 515, 516
Operations, 5, 23, 66, 102, 135, 163, 194, 210, 229, 256, 285, 301, 346, 371, 399, 436, 451, 503, 552, 565, 580, 605, 662, 681, 706, 736, 779, 800, 855, 920, 972, 1053, 1118, 1140, 1181, 1203
Optimization, 7, 8, 10, 11, 79–93, 136, 140, 147, 161, 229, 234, 237–240, 246, 248–250, 281, 282, 323, 325, 329, 401, 402, 404, 405, 407, 434, 438, 444, 524, 562, 564, 619, 625, 666, 748, 754, 758–764, 766, 768, 770, 784, 786, 788, 791, 793–795, 820, 829, 930, 981, 982, 993, 1001, 1011, 1017, 1065, 1096, 1117, 1118, 1121–1123, 1147, 1203, 1204, 1206, 1207
Oracle database, 42
Orchestration, 161, 163, 217, 401, 403, 404, 413, 415, 419, 426–427, 549, 1205, 1208
Organization structures, 157, 263, 332, 401, 561, 601, 609, 645–648
Original Equipment Manufacturer (OEM), 88, 89, 294, 752–754, 777, 795

P
Partial differential equations (PDEs), 280, 281, 292
Patents, 508, 611, 1169, 1171–1177, 1181–1183, 1185, 1186, 1197, 1198
People
  centric, 909–913, 919, 939
Perception and sensing, 1049–1050
Persistence, 53–54, 286, 287, 678, 679, 698–700
Personalization, 68, 228, 1041, 1206
Perspective
  cultural, 309
  functional, 308, 503, 504
  physical, 9, 458
Phase
  conceptual, 113, 114, 606, 607, 611–617, 654
  customer effectiveness, 608, 637–643
  development, 168, 600, 607, 611, 613–615, 617–631, 633, 654, 1009
  environmentally suitable termination, 609, 644–645
  manufacturing, 633–637
  operation, 256, 327, 1163
  product improvement, 608, 638, 643–644
  quality management, 600, 608, 633–637
  qualification and certification, 600, 608, 615, 624, 631–633
  support, 608, 637–643
  testing, 5, 247, 600, 624, 1145
Photography, 704, 710, 1087–1088
Physical twin, 47, 101, 105, 110, 117, 118, 262, 265, 384, 386, 387, 389, 391, 563, 1091, 1140
Physics, 4, 27, 116, 161, 206, 279, 300, 403, 578, 604, 666, 741, 791, 812, 959, 972, 1033
Physics-based phenomena, 581
Pilots, 25, 32, 49, 65, 67, 78, 84, 241, 247, 294, 373, 586, 691, 788, 851, 858, 859, 880, 907, 912, 919, 953
Planning, 65, 132, 190, 206, 237, 331, 342, 366, 400, 435, 451, 498, 562, 595, 599, 679, 704, 739, 786, 800, 854, 917, 973, 1034, 1118, 1151
Plant optimization, 90–91, 760, 761
Platform(s)
  IT/OT, 548, 549
  offshore, 704–706, 711, 1173
  production, 704–706, 1173
Pleiades project, 998–1000
Practice
  open source, 531–554
Predictability, 138, 213, 308, 311, 437, 439, 1041, 1206
Prediction, 7, 17, 46, 55, 106, 115, 116, 118, 136, 138, 147, 211, 232, 262, 281, 283, 284, 286–293, 300, 301, 304–306, 310–312, 314, 317, 318, 408, 502, 580, 582, 588, 590–595, 597, 603, 619, 625, 637, 669, 722, 723, 759, 764, 765, 811, 821, 824, 883, 909, 921, 934, 946, 951, 952, 979, 981, 991, 993–995, 1042, 1047, 1049, 1058, 1060, 1069, 1194, 1204, 1205
  failure, 578, 579, 581, 582, 584–586, 588, 593, 620, 824
Predix, 215, 217, 523, 524
Privacy, 12, 13, 24, 183, 185, 211, 222, 273, 326, 357, 358, 424, 434, 461, 569, 571–572, 651, 917, 936–938, 941, 946, 1035, 1055, 1142, 1152, 1168, 1184, 1187–1199, 1210
Process
  Intelligent Automation (IPA), 47, 48
  manufacturing, 32, 52, 103, 127, 217, 228, 229, 238–240, 244, 287–289, 294, 403, 447, 501, 502, 505, 515, 586, 587, 607, 609, 635, 673, 740–742, 745, 764–766, 786, 788, 790, 820, 824, 896, 989, 1115–1118, 1120, 1129, 1207
  optimization, 81, 92, 238, 760
Processing
  data, 15, 210, 216, 286, 287, 400, 513, 666, 667, 795, 964, 1055, 1097, 1115, 1189
Product
  development, 87–89, 92, 102, 127, 167, 201, 228, 232, 234, 238, 239, 251, 523, 600, 602, 603, 605, 607, 608, 610, 611, 615, 617–620, 629–631, 633, 644, 646, 647, 652, 654, 695, 782, 794, 1008
  disassembly, 451
  lifecycle management (PLM), 84, 99–101, 103, 117, 118, 127, 131–133, 135, 241, 329, 330, 339, 498, 562, 680, 737, 785, 789, 826, 827, 860, 885, 895–896, 1203
  smart, 117, 228, 230, 666
Production
  distributed, 860–863, 887
  flexible, 194, 228, 788–790
  modular, 228
Productivity, 48, 67, 71, 75, 77, 81, 103, 127, 188, 191, 234, 327, 343, 456, 523, 526, 600, 610, 613, 623, 638, 648, 652, 717, 795, 850–854, 856, 887, 889, 922, 957, 959, 998, 1138, 1203
Prognostics/diagnostics, 55, 404, 578–580, 582, 585–591, 600, 603, 607, 608, 625–630, 638, 640, 651, 722, 747, 764, 990–994, 1011, 1026, 1036, 1194
Programmability, 9, 13, 300, 301, 314, 315, 1205, 1206, 1209, 1210
Project(s)
  complex, 11, 84, 677–700, 800, 845, 864, 897
  context, 540–541
  model-based management, 679, 684
  non-profit, 540, 545, 546
  open source, 515, 520, 532, 534, 536, 540, 542, 544, 546, 554
  scope, 690, 691, 695, 868
Proof of concept (PoC), 50, 67–69
Protection, 13, 213, 375, 378–380, 461, 504, 506, 521, 571, 572, 619, 632, 645, 650, 664, 669, 707, 718, 832, 833, 880, 973, 997, 1032, 1038, 1168–1184, 1187–1194, 1196, 1198, 1199
  data, 521, 1038, 1184, 1187–1194, 1199
Prototyping, 65, 88–90, 220, 221, 270, 791, 988, 1181, 1184

Q
Quality and compliance, 736, 751
Quality of life, 902, 1017, 1036–1038, 1083, 1159

R
Raphael’s Cartoons, 1088–1091
Real-time, 7, 23, 84, 127, 161, 186, 209, 250, 254, 280, 310, 344, 371, 400, 452, 498, 552, 561, 583, 600, 663, 678, 710, 738, 794, 801, 854, 903, 976, 1038, 1046, 1096, 1115, 1151, 1202
Real world evidence, 1038
Reasoning, 7, 210, 262, 271, 302–308, 312, 317, 322, 323, 325, 328, 329, 339, 342, 350, 461, 498, 624, 628, 650, 1140
Registry, 209
Regulations, 60, 89, 184, 186, 228, 235, 237, 241, 269, 292, 295, 318, 328, 341, 572, 608, 611, 644, 645, 647, 693, 785, 803, 832, 936, 937, 975, 994, 1014, 1035, 1038, 1162, 1168, 1185, 1187–1195, 1197–1199
Reinventing construction, 851, 854, 856, 882, 886–887
Reliability, 24, 159, 173, 181, 185, 191, 194, 200, 215, 216, 320, 375, 382, 391, 400, 402, 404, 405, 408, 409, 424, 426, 434, 436, 441, 448, 452–454, 461, 498, 521, 524, 594, 609, 615, 624, 625, 628–630, 632, 633, 635, 640, 650, 652, 666, 669, 708, 730, 780, 781, 783, 784, 804, 806, 855, 887, 893, 932, 950, 958–959, 972, 975, 976, 982, 992–994, 1010, 1017, 1037, 1055, 1144
Remediation, 723
Repeatability, 154, 850, 851, 856–858, 896
Replica, 23, 25, 114, 206, 254, 310, 384, 437, 440, 441, 445, 603, 617, 618, 650, 855, 896, 929, 953, 959, 1083
Replication, 114, 116, 127, 149, 271, 307, 310, 311, 384, 826, 1210
Representation
  computational, 549
  data, 9, 260, 517, 1210
  knowledge, 303, 308, 309, 321, 349
  logical, 5, 315, 399
  stored, 549, 550
  virtual, 57, 84, 146, 229, 230, 232, 236, 238, 256, 259, 266, 368, 384, 479, 498, 548–550, 608, 726, 745, 751, 752, 756, 760, 766, 825, 945, 946, 976, 977, 1025
  visual, 113, 373, 452, 471, 472, 482, 706, 708, 709, 751, 1124
Requirements, 9, 89, 99, 129, 154, 186, 208, 228, 271, 281, 303, 340, 373, 400, 434, 448, 501, 534, 565, 586, 600, 662, 679, 709, 741, 781, 801, 890, 961, 975, 1051, 1083, 1123, 1144, 1173, 1206
  regulatory, 237, 890, 1013
Resilience, 65, 79–93, 159, 161, 185, 200, 328, 400, 569, 570, 573, 601, 623, 645, 648, 655, 736, 764, 765, 769, 904, 908, 920, 923, 932, 934, 936, 938, 940, 945, 946, 950
Resources, 9, 74, 102, 137, 166, 182, 206, 243, 258, 279, 301, 354, 377, 396, 435, 448, 499, 542, 560, 580, 600, 667, 678, 712, 739, 795, 800, 908, 987, 1026, 1084, 1115, 1139, 1205
Retail, 30, 41, 754, 883, 1138, 1147, 1153
Return on investment (RoI), 170, 973, 1016, 1017
Review(s), 101, 115, 166, 177, 178, 207, 214, 381, 500, 511, 535, 553, 595, 600, 610–612, 617, 634, 645, 647, 710, 714, 723, 765, 790, 805, 806, 820, 822, 831, 833, 835, 879, 944, 980, 986, 988, 1026, 1096, 1117, 1168, 1169, 1181–1184, 1186, 1198
Revolution
  digital, 245, 246, 777–778, 888, 896, 912, 960, 1041, 1108
Rights, 4, 35, 70, 101, 145, 169, 183, 219, 242, 271, 288, 305, 344, 370, 408, 467, 517, 534, 580, 604, 669, 686, 708, 766, 791, 819, 860, 936, 1008, 1024, 1063, 1089, 1128, 1149, 1168, 1210
Risk mitigation, 564
River Bend nuclear station, 876, 878, 892–895
Robotics, 374, 415–418, 515, 571, 604, 607, 728, 752, 787, 895, 999, 1116, 1203
Robots, 27, 52, 134, 171, 190, 191, 193, 311, 410, 416, 417, 729, 741, 745, 756, 757, 787, 794, 1034, 1051–1053, 1055, 1159, 1160
Robustness, 14, 200, 331, 366, 436, 443, 448, 461, 466, 474, 566, 612, 613, 625, 632, 648, 650, 667, 1010, 1017, 1206

S
Safety, 23, 87, 127, 173, 185, 245, 317, 341, 366, 400, 464, 505, 572, 578, 600, 673, 710, 739, 775, 803, 863, 904, 972, 1032, 1051, 1153, 1195, 1206
  concerns, 464, 479
Safety management
  nuclear power, 974–975
Scalability, 159, 216, 219, 256, 260, 271, 358–359, 622, 1115
Scale up, 67
Science
  data, 397, 578, 592, 594–597
S-curve
  adoption, 25
Security
  data, 273–274, 377, 769, 949, 1189, 1193
  multi-level, 359
  network security, 377
Self-reconfiguration, 442–444
Semantics, 10, 128, 143, 146, 147, 160, 206, 210, 218, 219, 221, 259–261, 264, 268, 274, 339, 341–343, 349, 350, 354–358, 423, 516, 672, 684, 700, 935, 941, 942, 948, 949, 999, 1108
Sensing, 7, 26, 139, 194, 195, 205, 206, 285, 306, 307, 314, 387, 406, 583, 594, 631, 650, 693, 826, 828, 830, 844, 959, 991, 992, 1009, 1050–1053, 1058, 1059, 1171, 1210
Sensor limitations, 578, 582
Service-oriented architecture (SOA), 158, 184
Services, 3, 23, 67, 126, 157, 182, 205, 228, 258, 280, 307, 338, 367, 398, 434, 468, 499, 534, 565, 580, 600, 662, 681, 716, 753, 776, 821, 850, 904, 975, 1035, 1054, 1108, 1115, 1142, 1178, 1202
  representation, 3, 399, 548, 550
Shadows
  digital, 661, 666–668, 672, 673, 678, 680, 681, 684, 693–698, 700, 1117, 1140, 1154
Shipbuilding
  commercial, 807–808, 810
  naval, 801, 807–808, 810, 826, 834, 838
Shipbuilding process
  development, 802, 803
  disposal, 802–807
  operations, 802
Shipbuilding Process Construction, 802, 803, 805–806
Shop floor
  reluctance, 466–467
Siemens, 99, 127, 215, 217, 230, 238, 240–246, 249, 250, 520, 796, 1150, 1163
Sikorsky, 600, 619, 634, 638–643, 646
Simulation
  agent-based, 683, 694, 695
  computational, 791, 793
Situation awareness, 300, 312, 323, 327, 328, 767
Sizewell B Nuclear Plant, 880, 882
Smart cities
  areas of impact, 912–913
  challenges, 910
  initiatives, 912
  maturity models, 926–932
  success stories, 912, 913
Social media, 313, 571, 572, 744, 915, 1086, 1098
Societal impacts, 1167–1199
Society 5.0, 126, 128, 129
Software
  as-a-service (SaaS), 42, 216, 1185
  models, 4, 5, 301, 831
  networks, 436
  open source, 200, 517, 518, 532–534, 539–544
Software construction projects, 536–537
Softwarization, 8, 10, 11, 1211
Spatial web, 1149, 1150, 1154
SQL database, 263, 425, 740
Stakeholders, 30, 31, 43, 52, 83, 93, 140, 143, 173, 177, 197, 198, 212–214, 238, 267, 308, 312, 324, 343, 347, 396, 403, 468, 516, 538, 548, 572, 605, 612, 613, 615, 634, 637, 648, 649, 653–655, 666, 679, 681, 684, 801, 809, 811, 819, 826, 827, 834–838, 840, 842, 845, 903, 904, 907–909, 911, 932–936, 939, 940, 949, 952, 957, 959, 1012, 1014, 1016, 1035–1038, 1207
Standards
  certification, 389
  compliance, 388, 1185
  Cultural Heritage and Art, 1082, 1100, 1108–1109
  ETSI NGSI-LD, 513–514, 516, 523
  heterogeneity, 387
  IEC TC65, 509–512, 527
  ISO 23247, 500, 501
  ISO/IEEE 11073 (Smart Health) Standards, 507–508
  ISO/TC 184 (Industrial Data), 499–507, 527
  OneM2M, 499, 512–513, 527
Stopping diseases, 1031–1032
Strategy, 33, 80, 82, 84, 211, 219, 233, 235, 239, 241, 269, 282, 293, 322, 323, 327, 328, 331, 388, 400, 448, 463, 466, 481, 521, 562, 564–571, 597, 618, 629, 645, 647, 648, 693, 699, 751, 755, 756, 764, 766–768, 775, 787, 794, 800, 819, 823, 826, 834, 836–838, 851, 856, 859, 872–874, 912, 916, 917, 934–936, 938, 939, 945, 950, 953, 954, 956, 979, 990, 993, 994, 998, 1000, 1002, 1016, 1017, 1029, 1057, 1096, 1151, 1168, 1169, 1172, 1175, 1178, 1182, 1184–1187, 1191–1198
Structures, 12, 16, 22, 23, 32, 33, 37, 40, 42, 43, 53, 60, 91, 132, 137, 143, 144, 157, 191, 194, 196, 231, 241–243, 254, 256, 258–260, 263, 264, 274, 275, 279, 285, 289–291, 327, 331, 332, 349, 357, 385, 387, 396, 398, 400, 417, 427, 438, 500, 503, 509, 514, 526, 539–541, 561, 567, 592, 602, 604, 607, 609, 614, 645–648, 652, 654, 655, 672, 673, 685, 694, 695, 698, 699, 750, 751, 779, 788, 821, 837, 853, 867, 868, 874, 876, 894, 909, 934–935, 955, 959, 961, 963, 972, 974, 975, 977, 978, 986, 993, 998, 1001, 1005, 1011, 1012, 1014, 1025, 1028–1030, 1049, 1051, 1069, 1090, 1094, 1096, 1119, 1141, 1181, 1192, 1210–1212
  data, 132, 254, 259–260, 263, 264, 385, 604, 654, 655, 743
Studies, 8, 66, 67, 69, 74–76, 78, 79, 84, 85, 136, 214, 301, 302, 326, 447–449, 451–462, 465, 470–472, 474, 475, 616, 622, 652, 661, 680, 681, 683, 804, 806, 820, 832, 851, 852, 854, 856, 861, 883, 934, 950, 952, 953, 959, 973, 981–983, 999, 1026, 1031–1033, 1038, 1040, 1041, 1057, 1065, 1067, 1072, 1083, 1087, 1089, 1094, 1096, 1105, 1106, 1119, 1196
Subsea operations, 704, 705, 709, 710
Success in construction management, 987, 988
Supply chain
  digital, 24, 28
  Digital Twin, 24, 27, 59, 754–756
  Modeling and Simulation, 826–827
  optimization, 761–762
  transformation, 28, 240, 755, 854
Sustainability, 32, 60, 79–93, 183, 186, 195–196, 295, 329, 736, 904, 907, 908, 922, 928, 932, 934, 936, 938, 940, 945, 997, 1037
Swarm intelligence, 1161
Sydney Opera House, 875
System(s)
  architecture, 126, 128, 129, 190, 435, 511, 607, 633, 952, 987, 1175
  complex, 10, 23, 26, 30, 44, 58, 59, 84, 118, 184, 291–293, 332, 349, 353–354, 404, 406, 435, 437–439, 445, 450, 482, 559–573, 585, 625, 682, 694, 777, 786, 1011, 1012, 1025, 1041, 1156, 1168, 1184, 1185, 1190, 1198, 1206, 1209, 1210
  control, 30, 217, 317, 318, 320, 371, 375, 383, 400, 406, 510, 525, 569, 604, 627, 666, 693, 716, 777, 946, 975, 978, 1011
  dependable, 382–384
  electronic, 412, 777, 810
  engineering (SE), 183, 187, 237, 366, 387, 404, 603, 607, 624, 625, 646, 647, 680, 784, 785, 801, 825, 889, 1017
  health, 273, 507, 508, 628, 638, 993, 1038
  Health Management (SHM), 600, 608, 612, 614, 615, 618, 623–629, 632, 633, 635, 637–641, 644, 649, 653, 654
  information, 109, 128, 259, 339, 348, 376, 500, 511, 517, 525, 569, 1096, 1125
  Integration Laboratory (SIL), 414, 629–633
  knowledge based, 303, 317, 318
  mechanical, 206, 242, 578, 580–584, 590, 777, 778, 783, 1052, 1053
  modelling, 434, 438, 440, 444, 567
  Network Management (NMS), 434–436, 1205
  reasoning, 629
  rule based, 317, 318
  sociotechnical, 679, 681, 682, 693–694, 700
  software, 384, 387, 413, 562, 565, 631, 633, 670, 978, 1209
  subsystem, 40, 45, 46, 146, 184, 228, 250, 251, 317, 351–354, 382, 427, 438, 533, 549–551, 620, 623–626, 666, 671, 687, 777, 792, 796, 801, 802, 805–807, 810, 821, 824, 977, 982, 988, 994, 1015, 1031, 1058
  of Systems (SoS), 254, 267, 353, 396, 410, 413, 566, 570, 946

T
Teamwork, 678, 679, 683, 684, 693–698, 700, 811
Technology
  art conservation, 1102–1105
  art restoration, 1102–1105
  creation of Digital Cultural Heritage, 1084
  information (IT), 11, 12, 14, 70, 99, 102, 109, 116, 146, 157, 159, 173, 184, 190, 200, 201, 215, 220–222, 233, 260, 263, 325, 341, 349, 380, 383, 404, 406, 409, 411–413, 419, 421, 423, 425–427, 473, 525, 542, 543, 548, 549, 551, 560, 567, 666, 668, 671, 718, 739, 740, 768–769, 800, 826, 845, 885, 998, 1075, 1076, 1085, 1115, 1129, 1175, 1194, 1203
  information and communication (ICT), 13, 126, 208, 525, 913, 918, 934, 1105, 1108, 1203
  operations (OT), 201
Telecom, 189, 340–348, 353
Telecommunications, 337, 338, 344–347, 358, 360, 434, 499, 501, 512, 926, 930, 964, 1190
  network, 337, 338, 344–347, 358, 360, 434, 499, 501, 512, 926, 930, 964, 1190
Testing, 5, 8, 10, 14, 49, 80, 82, 88–90, 100, 111, 114, 116–118, 170, 187, 188, 213, 221, 241, 244, 247, 270, 285, 306, 368, 389, 390, 400, 402, 404, 512, 533, 534, 537, 603, 607, 615, 619, 624, 625, 628, 631–633, 638, 650, 651, 653, 654, 673, 726, 738, 776, 780–783, 791–793, 795, 796, 805–807, 809, 820, 821, 824, 834, 845, 863, 884, 946, 950, 977, 980, 988, 989, 995, 1026, 1029, 1030, 1115, 1145, 1147, 1175, 1184, 1195, 1205, 1207
Thread
  digital, 23, 42, 45–47, 49, 51, 53, 56, 59, 84, 92, 101, 102, 107, 108, 133, 216, 221, 231, 232, 234, 240, 245, 254, 256, 260, 263, 272, 275, 517, 594, 604, 612, 613, 634, 678, 698–700, 785, 794, 820, 827, 829, 1140, 1154
Three Mile Island Clean-up Project, 879–880
Threshold
  failure, 436, 582–584, 589
Timeliness, 15, 269, 309, 374, 640, 682, 743, 1055
Tourism, 447, 917, 946, 965, 1106–1108, 1147
Trademarks, 1178, 1180, 1183, 1185
Trade secrets, 1169, 1173–1177, 1181, 1183, 1185, 1198
Training, 5, 44, 51, 220, 241, 284, 305, 318, 321, 325, 328, 407, 447–486, 520, 541, 552, 553, 600, 607, 637–640, 648, 654, 704, 712–715, 718, 724, 726, 728, 730, 760, 762, 804, 806, 811, 824, 828, 829, 844, 857, 872, 880, 972, 981, 990, 994–995, 997, 1000, 1039, 1063, 1065, 1071, 1072, 1103, 1151, 1157, 1175, 1192, 1197, 1206
Transdisciplinary, 275, 349
Transformation
  digital, 67, 68, 71, 73–75, 78, 79, 110, 156, 161, 166, 172, 229, 232–237, 239, 240, 244, 255, 294, 295, 405, 410, 419, 427, 683, 707, 776, 796, 842, 887, 908, 909, 911, 912, 928, 933–940, 956, 959, 960, 1116, 1149, 1152, 1203, 1212
  manufacturing, 736, 751, 754
Transparency, 93, 186, 670, 938, 946, 947, 1160, 1168, 1194–1198
Transportation, 10, 30, 37, 66, 81, 88–90, 405, 406, 412, 415, 427, 569–571, 704, 740, 746, 754, 761, 762, 774, 778, 779, 865–867, 921, 937, 941, 944, 964, 997, 1037, 1190, 1195
Trials, 44, 109, 139, 330, 388, 805–808, 824, 1024–1026, 1029, 1031, 1034, 1038, 1039, 1073
Trust, 183, 186, 199, 269, 273, 274, 322, 366–368, 371, 376, 382–391, 475, 521, 547, 551, 570–572, 607, 632, 653, 655, 708, 709, 720, 835, 937, 938, 947, 949, 1036, 1038, 1088–1090, 1147, 1158, 1161, 1162, 1197
Trusted cloud infrastructure, 199–201
Trustworthiness, 184, 185, 368, 382, 385, 389, 517, 551, 917, 1168, 1194–1198
Tuning, 214, 295, 304–306, 313, 315, 330, 631, 983, 1040, 1160, 1204, 1206, 1210
Turing, A., 116

U
Underserved Patient Populations, 1026–1028
Unmanned aerial vehicle (UAV), 616, 617, 963, 1097
Urban Digital Twins (UDTs), 919–932, 939–965
Usage scenarios, 135, 265, 370–371, 373, 382, 386, 1047
Use cases, 9, 79–93, 103, 105, 117, 127, 141, 155, 156, 163, 170–172, 174, 186, 188–192, 195, 200, 237, 266, 338, 340–348, 360, 397, 405, 413, 417, 418, 434, 449, 452, 455–458, 461–463, 468, 474, 478, 484, 485, 504, 511, 516, 521, 550–554, 578, 585–586, 603, 662–666, 726, 737, 738, 757–763, 767, 802, 809–812, 816–819, 822–824, 829, 837–844, 907, 926, 930, 932, 946, 948, 952, 953, 999, 1011, 1138, 1139, 1168, 1184, 1193, 1203, 1212, 1213
User
  acceptance, 461, 466–467, 480, 1154
  centric, 737, 910–913, 919, 939, 1024
  experience, 18, 30, 158, 248, 250, 251, 464–467, 475, 704, 719
  interface, 154, 158, 212, 218, 401, 454, 461, 462, 467, 476, 479, 484, 504, 1059, 1066, 1173
Using digital to make art, 1098

V
Value in oil & gas projects, 704–707
Value proposition, 21, 24, 154, 158, 186, 200, 257, 266, 540–544, 652–653, 800, 802, 1171
Vehicle
  definition, 781, 783–786
  design, 89, 399, 400, 774, 776, 782, 783, 795, 796
  manufacturing, 245, 410, 511, 752, 753, 776, 782, 783, 794
Verification, 89–90, 237, 251, 259, 359, 404, 612, 614, 615, 619, 629, 632, 635, 653, 670, 707, 718, 725, 780, 781, 784, 786, 787, 790–793, 802, 811, 819, 980, 981, 997, 1126, 1128, 1129
Vertical Takeoff and Landing (VTOL), 616, 617
Viability, 263, 449, 461, 468, 611, 858, 947
Vickers, J., 101, 562, 737
Virtual archeology, 1084–1086
Virtual Commissioning (VC), 136, 755, 763, 1118, 1121, 1126, 1128, 1129
Virtual Environment for Reactor Applications (VERA), 978–980, 990, 991, 995, 996, 1003
Virtualization, 98, 411–413, 415, 418, 425, 426, 1149, 1150
Virtual Reality (VR), 6, 7, 27, 31, 35, 55, 57, 105, 128, 133, 134, 148, 154, 161, 369, 372, 388, 404, 448, 454, 551, 553, 564, 640, 654, 714, 726, 730, 828–829, 998, 1000, 1013, 1082, 1083, 1085, 1097–1099, 1101, 1115, 1119–1120, 1129, 1145, 1153–1155, 1164, 1171
Virtual twin innovators, 1039–1041
Vision, 24, 26, 27, 78, 126, 184, 190, 206, 207, 234, 240, 251, 289, 338, 354, 360, 367–369, 372, 385, 387, 389, 391, 410, 417, 450, 452, 523, 592, 594, 605, 609, 638, 639, 743, 744, 752, 754, 819, 828, 838, 917, 958, 1012–1016, 1046, 1064, 1072, 1073, 1076, 1096
Visualization
  data, 217, 219, 522, 713, 763, 948
  techniques, 462, 473, 709
Vocational school, 1114, 1116, 1117, 1120, 1126–1129
Vulnerability, 366, 367, 371, 376, 378–382, 718, 806, 844, 1190

W
What to watch out for, 649–652
Wooden mock-ups, 890–897
Workflow, 134, 178, 182, 217, 399, 404, 450, 454, 455, 476, 522, 680, 711–713, 719, 720, 740, 784, 811, 823, 954, 963, 977, 989, 1002–1004, 1014, 1015

Z
Zollhof Towers, 874