De Gruyter Series on Smart Computing Applications
Edited by Prasenjit Chatterjee, Dilbagh Panchal, Dragan Pamucar, Sarfaraz Hashemkhani Zolfani
Volume 8
Yogini Borole, Pradnya Borkar, Roshani Raut, Vijaya Parag Balpande, Prasenjit Chatterjee
Digital Twins
Internet of Things, Machine Learning, and Smart Manufacturing
Authors Dr. Yogini Borole 6 Chandan Shiffalika Sr. No 35-1+2 Lane No. 6 Pune 411015 Maharashtra India [email protected] Dr. Pradnya Borkar 38 Wardhman Nagar, Plot No. 237 Nagpur 440008 Maharashtra India [email protected]
Dr. Vijaya Parag Balpande 132 Ganesh Nagar, Sharda Chowk Nagpur 440009 Maharashtra India [email protected] Dr. Prasenjit Chatterjee Department of Mechanical Engineering MCKV Institute of Engineering 243 G. T. Road North Liluah 711204, Howrah West Bengal India [email protected]
Dr. Roshani Raut C-407 Phase-VI, Tingre Nagar Lane No. II Vishrantwadi Pune 411015 Maharashtra India [email protected]
ISBN 978-3-11-077878-6 e-ISBN (PDF) 978-3-11-077886-1 e-ISBN (EPUB) 978-3-11-077896-0 ISSN 2700-6239 Library of Congress Control Number: 2023936212 Bibliographic information published by the Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de. © 2023 Walter de Gruyter GmbH, Berlin/Boston Cover image: monsitj/iStock/Getty Images Plus Typesetting: Integra Software Services Pvt. Ltd. Printing and binding: CPI books GmbH, Leck www.degruyter.com
Preface

An accurate virtual representation of a physical object is called a digital twin. A wind turbine, for instance, is equipped with various sensors connected to critical functional regions. These sensors generate information on the performance of the physical object in multiple areas, including energy output, temperature, environmental conditions, and more. A processing system then transfers this information and applies it to the digital copy. The market for digital twins is proliferating, suggesting that even though they are already used in many different industries, demand will continue to rise for a long time. This book is intended as a guide for professionals working in industry, for institutions, and for postgraduate and graduate students taking courses in engineering fields such as computer, electronics, and mechanical engineering. It will also serve research scholars, researchers doing interdisciplinary work, industrialists, and scientists learning techniques and technologies related to the intelligent industry. Each chapter reflects a different application field and methodology. Chapter 1 presents the concept of the digital twin and its architecture. This chapter explains that a digital twin is an accurate virtual translation of a physical model. With the help of digital twins, it is possible to better understand the performance characteristics of substantial parts, processes, or systems. The performance of a system, a method, or a process can be remotely visualised, predicted, and optimised by industrial operators using digital twins. Chapter 2 covers the benefits of digital twin modelling. The chapter explains the significance, challenges, and advantages of digital twins, and how they benefit manufacturing in the Industry 4.0 era.
It mentions the Industrial Revolution and how it changed the way goods are produced and the way consumers interact with the manufacturing process, enabling more sophisticated requests for various consumer items. It also focuses on the goal of the fourth Industrial Revolution, called Industry 4.0, which combines the advantages of mass production with individualisation and virtual product models, referred to as “Digital Twins”. Chapter 3 provides an overview of the end-to-end conceptual model of a digital twin, which reflects its complementary physical item from the ground to the cloud. It explains the multiple layers of the digital twin model, including the overlapping security layer and the physical, communication, virtual space, data analytics and visualisation, and application layers, and comprehensively reviews the hardware and software techniques utilised in constructing the digital twin model. A use case demonstrates how the layers gather, exchange, and process the physical object data from the ground to the cloud. Chapter 4 covers IoT and the Digital Twin. It describes the network of digitally connected items known as the Internet of Things (IoT). IoT devices come in numerous shapes and sizes: they may be an intelligent virtual assistant in your living room, an innovative home security system, or your garage-parked car. They appear in the
form of traffic lights in smart cities that are connected to the internet. A “digital twin” is the term used to describe a virtual representation of a physical component of a system. The IoT device’s sensors collect data and transmit it to its digital duplicate. IoT researchers and developers use the data to generate new logic and schemas, which they test against the digital twin. The tested code is then uploaded into the IoT device via over-the-air updates. A digital twin is thus a representation of an actual object in cyberspace. Chapter 5 explains how machine learning and IoT are used to construct digital twins and how system builders should approach the development of digital twins that use artificial intelligence, including the biggest hardware challenges in artificial intelligence and machine learning. The Internet of Things system transmits real-time data via edge computing and smart gateways. The digital twin model is fed the preprocessed data that is readily available online. Additionally, offline data is processed with data mining methods and used as input for the digital twin. Modelling and analytics techniques can be combined to achieve a model with a specified purpose. With the aid of IoT sensors and machine learning algorithms, the entire workflow is maintained in order to obtain accurate, precise predictions. Chapter 6 presents intelligent and smart manufacturing with AI solutions. It discusses current artificial intelligence opportunities that allow manufacturing to become smarter, with a focus on research and development, procurement, and assembly line processes. Smart manufacturing refers to a group of technologies and solutions, including artificial intelligence (AI), robotics, cyber security, the Industrial Internet of Things (IIoT), and blockchain, which are integrated into a manufacturing ecosystem to optimise manufacturing processes through data generation and/or acceptance.
Chapter 7 presents an overview of the literature on data fusion, covering different data fusion methodologies and discussing the challenges and opportunities of each. It focuses on information and data fusion for decision-making. Data fusion techniques integrate relevant information and data from multiple sensors to draw conclusions that are more accurate than those that could be drawn from a single, independent sensor. Although the idea of data fusion is not new, real-time data fusion has become increasingly practical as a result of additional sensors, enhanced processing techniques, and upgraded hardware. Data fusion technology has quickly advanced from a hastily assembled set of related concepts to a comprehensive engineering discipline with standardised language, established system design principles, and libraries of trustworthy mathematical operations and procedures. Chapter 8 presents digital twin use cases in industries. The chapter describes the prominent and dominant use cases of the fast-evolving digital twin, along with a number of industry use cases and benefits that substantiate the varied claims about the future of this unique and sustainable discipline. Chapter 9 is about security in the digital twin. The chapter identifies ways and means of collecting, organising, and storing data in a secured cloud environment. The data is filtered according to use and priority and pushed into the cloud. An exclusive algorithm for a secured cloud is proposed, which would
greatly benefit users and providers in handling and processing the data effectively. It also explains that cyber security and protection are ongoing concerns for every business, and that digital twin technology may hold the key to stronger online defence. Chapter 10 describes the modelling and subsequent implementation of an integrated system that consists of a real material handling system and its digital twin, based on physics simulation. Cyber-physical production systems offer flexibility and adaptability when producing different goods in small batches. Owing to changing paths and a wide range of task components, material flows in such systems can become extremely complex, leading to physically induced disturbances that can cause accidents, poorer yield, and high costs. This problem can be addressed with a physics engine that recreates the real-world interaction between the workpieces and the material handling equipment. Chapter 11 offers an architecture for information flow between digital twins and virtual reality software, followed by a use case on the design and assessment of a collaborative workplace involving humans and robots. It also presents a hybrid discrete-continuous simulation architecture created specifically for digital twin applications. The framework’s base is SimPy, a well-known Python discrete-event simulation library, and a step-by-step guide is given for integrating continuous process simulations into SimPy’s event-stepped engine. Chapter 12 gives a case study of a smart city based on digital twins. Digital twins provide a clear understanding of the smart city and can help steer it, from urban planning to land-use enhancement. They enable plans to be demonstrated prior to execution, revealing flaws before they materialise. Using digital techniques, engineering components such as housing, wireless network antennas, solar panels, and public transit can be planned and prioritised. The chapter also covers the objectives met through the development of digital twin-based smart cities.
Contents

Preface V
Chapter 1 What Is Digital Twin? Digital Twin Concept and Architecture 1
1.1 Difference Between Digital Twin and Simulations 2
1.2 History of Digital Twin Technology 2
1.3 Various Types of Digital Twins 3
1.4 Pillars of Digital Twin Technology 4
1.5 Advantages of Digital Twins 6
1.5.1 Digital Twin Market and Industries 6
1.6 Applications 7
1.7 The Future of Digital Twin 9
1.7.1 Digital Twins Predict the Present and the Future 9
1.8 Example of Digital Twin: An Engineer’s Point of View 10
1.9 Digital Twin Architecture 11
1.10 Digital Thread: A Link Between the Real World and the Virtual Worlds 13
References 14

Chapter 2 Benefits of Digital Twin Modelling 17
2.1 Industry 4.0 17
2.2 Industry 4.0’s Digital Twin 20
2.3 Advantages of a Digital Twin 20
2.4 Things to Think About Before Using Digital Twins 22
2.5 Challenges to Implement Digital Twin 23
2.5.1 Data Analytics Challenges 23
2.5.2 IoT/IIoT Challenges 24
2.5.3 Digital Twin Challenges 25
References 26

Chapter 3 Modelling of Digital Twin 27
3.1 Introduction 27
3.2 Design and Implementation of Digital Twin 28
3.2.1 Design of Digital Twin 28
3.2.2 Data 29
3.2.3 Modelling 29
3.2.4 Linking 29
3.2.5 Digital Twin Architecture 30
3.2.5.1 The Physical Layer 30
3.2.5.2 The Network Layer 30
3.2.5.3 The Computing Layer 31
3.3 Hardware/Software Requirement 33
3.3.1 Hardware Components 33
3.3.2 Data Management Middleware 34
3.3.3 Software Components 34
3.3.4 Digital Thread 34
3.4 Use of Case Study 35
References 37
Chapter 4 Digital Twin and IoT 39
4.1 Contribution of IoT in Development of Digital Twin 39
4.2 Digital Twin Use Cases by Industry 41
4.2.1 Supply Chain 41
4.2.1.1 Construction Companies 43
4.2.1.2 Healthcare 43
4.2.1.3 Use in Manufacturing Industry 43
4.2.1.4 Aerospace 44
4.2.1.5 Automotive 44
4.2.1.6 Self-Driving Car Development 44
4.3 Insight of Digital Twins in IoT 44
4.4 Applications of Digital Twin in Healthcare: A Case Study 45
4.4.1 Digital Twin of Hospitals 45
4.4.2 Digital Twin of Human Body 46
4.4.2.1 Diagnosis at Personal Level 46
4.4.2.2 Efficient Treatment Planning 47
4.4.3 Digital Twins for Development of Medical Instruments and Drugs 47
4.5 Challenges of Digital Twin in Healthcare 47
4.5.1 Less Adoption 47
4.5.2 Quality of Data 48
4.5.3 Privacy of Data 48
References 48

Chapter 5 Machine Learning, AI, and IoT to Construct Digital Twin 49
5.1 Introduction 49
5.2 Big Data, Artificial Intelligence, and Machine Learning 50
5.3 Big Data, Artificial Intelligence, Machine Learning, IoT, and Digital Twin: A Relationship 51
5.4 Deployment of Digital Twin Using Machine Learning and Big Data 52
5.4.1 Smart Manufacturing 52
5.5 Use of AI 53
5.5.1 Digital Twin in Aerospace 56
5.5.2 Application of AI in Autonomous Driving 56
5.5.3 IoT in Self-Driving Cars 57
5.5.4 Big Data in Product Life Management 57
References 58
Chapter 6 Intelligent and Smart Manufacturing with AI Solution 61
6.1 Introduction 61
6.2 Twinning of Components 62
6.3 Twinning of Products or Assets 62
6.4 Twinning of Process and Production 63
6.5 Twinning of Systems 63
6.6 Examples 63
6.6.1 Aviation Industry 63
6.6.2 Automobile Industry 64
6.6.3 Industry of Tyre Manufacturing 64
6.6.4 Power Generation 64
6.6.5 Supply Chain Simulation 65
6.6.6 Urban Planning 65
6.6.7 Artificial Intelligence and Industry 4.0 65
6.6.8 Opportunities of Research in AI in Smart Industry 66
6.6.9 AI in Electronic Industry 67
6.6.10 Agriculture and Artificial Intelligence 67
6.7 Conclusion 68

Chapter 7 Information and Data Fusion for Decision-Making 71
7.1 Introduction 71
7.2 Data Source and Sensor Fusion 72
7.3 Job Localisation and Positioning of System 74
7.3.1 Magnetometer and LPF 75
7.3.2 Magnetometer and Gyro 75
7.4 Different Kinds of Data Fusion Techniques 78
7.4.1 Durrant-Whyte Classification 78
7.4.1.1 Complementary Data 78
7.4.1.2 Redundant Data 79
7.4.1.3 Cooperative Data 79
7.4.2 Dasarathy’s Taxonomy 79
7.4.2.1 Data Input–Data Output (DAI–DAO) 79
7.4.2.2 Data Input–Feature Output (DAI–FEO) 79
7.4.2.3 Feature Input–Feature Output (FEI–FEO) 79
7.4.2.4 Feature Input–Decision Output (FEI–DEO) 79
7.4.2.5 Decision Input–Decision Output (DEI–DEO) 80
7.4.3 Abstraction Level Classification 80
7.4.3.1 Early Fusion 81
7.4.3.2 Late Fusion 81
7.4.3.3 Hybrid Fusion 82
7.4.4 JDL Taxonomy 82
7.4.4.1 Source Pre-processing – Level 0 82
7.4.4.2 Object Refinement – Level 1 82
7.4.4.3 Situation Assessment – Level 2 83
7.4.4.4 Impact Assessment – Level 3 83
7.4.4.5 Process Refinement – Level 4 83
7.4.5 Architecture Level Classification 84
7.4.5.1 Centralised Architecture 84
7.4.5.2 Decentralised Architecture 84
7.4.5.3 Distributed Architecture 84
7.5 Conclusion 85
References 85
Chapter 8 Digital Twin Use Cases and Industries 87
8.1 Introduction 87
8.2 Aviation and Aircraft 89
8.3 Production 91
8.4 Medicines and Universal Healthcare 95
8.5 Energy and Power Generation 97
8.6 Automotive 98
8.7 Refineries 99
8.8 Keen Town 100
8.9 Mining 102
8.10 Shipping and Maritime 103
8.11 Academia 105
8.12 Architecture 105
8.13 Markets 106
8.14 Remarks 107
8.15 Supply Chain in Pharmaceutical Company 108
8.16 Smart Transportation System 109
8.17 Manufacturing System 113
References 113
Chapter 9 Security in Digital Twin 115
9.1 Introduction 115
9.1.1 What Is a Digital Twin? 115
9.1.2 Internet of Things and Information Safety 116
9.1.3 The IoT Powers Digital Twin Technology 117
9.1.4 Is There No Longer a Divide Between Public and Private Data? 117
9.1.5 Challenges in Interoperability with Digital Twins 118
9.1.6 Advanced Digital Transformation and Twins 119
9.1.7 The Different Sides of Advanced Twin Security 119
9.1.8 What Are the Threats? 119
9.1.9 Your Activity Focuses 121
9.1.10 Security by Design 122
9.1.11 NIS/GDPR Compliance 122
9.2 Network Safety by Solution 122
9.2.1 Network Protection Computerised Twin 122
9.2.2 Prescient Cyber Security 122
9.2.3 Constant Assessment and Remediation 123
9.2.4 Nonstop Cyber Risk Assessment 123
9.2.5 Digital Protection Remediation and Countermeasures 123
9.2.6 Network Safety Forensics 123
9.2.7 Zero-Day Simulation and Defence 123
9.2.8 IT/OT Cyber Security 123
9.2.9 Network Protection Support 123
9.2.10 Overview 124
9.2.11 Framework Testing and Simulation 126
9.2.12 Recognising Misconfigurations 126
9.2.13 Entrance Testing 126
9.2.14 Framework 3 127
9.3 Knowledge Input 127
9.3.1 Engineer Expertise 127
9.3.2 Field Information 130
9.4 DAF Twinning Framework 130
9.4.1 Creator 130
9.4.2 Computer-Generated Environment 131
9.4.3 Simulation and Replication 131
9.4.4 Monitoring 132
9.4.5 Device Testing 132
9.4.6 Security and Safety Analysis 133
9.4.7 Behaviour Learning and Analysis 133
9.5 Management Client 134
9.6 Proof of Concept 134
9.6.1 Scenario Specification 135
9.6.2 Security Rule 138
9.6.3 Virtual Environment Generation 138
9.6.4 Simulation and Results 140
9.6.3.1 Comparison of the Environment 140
9.7 Related Work 142
9.8 Conclusions 144
References 145
Chapter 10 Implementation of Digital Twin 149
10.1 Introduction 149
10.2 Simulation of Physics 152
10.3 Digital Twin in Production 155
10.4 Research Gap 160
10.4.1 Modelling 161
10.4.2 Conditions 163
10.5 System Engineering 164
10.5.1 Interactions 167
10.6 Application 167
10.6.1 Actual Set-Up 167
10.6.2 Infrastructure for Communications 170
10.7 Digital Twin 172
10.7.1 Prognosis 172
10.7.2 Perception 174
10.7.3 Detection 175
10.8 Use Cases 175
10.8.1 Prognosis 176
10.8.2 Observing 177
10.8.3 Evaluates 178
10.9 Overview and Prospects 179
References 179
Chapter 11 Digital Twin Simulator 185
11.1 Introduction 185
11.2 An Examination of Simulation Methods and Frameworks 186
11.2.1 Simulating a Continuous System 186
11.2.2 Simulation of Discrete Events 187
11.2.3 MDC Simulation or Mixed-Resolution Simulation 187
11.3 Framework 188
11.4 Proposed Structure 190
11.4.1 An Improved Time Advancement Plan 191
11.4.2 A Case Study 192
11.4.2.1 Advantages of the Suggested Framework 195
11.5 A Framework for the Formal Simulation Model Configuration 196
11.5.1 Model Library 197
11.5.2 Library of Scenarios 198
11.5.3 Library of Algorithms 199
11.6 Operational Modelling Integration 201
11.7 Example of an Educational Implementation 203
11.7.1 Structural Disintegration 204
11.7.2 Development or Selection of Product Parts 204
11.7.3 Identify Particular Information Need 204
11.7.4 Establish or Expand a Model Library 204
11.7.5 Specify the Simulation Scenario 205
11.7.6 Run the Scenario Model and Evaluate the Results of the Simulation 207
11.8 Conclusion 211
References 212

Chapter 12 Case Studies: Smart Cities Based on Digital Twin 215
12.1 Introduction 216
12.2 Digital Transformation Is an Unavoidable Trend for Smart Cities 217
12.3 Cities and Their Digital Twins 217
12.4 Advanced Technology Used in Digital Twin Cities 218
12.5 Smart Cities and Digital Twins 219
12.5.1 Digital Twin-Based Features of Green Infrastructure 220
12.5.2 Utilisations of Digital Twins in Smart Cities 221
12.5.2.1 Smart City Management Brain 221
12.6 Digital Twin Administrations for Savvy Matrix 224
12.7 Public Epidemic Services in Smart Cities 226
12.8 Services for Flood Situation Observing 230
12.8.1 Smart Healthcare 231
12.8.2 Intelligent Transit 231
12.8.3 Intelligent Supply Chain 232
12.9 Digital Twin Cities Framework 232
12.10 Conclusion 233
References 234

Index 237
Chapter 1 What Is Digital Twin? Digital Twin Concept and Architecture

Abstract: The digital twin concept is an emerging technology used in various industries. This overview sheds light on the technology, its architectural construct, and the business initiatives around it. The chapter explains the digital twin concept using the cyber-physical system architecture, business use cases, and value proposition. A physical model that can be reflected accurately as a virtual model is called a digital twin. With the aid of digital twins, it is feasible to gain a better knowledge of the performance characteristics of tangible parts, processes, or systems. Industrial operators can use digital twins to remotely visualise, forecast, and optimise the performance of a system, a process, or an entire operation. A virtual object called a “digital twin” precisely copies the structure, content, and status of the related real-world thing. Consider it a test version to consult before continuing with product development or operating a real-world device. Implementing digital twins and the Internet of Things (IoT) go hand in hand. For instance, industrial organisations can simulate and test manufacturing systems by creating models from precise real-time data. Digital twin technology can also support numerous other organisational objectives [1]. Different definitions of the digital twin are as follows:
– A representation of an organisation’s physical resources that aids in outlining its overall business operating model.
– A computer programme that takes actual data about a physical system or item as inputs and delivers simulations or predictions of how those inputs will impact the physical system or object.
– A digital representation of a procedure, item, or service. . .
Consider the example of a wind turbine as the object being examined: various sensors are connected to the turbine at key functional regions. These sensors generate information about a variety of performance characteristics of the physical device, including energy output, temperature, and environmental conditions. The processing system then applies this information to the digital copy. Once such data is available, the virtual model can be used to run simulations, investigate performance problems, and generate potential improvements, all in an effort to produce insights that can then be applied to the real physical object.
https://doi.org/10.1515/9783110778861-001
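The wind-turbine example above can be sketched as a minimal data flow between sensors and a digital copy. This is an illustrative sketch only: every name, the sample readings, and the simplified cubic wind-power rule are assumptions, not details from the book.

```python
# Minimal sketch of the sensor-to-twin data flow described above.
# All names (TurbineTwin, readings, etc.) are illustrative, not from the book.

class TurbineTwin:
    """Virtual replica that mirrors the latest state of a physical turbine."""

    def __init__(self):
        self.state = {}    # last known sensor readings
        self.history = []  # retained for later analysis

    def ingest(self, readings: dict) -> None:
        """Apply a batch of sensor readings to the digital copy."""
        self.state.update(readings)
        self.history.append(dict(readings))

    def simulate_energy_output(self, wind_speed: float) -> float:
        """Toy what-if simulation using a hypothetical efficiency factor."""
        efficiency = self.state.get("efficiency", 0.4)
        return efficiency * wind_speed ** 3  # simplified cubic wind-power law

twin = TurbineTwin()
twin.ingest({"temperature_c": 61.0, "efficiency": 0.42, "wind_speed_ms": 9.0})
projected = twin.simulate_energy_output(wind_speed=12.0)  # explore a scenario
```

Once real telemetry has been ingested, the same twin object can be queried with hypothetical inputs, which is exactly the "run simulations, investigate problems" loop the text describes.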
1.1 Difference Between Digital Twin and Simulations

Both digital twins and simulations employ digital models to replicate a system’s numerous activities, but a digital twin is a full virtual environment, making it more valuable for research. The major difference is that a digital twin can run as many useful simulations as necessary to examine numerous processes, whereas a simulation typically studies only one particular process. There are further differences. Simulations, for instance, often make no use of real-time data. Digital twins, by contrast, are designed around a two-way information flow: object sensors provide the system processor with relevant data, and the processor returns the resulting insights to the original source object. Because they draw on better, more up-to-date data about a wide range of conditions, and have the additional computing power of a virtual environment, digital twins can study more problems from far more perspectives than standard simulations, giving them greater potential to ultimately improve products and processes.
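The two-way information flow just described (sensors feed the system processor, which returns insights to the source object) can be sketched as a closed loop. All names, thresholds, and the load-reduction rule below are invented for illustration.

```python
# Sketch of the two-way information flow that distinguishes a digital twin
# from a one-shot simulation. Names and thresholds are illustrative.

def process(readings: dict) -> dict:
    """System processor: turn raw readings into an insight for the object."""
    if readings.get("temperature_c", 0.0) > 80.0:
        return {"action": "reduce_load"}
    return {"action": "continue"}

class PhysicalAsset:
    def __init__(self, temperature_c: float):
        self.temperature_c = temperature_c
        self.load_factor = 1.0

    def sense(self) -> dict:                 # forward path: sensors -> twin
        return {"temperature_c": self.temperature_c}

    def apply(self, insight: dict) -> None:  # return path: twin -> object
        if insight["action"] == "reduce_load":
            self.load_factor *= 0.8

asset = PhysicalAsset(temperature_c=91.0)
asset.apply(process(asset.sense()))          # one closed loop of the cycle
```

A one-shot simulation would stop after `process()`; the `apply()` step back onto the physical asset is what makes the flow two-way.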
1.2 History of Digital Twin Technology

The concept of digital twin technology was first presented with the release of Mirror Worlds by David Gelernter in 1991. But Dr. Michael Grieves, who was then a professor at the University of Michigan, is recognised as having introduced the concept of digital twin software and as the first to employ it in a production setting, in 2002. Finally, the term “digital twin” was coined by NASA’s John Vickers in 2010. The fundamental concept of utilising a digital duplicate to inspect a physical object, however, can be seen far earlier. It is accurate to say that NASA was the first organisation to use digital twin technology, during its space exploration missions in the 1960s. At that time, each spacecraft in flight was painstakingly replicated in an earthbound version that NASA employees serving on flight crews used for training and simulation. Digital twins begin with the existence of a physical component, even before a prototype, and continue up until the end of the product’s useful life. Twins for design, manufacturing, and operations might be considered the three main stages of the digital twin life. Since the 1960s, physics-based models that employ numerical techniques, such as finite element analysis, have been utilised as digital twins for the design phase. Today, they are a commonplace design tool for quickly identifying the best ideas. The same set of tools has also been used to forecast how a part will react to a manufacturing process, enabling engineers to take manufacturing effects into account during the design process and prevent design difficulties later in the product life cycle.
Although the idea of twins based on techniques like machine learning has been known since the 1960s, it wasn’t until recently that advances in data pipelining and data science made their use common. These data-based predictive models perform the diagnostics (anomaly identification and root-cause investigation) and prognostics (remaining useful life prediction) of engineering systems, reducing scheduled maintenance and eliminating expensive breakdowns. The two types of predictive models also complement one another: real-time data, such as the discovery of crucial load cases that are lacking in the operating environment, can be used to enhance physics-based models, while for circumstances not covered by real-time data, data from physics-based models can be used as a supplement [1].
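A data-based diagnostic of the kind described above can be as simple as flagging readings that deviate from the statistics of normal operating data. The baseline data and the 3-sigma threshold below are invented for the example; a physics-based model could supply additional cases not present in the measured baseline.

```python
# Toy data-based anomaly diagnostic: flag readings far from baseline stats.
# Baseline data and the 3-sigma threshold are invented for illustration.
from statistics import mean, stdev

normal_vibration = [0.9, 1.1, 1.0, 0.95, 1.05, 1.0, 0.98]  # baseline data
mu, sigma = mean(normal_vibration), stdev(normal_vibration)

def is_anomalous(reading: float, k: float = 3.0) -> bool:
    """Simple z-score test against the normal operating baseline."""
    return abs(reading - mu) > k * sigma

flags = [is_anomalous(r) for r in (1.02, 2.4)]  # second reading is flagged
```

Real diagnostic pipelines use richer models, but the shape is the same: learn a notion of "normal" from operating data, then score incoming readings against it.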
1.3 Various Types of Digital Twins

There are various types of digital twins, depending on the level of product magnification. These twins differ primarily in their area of application, as shown in Figure 1.1, and different types of digital twins frequently coexist within a system or process.
Figure 1.1: Types of digital twin.
– Component twins or parts twins
Component twins are the core unit of a digital twin and the most basic representation of a functioning component. Parts twins are essentially the same, except that the term refers to components of somewhat less importance.
– Asset twins
An asset is formed when two or more components work together. Asset twins allow you to investigate the interactions between these components, generating a wealth of performance data that can be analysed and turned into actionable insight.
– System or unit twins
The next level of magnification is system or unit twins, which show how diverse assets combine to form a complete, usable system. System twins offer visibility into how assets interact and may suggest performance improvements.
– Process twins
Process twins describe how several systems interact to form an entire production plant. Are all of those systems coordinated for maximum efficiency, or will delays in one system affect the others? Process twins can help determine the precise timing schemes that ultimately affect overall efficiency.
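The four levels above can be sketched as a simple composite data structure, where each level aggregates instances of the level below. This is a minimal illustration; all class names are invented, not taken from the book.

```python
# Sketch of the component -> asset -> system -> process hierarchy described
# above, modelled as a simple composite. All class names are illustrative.

class ComponentTwin:
    def __init__(self, name: str):
        self.name = name
        self.telemetry = {}  # per-component sensor readings would live here

class AssetTwin:
    """Two or more components working together."""
    def __init__(self, name: str, components: list):
        self.name = name
        self.components = components

class SystemTwin:
    """Assets interacting to form a complete, usable system."""
    def __init__(self, name: str, assets: list):
        self.name = name
        self.assets = assets

class ProcessTwin:
    """Several systems cooperating as an entire production plant."""
    def __init__(self, name: str, systems: list):
        self.name = name
        self.systems = systems

    def component_count(self) -> int:
        """Walk the hierarchy down to its most basic units."""
        return sum(len(a.components) for s in self.systems for a in s.assets)

pump = AssetTwin("pump", [ComponentTwin("motor"), ComponentTwin("impeller")])
line = SystemTwin("line-1", [pump])
plant = ProcessTwin("plant", [line])
```

The composite shape makes the "coexist in a system or process" point concrete: a process twin is not a separate artefact but an aggregation of the system, asset, and component twins beneath it.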
1.4 Pillars of Digital Twin Technology

Digital twin technology rests on three important pillars:
1. Actual physical objects exist in the real world.
2. Virtual object profiles exist in the digital realm.
3. A bridge connects the physical and digital worlds, facilitating information sharing and data interchange.
The digital twin represents a substantial advance in how things and materials are produced and how services are developed to maximise customer satisfaction, comparable to key turning points in human history such as the development of industry and agriculture [2]. The stages of digital twin technology evolution are shown in Figure 1.2 [3]:
1. Physical version: only a tangible replica of the full production process is available.
2. Digital and physical: the digital edition adds more information to the physical version.
3. Interaction: data and information are exchanged between the physical and digital versions.
4. Convergence: the two versions interact further.
Figure 1.2: Evolution of digital twin.
Figure 1.3: Getting digital twin from physical asset. (The figure shows a cycle between the physical asset and its digital twin, linked by IoT: 1. sell equipment; 2. ship equipment; 3. install equipment; 4. service equipment; 5. gather data; 6. receive data; 7. create asset and component structure; 8. record readings and update components; 9. evaluate data; 10. initiate actions.)
Depending on their size and specific needs, companies will find the digital twin valuable in different ways. A wide variety of digital twin applications can be used to model new processes that will significantly affect an organisation’s operations, or to predict outcomes such as how a new product will look or how an activity will be completed, as shown in Figure 1.3 [4].
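The gather-data, record-readings, evaluate, initiate-actions portion of the cycle in Figure 1.3 can be sketched as a small loop. The method names, the threshold, and the resulting action are invented for illustration and do not come from the book.

```python
# Sketch of the asset-to-twin feedback cycle from Figure 1.3:
# gather data, record readings into the twin, evaluate, initiate an action.
# Threshold and action names are invented for illustration.

class AssetDigitalTwin:
    def __init__(self):
        self.readings = []

    def record(self, reading: float) -> None:  # record readings, update twin
        self.readings.append(reading)

    def evaluate(self) -> str:                 # evaluate data
        avg = sum(self.readings) / len(self.readings)
        return "schedule_service" if avg > 75.0 else "ok"

def run_cycle(twin: AssetDigitalTwin, gathered: list) -> str:
    for r in gathered:                         # gather data from the asset
        twin.record(r)
    return twin.evaluate()                     # initiate action from result

action = run_cycle(AssetDigitalTwin(), [70.0, 78.0, 81.0])
```

In a deployed system the "initiate action" step would feed a maintenance or control workflow; here it simply returns a label so the loop stays self-contained.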
1.5 Advantages of Digital Twins

Digital twins are useful in various fields such as R&D, production systems, determining product life, and across markets and industries, and they also help with projects that are complex and physically large.

Better R&D
Utilising digital twins produces a wealth of data regarding expected performance results, facilitating more efficient product research and development. Before beginning production, businesses can use this data to gain insights that help them make the necessary product improvements.

Greater efficiency
Even after a new product has entered production, digital twins can help monitor and mirror production systems with the aim of achieving and maintaining peak efficiency throughout the whole manufacturing process.

Product’s end of life
Digital twins can help manufacturers decide how to manage products that have outlived their usefulness and need final processing, such as recycling. Using digital twins, they can determine which product materials can be recovered.
1.5.1 Digital Twin Market and Industries

Despite the benefits that digital twins provide, not every company or every product needs to employ them. Not all objects are intricate enough to require the continuous, intensive influx of sensor data that digital twins demand. Additionally, investing significant resources in the creation of a digital twin is not always profitable. (Keep in mind that a digital twin is a precise replica of a physical object; creating one can be costly.) However, using digital models does have certain distinct advantages for a variety of applications:
– Physically substantial projects: buildings, bridges, and other intricate structures subject to stringent engineering regulations.
– Mechanically complex projects: digital twins can contribute to increased productivity in massive engines and intricate machinery such as vehicles, jet turbines, and aircraft.
– Power equipment: power generation and transmission systems fall under this category.
– Manufacturing initiatives: digital twins are well suited to increasing process efficiency in industrial set-ups with cooperating machine systems.
As a result, industries that work on large-scale products or projects benefit the most from digital twins, such as
– engineering,
– automobile manufacturing,
– aircraft production,
– railcar design,
– building construction, and
– power utilities.

Digital twin market: poised for growth

The market for digital twins is growing quickly, which suggests that even though they are already used in many different industries, demand will continue to rise for some time. The market for digital twins was worth USD 3.1 billion in 2020, and according to certain industry observers it may continue to grow rapidly until at least 2026, reaching a projected USD 48.2 billion.

Improving manufacturing efficiency with digital twins

Owner/operators can increase productivity while reducing equipment downtime by using end-to-end digital twins.
1.6 Applications

Digital twins can be used extensively in various applications, as depicted in Figure 1.4 [5].

Power-generation equipment

The usage of digital twins is extremely advantageous for big engines, like locomotive engines, jet engines, and turbines for power generation, especially when determining whether routine maintenance is required.
Chapter 1 What Is Digital Twin? Digital Twin Concept and Architecture
Structures and their related systems

Massive physical constructions, such as tall buildings or offshore drilling platforms, can be improved with digital twins, especially during the design phase. Digital twins are additionally helpful for designing the heating, ventilation, and air-conditioning systems that run within those structures.

Manufacturing operations

Given that they are meant to mimic a product's whole life cycle, it is not surprising that digital twins have spread to all stages of manufacturing, taking goods from design to final product and all processes in between.
Figure 1.4: Applications of digital twin.
Healthcare services

Patients receiving medical treatment can be profiled using digital twins, just like products. The same type of sensor-generated data can be used to monitor various health indicators and generate crucial insights.

Automotive industry

Digital twins are widely utilised in the car industry to improve vehicle performance and production efficiency, since cars comprise a variety of intricate, interconnected systems.

Urban planning

Digital twins can provide real-time 3D and 4D spatial data for civil engineers and other participants in urban planning operations, and can embed augmented reality (AR) technologies into built environments, which is highly beneficial.
1.7 The Future of Digital Twin

Operational paradigms are currently undergoing a major transformation. Asset-intensive businesses are experiencing a disruptive digital reinvention that is redefining operating models and requires an integrated physical and digital view of resources, machinery, structures, and processes; digital twins are a key element of that readjustment. Because more and more cognitive resources are continually allocated to their use, the potential of digital twins is almost endless. Digital twins are always acquiring new knowledge and abilities, so they can continue to produce the insights required to improve products and streamline operations.
1.7.1 Digital Twins Predict the Present and the Future

A digital twin is an essential tool for engineers and operators to understand both the present and future performance of products. These predictions are made by analysing the data from the connected sensors and fusing it with data from other sources. With this knowledge, businesses can learn more quickly and dismantle pre-existing barriers to value generation, complex life cycles, and product innovation. Engineers and manufacturers can achieve a great deal with the aid of digital twins, such as:
– managing complexity and connectivity inside systems of systems;
– visualising, in real time, items that are in use by real users;
– creating a digital network that connects scattered systems and promotes traceability;
– modifying hypotheses with the help of predictive analytics;
– troubleshooting distant equipment.
1.8 Example of Digital Twin: An Engineer's Point of View

Since engineers are the main consumers of digital twins, let us use their perspective to demonstrate how they are used. It is an engineer's responsibility to develop and test things with their entire life cycle in mind, whether they are automobiles, aircraft engines, tunnels, or household items. In other words, they need to make sure that the product they are developing will serve its intended function, withstand usage and abuse, and be suited to the environment it will be used in.

– Virtually creating real-world scenarios

To understand how a brake system would function in various real-world conditions, an engineer testing the system might, for instance, run a computer simulation. The benefit of this approach over manufacturing numerous physical cars to test is that it is much faster and less expensive. However, there are still some issues. First, computer simulations like the one described above can only replicate current real-world conditions; they are unable to predict how the car will react to conceivable events or changing circumstances. Second, modern brake systems go beyond simple mechanical and electrical components: they are also made up of millions of lines of code. This is where the IoT and the digital twin come in.

– Understanding the performance of a product: the value of a digital twin

Thanks to digital twins, businesses now have an unmatched understanding of how their products function.
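The kind of what-if brake simulation described above can be sketched in a few lines. This is a deliberately minimal model: the constant-deceleration physics, the friction coefficients, and the function name are illustrative assumptions, not a real brake-system model.

```python
def stopping_distance(speed_mps: float, friction: float, g: float = 9.81) -> float:
    """Idealised stopping distance (m) for a vehicle decelerating at a
    constant rate of friction * g -- a toy stand-in for a brake model."""
    if friction <= 0:
        raise ValueError("friction coefficient must be positive")
    # From v^2 = 2 * a * d  =>  d = v^2 / (2 * mu * g)
    return speed_mps ** 2 / (2 * friction * g)

# Sweep real-world conditions virtually instead of building test cars:
for surface, mu in [("dry asphalt", 0.7), ("wet asphalt", 0.4), ("ice", 0.1)]:
    print(f"{surface}: {stopping_distance(27.8, mu):.1f} m")  # 27.8 m/s is about 100 km/h
```

Running many such parameter sweeps is exactly what makes virtual testing faster and cheaper than physical prototypes, although, as noted above, a plain simulation cannot react to live, changing conditions the way a digital twin can.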
A digital twin can be used to locate possible issues, conduct remote troubleshooting, and ultimately raise customer satisfaction. It also aids product quality, differentiation, and additional services. You can learn a great deal by seeing how people use your product after they have purchased it. Using this data, you can, if necessary, safely eliminate unnecessary features, functions, or components, saving time and money.

– Unprecedented control over visualisation

A digital twin also has other benefits. One of the most important is that it gives engineers and operators a comprehensive understanding of a physical asset that may be located far away; the twin can function even if the engineer and the asset are in different countries. Think about a mechanical engineer sitting in Seattle using a digital twin system to inspect an aircraft engine that is parked in a hangar at O'Hare. Engineers may also visualise the Channel Tunnel's complete length starting in Calais. Because of thousands
of sensors that are available in a variety of modalities, including sight, sound, vibration, and altitude, an engineer almost anywhere in the globe can "twin" a physical object. This exhibits a level of visualisation control never seen before.

– Work by IBM with digital twins

IBM has done a great deal of work with digital twin technologies, and the applications continue to expand across all industries, for example asset management using AR. IBM Maximo Lab Services "turns on" many visual and voice (natural language processing) elements for your workforce. As a result, you can instantly access important data and perceive your assets in a new dimension. With an AR helmet that has voice and video in the visor, you can then share such insights with others. The next step in the evolution of work is hence "interacting".
1.9 Digital Twin Architecture

The architecture of a digital twin system includes middleware for managing data between the hardware and software components, as shown in Figure 1.5.
Figure 1.5: Architecture of digital twin system.
Hardware components: The IoT sensors, which start the information exchange between assets and their software representation, are the primary technology powering digital twins. The hardware component also includes actuators that translate digital impulses into mechanical motion, and network devices such as routers, edge servers, and IoT gateways.
Data management middleware: This is a central repository that collects data from various sources. Ideally, the middleware platform also handles connectivity, data integration, processing, quality assurance, data visualisation, data modelling and governance, and other related duties. Common IoT platforms and industrial IoT platforms, which frequently include pre-built tools for digital twinning, are examples of such systems.

Software components: The analytics engine is a key component of digital twinning because it transforms raw observations into insightful business information; it is frequently powered by machine learning models. Dashboards for real-time monitoring, design tools for modelling, and simulation software are further essential pieces of the digital twin puzzle.

Figure 1.5 shows the overall system architecture, which has three basic components: hardware, middleware, and software. A conceptual architecture is given by the Deloitte model [6]. It is composed of five fundamental components: integration, data, analytics, a continuously updated digital twin application, and sensors and actuators from the real world [6].
– Sensors: Using process sensors, the twin obtains environmental and operational data from a genuine, physical process.
– Actuators: These assist in manually starting the physical process if necessary. Human action is required to activate the actuators.
– Data: This is all the information that the sensors send out, merged with other business information such as technical drawings, design and material specifications, and details of any customer complaints.
– Integration: This is a technique that combines different components, including edge, security, and communication interfaces, to make it possible for information to be sent back and forth between the physical and digital worlds.
– Analytics: This is the process of analysing data using algorithms and visualisation techniques, which the twin employs for a variety of functions.

A programme called a "digital twin" combines the aforementioned elements into a virtual model almost instantly. The primary goal of the digital twin is to find unacceptably large departures from the ideal conditions of any process parameter. According to Deloitte, the business must carry out two steps in order to produce a high-quality digital twin [6, 7]:
1. Design the digital twin procedures and information needs in accordance with the product lifetime, that is, from conception through actual usage and maintenance.
2. Develop a solution that enables the exchange of operational and transactional data from the company's key systems, along with sensory data, between a real product and its digital version.
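The data flow just described — sensor readings arriving through middleware and an analytics layer flagging unacceptably large departures from a parameter's ideal conditions — can be sketched as follows. The class names, fields, and tolerance values are illustrative assumptions, not part of the Deloitte model itself.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class SensorReading:
    """Hardware layer: one sample from an IoT process sensor."""
    parameter: str   # e.g. "temperature"
    value: float

@dataclass
class DigitalTwin:
    """Software layer: virtual replica plus a minimal analytics engine."""
    ideal: dict                               # parameter -> (ideal value, tolerance)
    history: dict = field(default_factory=dict)

    def ingest(self, reading: SensorReading) -> None:
        # Middleware layer: collect and aggregate incoming sensor data.
        self.history.setdefault(reading.parameter, []).append(reading.value)

    def deviations(self) -> dict:
        # Analytics: flag parameters whose average drifts outside tolerance.
        flagged = {}
        for param, (target, tol) in self.ideal.items():
            values = self.history.get(param)
            if values and abs(mean(values) - target) > tol:
                flagged[param] = mean(values)
        return flagged

twin = DigitalTwin(ideal={"temperature": (70.0, 5.0)})
for v in (71.2, 78.9, 80.3):
    twin.ingest(SensorReading("temperature", v))
print(twin.deviations())  # the mean of about 76.8 exceeds 70 +/- 5, so temperature is flagged
```

A production analytics engine would of course use richer models (often machine learning, as noted above) rather than a simple mean-versus-tolerance check, but the ingest-then-analyse shape is the same.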
Although the fundamental concepts are the same regardless of how the digital twin is configured, the conceptual architecture of the digital twin provides a more detailed explanation of the key elements shown in Figure 1.4. It is a sequence of six steps, discussed as follows [6, 7]:
1. Create: A physical process includes sensors that collect input data from the environment and the process itself. To augment the signals from the sensors, data from other business systems, such as resource planning and supply chain management systems, can be employed. In this way the digital twin receives a range of continuously updated input data for analysis.
2. Communicate: The physical process and the digital platform are integrated in both directions almost instantly. Integration uses a variety of communication channels to transfer the sensor data. Firewalls, device certificates, and encryption keys must all be used to enable secure integration.
3. Aggregate: The received data is processed and readied for analysis in data storage. The organisation can perform data gathering and processing on-premises or in the cloud.
4. Analyse: Analysts can perform analysis alone or with the help of analytical decision-making systems, which generate recommendations and forward them to experts.
5. Insight: Critical variations in the performance of the analogous physical state in one or more dimensions are visualised and highlighted, pointing to possible areas for further study and improvement.
6. Act: Optimisation concepts are implemented, the current procedure is altered, and data in the digital twin's internal systems is updated to reflect the changes made. At this point, human involvement is essential. This completes the circle of connectivity between the physical and digital worlds.
Thus, the Deloitte digital twin model comprises two dimensions, physical and virtual, centred on the physical object, process, or service on which certain actions are performed.
1.10 Digital Thread: A Link Between the Real World and the Virtual World

The digital thread is a closed loop that connects physical systems and their virtual counterparts once all the necessary parts are in place. Iterative operations are performed as shown in Figure 1.6:
1. Information is gathered from a physical object and its surroundings, then delivered to a central repository.
2. After analysis, the data is prepared for feeding into the digital twin.
Figure 1.6: Digital thread (link between real world and virtual world).
3. Using fresh data, the digital twin mirrors the object's operation in real time, tests what would happen if the environment changed, and identifies bottlenecks. Artificial intelligence algorithms can be used at this stage to make product design adjustments, identify harmful tendencies, and avert expensive downtime.
4. The dashboard visualises and presents analytics insights.
5. Stakeholders come to informed, actionable conclusions.
6. The characteristics, procedures, or maintenance schedules of the physical object are modified accordingly.
7. Based on the updated information, the process is repeated.

Through the use of digital twins, the complexity of the real world is reduced to the essential data. Because of this, the technology is welcomed by numerous industries.
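The iterative steps above amount to a closed control loop: measure, analyse, decide, act, and repeat with the updated state. A minimal sketch follows; the function name, the simple proportional adjustment rule, and the single "output" parameter are illustrative assumptions, not part of the digital thread concept itself.

```python
def digital_thread_cycle(physical_state: dict, setpoint: float, cycles: int = 3) -> dict:
    """Run a few passes of the closed loop described in steps 1-7:
    gather data, let the twin analyse it, decide on an adjustment,
    modify the physical process, then repeat with fresh information."""
    state = dict(physical_state)  # copy so the caller's dict is untouched
    for _ in range(cycles):
        reading = state["output"]          # steps 1-2: gather and prepare data
        error = setpoint - reading         # steps 3-4: twin mirrors and analyses
        adjustment = 0.5 * error           # step 5: an informed, actionable decision
        state["output"] += adjustment      # step 6: modify the physical process
        # step 7: the loop repeats using the updated information
    return state

print(digital_thread_cycle({"output": 8.0}, setpoint=10.0))  # output converges towards 10.0
```

Each pass halves the remaining error, so after three cycles the process output has moved from 8.0 to 9.75; a real digital thread replaces the toy adjustment rule with analytics and human decisions, but the loop structure is the same.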
References
[1] https://www.dataversity.net/what-is-a-digital-twin/
[2] C. Fortin, L. Rivest, A. Bernard, A. Bouras (Eds.), Product Lifecycle Management in the Digital Twin Era, in: 16th IFIP WG 5.1 International Conference, PLM 2019, Moscow, Russia, July 8–12, 2019, Revised Selected Papers (Vol. 565). Springer Nature, 2020.
[3] M. E. Auer (Ed.), Cyber-physical Systems and Digital Twins, in: Proceedings of the 16th International Conference on Remote Engineering and Virtual Instrumentation (Vol. 80). Springer, 2019.
[4] F. Tao, M. Zhang, A. Y. C. Nee, Digital Twin Driven Smart Manufacturing. Elsevier: Academic Press, 2019.
[5] Q. Qinglin, F. Tao, H. Tianliang, N. Anwer, A. Liu, Y. Wei, L. Wang, A. Nee, Enabling Technologies and Tools for Digital Twin, J. Manuf. Syst. 58, 2021, 3–21. doi: 10.1016/j.jmsy.2019.10.001.
[6] A. Mussomeli, A. Parrott, B. Umbenhauer, L. Warshaw, Digital Twins Bridging the Physical and Digital, Deloitte Insights, 2020.
[7] A. Mussomeli, M. Cotteleer, A. Parrott, Industry 4.0 and the Digital Twin: Manufacturing Meets its Match, 2017.
Chapter 2 Benefits of Digital Twin Modelling

Abstract: This chapter gives an idea of the significance, challenges, and benefits of the digital twin, and also explains how digital twins benefit manufacturing in the Industry 4.0 era.

Industrial revolutions change how products are made and how customers engage with the production process, allowing for more sophisticated demand for various customer goods [1]. Before the start of the First Industrial Revolution, goods were handcrafted, built to order, and took the artisan's time to create, meeting the unique needs of each customer. Since then, manufacturing processes have been mechanised (First Industrial Revolution), scaled into enormous operations (Second Industrial Revolution), and automated (Third Industrial Revolution), standardising the products being traded in exchange for a rise in the productivity of the production line [2]. The idea of the Fourth Industrial Revolution, also known as Industry 4.0, is to combine individualisation with the benefits of mass production. As a result, virtual product models, often known as digital twins (DTs), are used to accelerate time to market and generate additional benefits throughout the whole life cycle [3]. Recently, the blending of virtual and physical spaces has drawn a lot of attention [4]. A DT is a particularly accurate simulation of a process's present state and of its behaviour in relation to its surroundings [5]. It is utilised both for representation and for behaviour prediction [3]. Additionally, real-time optimisation of products and industrial processes is conceivable due to the DT's capacity to connect big data to rapid simulation [6].
2.1 Industry 4.0

Industry 4.0 is the next phase of the industrial revolution and is backed by mechanisation and the proliferation of computational capacity [7]. As the new production era, the key goal of Industry 4.0 is the optimisation of formerly manual, compartmentalised processes, achieved through their automation as shown in Figure 2.1. This new manufacturing period is similar to earlier ones in that its new technologies are built on earlier developments [8–10]. The Internet of things (IoT), augmented reality (AR), big data, and other cutting-edge technologies are the foundation of Industry 4.0. All of these technologies give a business a sizeable edge over competitors, improve customer comprehension and communication, and make life easier in a variety of ways [8, 11, 12]. The elements of Industry 4.0 (Figure 2.2) are discussed below [13]:
https://doi.org/10.1515/9783110778861-002
Figure 2.1: Growth of industrial revolution.
– Autonomous robots: Industry 4.0 goes beyond the traditional concept of machine-to-machine communication. In the factory's digital ecosystem, facility components that are not traditionally thought of as "machines" can be linked up and treated as such. This is a huge opportunity for manufacturers, who should carefully consider the prospects for automation and investigate the supporting technologies required to integrate automation into a well-tuned operation. The autonomous robot is one of the capabilities provided by recent technology that can help manufacturers automate industry.
– Augmented reality: By overlaying information such as pictures, text, and sound on the physical world as we perceive it, AR technology gives the real world an additional dimension. In this approach, a real-world setting becomes dynamic and can benefit from the addition of computer-generated imagery. By monitoring manufacturing and checking for design flaws at an early stage of the process, AR can enhance product design and development and, of course, prevent errors. This speeds up production and process development, greatly minimises the need for physical prototypes, and saves the company both time and money.
– Simulation: Simulating the control of a process is the primary function of this technology. Simulators give users a sense of reality and the chance to experience real life in an environment that does not exist, is for some reason inaccessible, or
poses a serious threat to their lives. This technology is also used to replicate natural and human situations for scientific research.
– Big data: There are vast amounts of diverse data, whether structured or unstructured. Big data possesses the three "V" factors of volume, velocity, and variety. Various production and other business processes are optimised with big data processing and analysis.
– Cyber security: As more businesses implement Industry 4.0, attackers find the manufacturing industry to be an attractive target because they can move laterally through a manufacturing network, switching between OT and IT systems for their malicious operations. Without adequate safeguards, malicious actors may use systems for production sabotage, intellectual property theft, industrial espionage, and IP leakage [14].
– System integration: This integrates various subsystems into a single, cohesive, large system. The overall efficiency of the whole system improves while the functionality of each subsystem stays constant. These integrated systems may be used, for instance, to provide customers with a fundamentally new service or to automate repetitive tasks and lower human error.
Figure 2.2: Components of Industry 4.0.
2.2 Industry 4.0's Digital Twin

One of the fastest-growing Industry 4.0 concepts is DT technology. A DT, to put it simply, is a virtual representation of an actual object that is used in a simulation setting to evaluate its usability and efficacy. "Digital twin" technology enables the creation of a virtual replica of any process or object in the real world. Its goal is to evaluate, identify, and fix flaws in a service or product in order to raise its quality. By merging the digital and physical worlds, the DT supports business. Although DT technology is widely used, it is especially important for product makers. A DT is a representation of a real-world product, procedure, or service in digital form. A physical object's present or potential behaviour can be predicted and simulated with the aid of a DT, allowing for the object's optimisation and a rise in company productivity. The DT uses sensors to collect data in real time about a physical object as a way to connect the digital and physical worlds. This data is used to generate a digital copy of a component or the complete object, in order to better understand its operating principle, analyse and regulate its behaviour, and enhance its performance [13]. With the help of a DT, a business can experiment and make plans for upcoming goods or services, spending less while delivering a better customer experience.

The first historical DT pre-image was that of Apollo 13, produced in the 1960s and used to recreate the spacecraft's conditions both before and after launch. One of the spacecraft's oxygen tanks exploded during the mission, which required the rescue operation to deploy the DT pre-image technology. Thanks to the virtual Apollo 13 model, the test engineers were able to address problems from a distance of 2,000 miles, which was crucial to the crew's rescue.
However, it was not until 2002, during a talk by Michael Grieves at Michigan Technological University, that the concept of the DT really took off. The presentation discussed the development of the product life cycle management centre, which contained the virtual space, the physical space, and the information flow between them, a structure typical of a DT. Following that, DT joined the league of important strategic technology opportunities for business in the early 2000s [13].
2.3 Advantages of a Digital Twin

Several industries, including manufacturing, smart cities, healthcare, and retail, have begun to use the DT concept because of its wide range of features. DT has also been studied in the context of oil and gas, shipping, agriculture, and construction. The use of DT in the manufacturing industry affects how goods are developed, produced, and maintained. The DT has a wide range of capabilities, including the ability to evaluate production outcomes, to get information about the performance
of products, remotely control and modify machines, handle equipment issues, and link systems and procedures for better oversight and management. A few advantages are discussed below.

1. Accelerated risk analysis and manufacturing time
Businesses can test and validate a product before it ever exists in the real world with the aid of a DT. By simulating the planned production process, a DT helps engineers find process flaws before the product is put into production. Engineers can perturb the system to introduce unexpected events, analyse how the system responds, and create mitigation plans in response. With this capability, risk evaluation is improved, new products are brought to market faster, and the production line's reliability is raised.

2. Predictive maintenance
Because IoT sensors in a DT system produce massive amounts of data in real time, businesses can study their data and proactively identify any systemic concerns. This enables predictive maintenance: businesses can plan maintenance more precisely, decreasing maintenance costs and increasing the effectiveness of production lines.

3. Real-time remote monitoring
It is often very challenging, if not impossible, to obtain a complete, detailed picture of a huge physical system in real time. A DT, on the other hand, may be accessed from anywhere, giving users the ability to remotely monitor and manage the system's performance.

4. More effective teamwork
Thanks to process automation and constant access to system data, technicians can concentrate more on inter-team collaboration, which boosts output and operational effectiveness.

5. Improved financial decision-making
Financial information, such as the cost of materials and labour, can be included in the virtual representation of a physical object.
Because of the availability of a sizeable amount of real-time data and powerful analytics, businesses can determine more rapidly and effectively whether improvements to a manufacturing value chain are commercially viable.

6. Testing a new system before manufacture
Before spending money on creating or implementing new systems, equipment, or service models, businesses can employ DTs to design and test them. If a model is successful, its digital duplicate may be connected to the actual product for real-time monitoring.
7. Managing assets in real time
Utilising DTs to optimise manufacturing and monitor everyday operations lowers unnecessary machine wear and tear and notifies business managers of potential cost-saving changes, such as adjusting fuel usage. Faster maintenance and repair increases overall output and enables businesses to keep a competitive edge.

8. Understanding data to provide better service
Remote troubleshooting is one of the customer-facing applications of DTs. Instead of depending solely on standard protocols, personnel can use virtual models to perform diagnostic testing from any location and guide customers through the necessary repair procedures. The information collected in these sessions yields useful insights for further product planning and development.

According to Gartner [15], about 13% of firms executing IoT initiatives already employ DTs, while 62% are either in the process of doing so or plan to. According to the most recent analysis from Markets and Markets [16], the market for DTs is forecast to increase from $3.8 billion in 2019 to $35.8 billion by 2025, at a CAGR of 37.8%.
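The predictive-maintenance advantage described above can be illustrated with a simple trend extrapolation over a sensor's wear history. The wear metric, the threshold, and the linear-trend model are all illustrative assumptions; industrial predictive maintenance uses far richer models, but the idea of projecting sensor data forward to schedule service is the same.

```python
def cycles_until_threshold(wear_history, threshold):
    """Fit a least-squares linear trend to wear readings (one per cycle)
    and estimate how many more cycles remain before wear crosses the
    maintenance threshold. Returns None if no trend can be estimated."""
    n = len(wear_history)
    if n < 2:
        return None
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(wear_history) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, wear_history)) / \
            sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None  # no degradation trend detected
    return (threshold - wear_history[-1]) / slope

# Wear grows roughly 0.5 units per cycle; maintenance is due at 10.0:
print(cycles_until_threshold([7.0, 7.5, 8.0, 8.5], threshold=10.0))  # -> 3.0
```

Instead of servicing on a fixed calendar, the twin's forecast lets the business schedule maintenance just before the predicted failure point, which is exactly the cost-and-downtime benefit claimed in point 2.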
2.4 Things to Think About Before Using Digital Twins

1. Protocol updates for data security
Huge amounts of data are gathered from many different endpoints, and each one could present a security risk. Therefore, firms should review and update current security procedures before deploying DT technology. Security is most crucial in the following areas:
– data encryption;
– access privileges, defined through user roles;
– resolving known hardware flaws;
– routine security audits.

2. Managing data quality
The data that powers DT models comes from tens of thousands of remote sensors communicating over unsteady networks. Businesses that want to deploy DT technology must be able to manage data streams with gaps and reject bad data.

3. Training team members
As new technological capabilities are established, users of DT technology must adapt to new working practices, which can cause issues. Businesses must ensure that their personnel have the knowledge and resources necessary to work with DT models [17].
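Point 2 above — coping with gaps and rejecting bad data before it reaches the twin — could look like the following filter. The validity rules, the plausibility bounds, and the sample stream are illustrative assumptions; real middleware applies much more elaborate validation.

```python
def clean_stream(raw, lo, hi):
    """Drop gaps (None), non-numeric noise, and physically implausible
    readings before they are fed into the digital twin model."""
    cleaned = []
    for sample in raw:
        if sample is None:                        # gap in the stream
            continue
        if not isinstance(sample, (int, float)):  # corrupted payload
            continue
        if not lo <= sample <= hi:                # outside plausible range
            continue
        cleaned.append(float(sample))
    return cleaned

# A temperature stream from an unsteady network, bounded to a plausible -40..85 C:
raw = [21.4, None, "ERR", 22.0, -999.0, 21.8]
print(clean_stream(raw, lo=-40.0, hi=85.0))  # -> [21.4, 22.0, 21.8]
```

Silently dropping samples is only one policy; depending on the model, a business might instead interpolate across gaps or flag the sensor for inspection, but some explicit rule for bad data is needed either way.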
2.5 Challenges to Implement Digital Twin

It is increasingly clear that DT technology coexists with artificial intelligence (AI) and IoT technology, creating common problems. Identifying the difficulties is the first step towards solving them. Both the IoT and data analytics have problems [18].
2.5.1 Data Analytics Challenges

Some of the challenges in data analytics are discussed as follows:

1. IT infrastructure
The first important barrier is the overall IT infrastructure. To keep up with AI's explosive growth, high-performance infrastructure in the form of modern hardware and software is required to support the execution of the algorithms. The current infrastructure problem derives from the expense of setting up and maintaining these systems. For instance, a high-performance GPU that can run machine and deep learning algorithms can cost anywhere between $1,000 and $10,000, and such systems require updated hardware and software in order to function properly. The use of GPUs "as a service", which offers on-demand GPUs through the cloud for a fee, may be a solution to this problem. Businesses such as Amazon, Google, Microsoft, and NVIDIA, to name a few, have broken the demand barrier by providing distinctive on-demand services comparable to conventional cloud-based programmes. However, data analytics is still difficult due to poor infrastructure and high costs, and making sure that the cloud infrastructure provides strong security remains a challenge when using it for data analytics and DTs.

2. Data quality
From a data perspective, it is critical to ensure the data is of high quality. The data needs to be sorted and cleaned so that the AI algorithms are given the best-quality data possible.

3. Security and privacy
Anyone involved in the computing business should be concerned about privacy and security, and data analytics is no exception. Because AI is still relatively new, laws and regulations have not yet been fully developed. As the technology develops, oversight, legislation, and controls related to AI will increase, and future legislation will guarantee the creation of algorithms that safeguard user data.
Although this is a matter of general data and security regulation, it highlights the issues with handling data when developing AI algorithms. Regulating the industry and federated learning, a decentralised model-training framework, are two strategies to guarantee the security of personal data. Federated learning allays privacy and security worries when
24
Chapter 2 Benefits of Digital Twin Modelling
employing data analytics within a DT by allowing user data in a learning model to remain localised without any data sharing. 4. Expectation about data analytics The belief that data analytics can be used to address all of our issues is the final obstacle to overcome. AI use requires careful study, and taking the effort to identify the right application makes sure that regular models could not achieve the same outcomes. Like other emerging technologies, they have the potential to boost things like manufacturing and the creation of smart cities. The high expectations are due to the fact that potential customers only perceive the advantages and think they will immediately save time and money. When using data analytics, it is important to keep in mind that the subject is still in its infancy. It is clear from the sheer number of scenarios that employ “AI” in places where it is unnecessary, as opposed to other circumstances where it is appropriate. To enable people to acquire the appropriate background knowledge of the field and discover how it might be used, a greater exposure to and comprehension of AI is required.
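Federated learning, mentioned above as a way to keep user data localised, can be illustrated with a minimal federated-averaging sketch. Everything here (the linear model, the client data, the learning rate) is invented for illustration; a real deployment would use a framework such as TensorFlow Federated or Flower.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's gradient steps on its private data (linear model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """FedAvg round: each client trains locally; only weights are shared."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                      # three clients; raw data is never pooled
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(30):                     # communication rounds
    w = federated_average(w, clients)
print(np.round(w, 2))                   # converges towards [ 2. -1.]
```

Only model weights travel between clients and the server, which is what addresses the privacy concern: the server never sees any client's raw sensor or user data.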
2.5.2 IoT/IIoT Challenges

1. Data, security, and privacy
With the enormous rise of IoT devices in both the home and the workplace, the issue of gathering significant amounts of data has grown. The challenging task is managing the flow of data while ensuring its organisation and effective use, and the growth of big data makes this harder still: the deployment of IoT increases the volume of unstructured data. IoT data must be sorted and organised, because doing so increases its usefulness and value; otherwise the data will be lost, or extracting value from the huge volumes gathered will cost too much. A bigger concern is that the data may be sensitive and valuable to a criminal, and the risk is significantly increased where a company manages sensitive customer data. Criminals target systems and take them offline in order to ruin an organisation's infrastructure, so cyberattacks present significant difficulties. Cybercriminals may target organisations with thousands of connected IoT devices in order to seize control of them and use them for their own purposes.

2. Infrastructure
Because IoT technology has advanced quickly compared with the systems currently in use, the IT infrastructure is lagging behind. IoT growth is facilitated by modernising outdated infrastructure and integrating new technology. A modernised IoT infrastructure offers the chance to take advantage of cutting-edge technology and to use cloud-based apps and services without spending heavily on upgrading current systems and equipment. Adding outdated machines to the IoT ecosystem presents another problem for IoT systems. Retrofitting IoT sensors to legacy machines is one approach to combat this, ensuring that data is not lost and enabling analytics on ageing equipment.

3. Connectivity
Despite the expansion of IoT usage, connectivity issues remain a problem, and they are especially common when attempting to achieve real-time monitoring. The simultaneous connection of a large number of sensors in a manufacturing process is a considerable problem, and the general goal of connectedness is undermined by power interruptions, software faults, and persistent deployment failures. One sensor's incomplete connection can have a significant impact on the overall objective of the process. IoT devices, for instance, are among the data sources used by AI algorithms; this is a significant challenge because the algorithms need all the data to work well, and missing IoT data can be detrimental to the system's capacity to function. Retrofitting machinery and collecting the data a device has previously produced are two ways to ensure that all the data are gathered. Imputation techniques can find substitute values for missing IoT sensor readings, ensuring a complete dataset and making it simpler for AI models to run with high accuracy and little or no missing data.
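The imputation step described above can be as simple as interpolating over gaps in a sensor stream. A sketch using pandas, with an invented temperature series standing in for real telemetry:

```python
import numpy as np
import pandas as pd

# Simulated temperature stream with dropped readings (NaN = lost packets)
readings = pd.Series(
    [21.0, 21.4, np.nan, np.nan, 22.6, 22.8, np.nan, 23.2],
    index=pd.date_range("2023-01-01 08:00", periods=8, freq="min"),
)

# Time-weighted linear interpolation fills interior gaps from neighbours
filled = readings.interpolate(method="time")

print(filled.isna().sum())        # 0 - every gap now has an estimated value
print(round(filled.iloc[2], 2))   # 21.8 - one third of the way from 21.4 to 22.6
```

More sophisticated schemes (forward fill, spline, or model-based imputation) follow the same pattern: estimate the missing reading from its neighbours so that downstream AI models see a complete stream.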
2.5.3 Digital Twin Challenges

1. IT infrastructure
As with analytics and IoT, the first issue relates to the present IT infrastructure. The DT needs the infrastructure required for IoT and data analytics to succeed, since these underpin the effective functioning of a DT. Without a connected and well-planned IT infrastructure, the DT cannot successfully achieve its stated goals.

2. Useful data
The next difficulty is obtaining the data needed for a DT. A DT needs high-quality, noise-free data that is streamed continuously. If the data is incomplete or wrong, the DT may perform poorly, because it will be acting on this information. IoT signal strength and volume are crucial for DT data, so planning and analysis of device use are required to ensure that the right data is recorded and used for a DT's optimal operation.

3. Security and privacy
The privacy and security concerns with DTs are undoubtedly a barrier in an industrial setting. Because of the massive amount of data they consume, DTs present a risk to critical system data. To overcome this problem, data analytics and IoT, the two essential enabling technologies for DTs, must adhere to the most recent security and privacy rules. Taking security and privacy into account also addresses, in part, the trust difficulties associated with DTs.
Chapter 3 Modelling of Digital Twin

Abstract: This chapter gives a brief overview of an end-to-end digital twin conceptual model that represents its complementary physical object from the ground to the cloud. It explains the digital twin model's layers, namely the physical, communication, virtual space, data analytics and visualisation, and application layers, as well as the overlapping security layer. A detailed overview of the hardware and software technologies used in building a digital twin model is given. A use case is presented to show how the layers collect, exchange, and process the physical object's data from the ground to the cloud.
3.1 Introduction

Numerous studies are currently being conducted using real-time data as a result of recent advancements in information and communication technology. Extensive physical and virtual modelling is necessary to produce products that function right away, but doing so while physical prototypes are being developed leads to two related issues: the lack of revision control for physical prototypes, and the need for designers to manually inspect, measure, and interpret changes to either the virtual or the physical model in order to update the other. These problems can be overcome with digital twin (DT) technology, an innovation of the Fourth Industrial Revolution, by using real-time data, executing virtual space algorithms in various ways, and evaluating them in real time.

A DT is always created with a particular purpose in mind, for a specific physical object, and it is one of the most crucial tools for digitalising industry to meet demanding needs. Any product will be more economically efficient over its lifetime if DT technology is incorporated into the production process. A prerequisite for the effective application of DT technology is the development of a suitable virtual representation of the area surrounding the physical object. DTs can be created for a variety of purposes: to test a prototype or design, to define and track life cycles, or to assess how a process or product will perform in various scenarios. To develop a DT, computational models and data are acquired; this may necessitate a real-time interface between the computer model and the actual physical object for data transmission and feedback. Throughout industry and beyond, DTs are being used for manufacturing, maintenance, and failure prevention/life cycle monitoring.
Applications include those in the manufacturing industry, where DTs simulate processes and suggest improvements; those in the automotive sector, where telemetry sensors transmit feedback from moving vehicles to a DT programme; and those in the healthcare industry, where sensors can help a DT monitor and predict a patient's well-being. Twin Thread, IBM Watson, Microsoft Azure, Oracle Cloud, SIEMENS PLM, PTC ThingWorx, Aveva, DNV-GL, Dassault 3DExperience, Sight Machine, and GE Predix are a few businesses that work in the DT industry.

https://doi.org/10.1515/9783110778861-003
3.2 Design and Implementation of Digital Twin

Numerous academics and researchers have examined the idea of a DT; however, the results can differ depending on the context (aerospace, manufacturing, or city management). Each context's DTs are distinct across the three phases of the product's life cycle: design, manufacture, and servicing. Each DT application thus differs according to its own perspective and set of requirements. The DT is widely employed in complex and uncertain workplaces, where the working environment is subject to both internal and external factors. Throughout the full life cycle of the real system, the DT should evolve simultaneously; it should be capable of altering its default settings and adjusting to a new configuration. A DT in the virtual world replicates and reflects the actual system, and it can be proactive, helping to comprehend what has to be done and to respond to changes in the actual world. By combining data and behaviour models, the virtual system allows the "digital twin" of the physical system to be created along the entire value chain. The primary elements that determine the concept of a DT are its ability to simulate throughout the life cycle of a product, its synchronisation with physical assets, its integration with real-time data, its behavioural modelling of the physical environment, and the services it offers [2]. In light of this, the definition of a DT can be summed up as follows: "A collection of adaptive models that duplicate the behaviour of a physical system in a virtual system that accepts real-time input to update itself throughout its life cycle. The digital twin replicates the physical system so that errors and opportunities for improvement can be anticipated. Then, while keeping an eye on and evaluating the running profile system, it suggests real-time actions for optimising and/or reducing unexpected situations" [2].
3.2.1 Design of Digital Twin

A DT design is created through data collection and computational model development. This may involve a real-time interface for sending and receiving data and feedback between the digital model and the actual physical object.
3.2.2 Data

A DT requires knowledge of the relevant thing or process in order to construct a virtual model that can replicate the behaviours or states of the real-world object or procedure. This information, which could relate to a product's life cycle, may contain engineering details, production procedures, or design standards. Production information may also comprise equipment, supplies, parts, procedures, and quality control. Operations-relevant data can include real-time feedback, historical analysis, and maintenance logs. Business information and end-of-life procedures are examples of additional data that can be used in DT design.

A DT can combine data from many sources. The virtual model is updated with information obtained from the real world, and the physical twin uses knowledge gained from the data to enhance its performance while operating in real time. Regardless of the physical space being represented, it is necessary to specify which aspects of that space should be carried over to the virtual world. To bridge the gap between the virtual and physical worlds, data and information must also cover all characteristics of the physical environment, such as structure, semantics, and behaviour.
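Combining data from many sources, as described above, often amounts to aligning telemetry with other records such as maintenance logs. A sketch with invented records for a single asset; the column names are illustrative:

```python
import pandas as pd

# Hypothetical telemetry and maintenance-log sources for one asset
telemetry = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-05-01", "2023-05-02", "2023-05-03"]),
    "vibration_mm_s": [2.1, 2.4, 3.9],
})
maintenance = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-05-02"]),
    "action": ["bearing lubricated"],
})

# merge_asof attaches the most recent maintenance action to each reading,
# giving the twin a single time-aligned view of the asset
twin_view = pd.merge_asof(telemetry, maintenance, on="timestamp")
print(twin_view)
```

The first reading predates any maintenance record, so its `action` column is empty; the later readings carry the most recent action, letting analytics relate vibration trends to servicing history.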
3.2.3 Modelling

After the data have been gathered, computational analytical models can be built to identify behaviours, anticipate states such as fatigue, and explain how operations are affected. These models can recommend courses of action based on engineering simulations, physics, chemistry, statistics, machine learning, artificial intelligence, business logic, or objectives. To help people understand the outcomes of these models, 3D visualisations and augmented-reality modelling can be used.
3.2.4 Linking

DT results can be merged to create an overview: for instance, equipment-twin results can be merged into a production-line twin, which can then direct a factory-scale DT. Linked DTs make possible smart industrial applications for practical operational adjustments and improvements.
3.2.5 Digital Twin Architecture

A DT architecture comprises three main layers: the physical layer, the network layer, and the computing layer. The physical entities that make up the physical layer are identified according to the stage of the product's life cycle. The network layer connects the physical domain to the virtual one and exchanges data and knowledge. The computing layer contains virtual entities that simulate their corresponding physical things, such as data-driven models and analytics, physics-based models, services, and users. Each layer is characterised by a few DT elements (such as models, information structures, or hardware and software technologies) that share similarities in their range of application and interactions while also providing supporting functions:

– Computing layer: data; modelling features; models such as machine learning, data mining, pattern evaluation, and knowledge representation models
– Network layer: application programming interfaces, wireless communication, middleware, and conversion of communication protocols and interfaces
– Physical layer: sensors, actuators, embedded communication, RFID, RFID sensor networks, and wireless sensor networks (WSNs)
3.2.5.1 The Physical Layer

This layer encompasses a set of sensory components and subsystems that gather information and control operating conditions. Its components include sensors, actuators, and embedded communication for gathering operational conditions and real-time states (such as vibration, force, torque, and speed) from the surrounding real-world environment. RFID, using radio waves, tags, and readers, enables automatic identification and data gathering; RFID sensor networks, which comprise a sizeable number of nodes, track and record physical environmental conditions. Wireless sensor networks (WSNs) comprise geographically dispersed autonomous sensor-equipped devices that monitor physical or environmental variables.

3.2.5.2 The Network Layer

The connections and exchanges between real-world and virtual objects take place at the network layer. All components are linked via this layer, which enables them to exchange information and data with one another.
Communication protocol: The DT's communication protocol enables information to be transmitted between two or more entities. The most popular protocols for real-time data access and transmission in DT applications are MTConnect and OPC Unified Architecture (OPC UA). The communication protocol/interface conversion combines different communication protocols and interfaces into a unified form. DTs use AutomationML to model DT-related attributes, with the goal of connecting the various digital production tool chains. The DT and other systems share data using a framework for communication and data sharing.

Middleware: Middleware is a software layer that sits between the technology and application layers. The service-oriented architecture (SOA) approach is the middleware architecture used increasingly frequently in DTs. By putting SOA concepts into practice, it is feasible to decompose complicated, monolithic systems into applications made up of an ecosystem of simpler, well-defined components.

Wireless communication: Wireless communication can establish wireless connections between DT units, enhancing the flexibility of data transmission.

Application programming interfaces (APIs): APIs facilitate interoperability across multiple software models and systems in the virtual environment that constitutes the computing layer.

3.2.5.3 The Computing Layer

The computing layer is made up of digital replicas of the corresponding physical entities and provides the decision assistance that allows DTs to function computationally. It can be conceptualised as a series of interrelated "layers": data, models, and modelling characteristics.

Data layer: The data layer holds many different sorts of data. Data preparation includes the selection, cleaning, modelling, integration, and transformation of the data.
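The protocol/interface conversion performed by the network layer can be pictured as a small adapter registry in the SOA style. The payload formats below are invented stand-ins (not real MTConnect or OPC UA messages); the point is only that each protocol gets its own adapter behind one uniform entry point:

```python
import json
import re

# Hypothetical raw payloads from two different device protocols
xml_like = "<sample name='temp' value='72.5'/>"                  # XML-style
json_like = json.dumps({"nodeId": "ns=2;s=Temp", "value": 72.5})  # JSON-style

def from_json(payload):
    """Adapter for the JSON-style protocol."""
    data = json.loads(payload)
    return {"signal": data["nodeId"], "value": float(data["value"])}

def from_xml(payload):
    """Adapter for the XML-style protocol (a real one would use an XML parser)."""
    name = re.search(r"name='([^']+)'", payload).group(1)
    value = re.search(r"value='([^']+)'", payload).group(1)
    return {"signal": name, "value": float(value)}

# SOA-style registry: one well-defined component per protocol
ADAPTERS = {"jsonish": from_json, "xmlish": from_xml}

def normalise(protocol, payload):
    """Middleware entry point: route the payload to the right adapter."""
    return ADAPTERS[protocol](payload)

print(normalise("jsonish", json_like))   # {'signal': 'ns=2;s=Temp', 'value': 72.5}
print(normalise("xmlish", xml_like))     # {'signal': 'temp', 'value': 72.5}
```

Adding support for a new device protocol then means registering one more adapter, without touching the rest of the system, which is the decomposition benefit SOA middleware is meant to provide.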
Data-driven models: Large volumes of data can be analysed using data-driven models. Examples of data-driven models used in the creation of DTs include machine learning, data mining, pattern recognition, and knowledge representation. Machine learning is the method of teaching computers to learn without being explicitly programmed; it is divided into three categories: supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, models are developed from input and output data; in a DT, supervised learning is used to predict system failures or the physical twin's remaining useful life. Unsupervised learning creates an internal representation purely from input data and does not require any supervision. By using clustering algorithms, it makes it possible to find related groupings of data; these methods are applied to DTs to build autonomous clusters for various working regimes so as to analyse machine conditions.

Modelling is the act of converting a real object into digital representations that can be handled, processed, and managed by computers. Modelling is undoubtedly the foundation of DT, offering a means of representing information for purposes such as product design, analysis, CNC machining, quality control, and production management. DT modelling comprises geometric modelling, physical modelling, behavioural modelling, and rule modelling.

A geometric model describes a physical thing in terms of its geometric shape, embodiment, and appearance, using data formats suitable for computer information translation and processing. The geometric model contains topological and geometric information (such as points, lines, surfaces, and bodies) and element relations (such as intersection, adjacency, tangency, verticality, and parallelism). Wireframe, surface, and solid modelling are all examples of geometric modelling. Wireframe modelling creates a stereoscopic frame from fundamental lines by defining the target's ridgelines. Surface modelling describes each surface of an entity individually and then joins all surfaces together to create a holistic model. Solid modelling describes the internal structure of a three-dimensional item using information such as vertices, edges, surfaces, and bodies. To increase the appearance of realism, producers also use bitmaps that represent the entity's surface features to create appearance texture effects (such as wear, cracks, fingerprints, and stains). Light maps and texture blending, whether transparent or not, are the two basic texture approaches. Geometric models describe only the geometrical aspects; they do not define the properties and restrictions of an object.

The physical model adds further details: accuracy data (such as dimensional tolerance, shape tolerance, position tolerance, and surface roughness), material data (such as material type, performance, heat-treatment needs, and hardness), and assembly data (e.g. mating relationships and assembly order). Feature modelling comprises three components: feature-based design, automatic feature detection, and interactive feature definition.

A behavioural model defines the activities that a physical entity can perform to carry out its responsibilities, respond to changes, interact with others, modify internal processes, maintain its health, and so on. Simulating physical behaviour is a difficult process that involves many different models, such as issue models, state models, dynamics models, and assessment models. Finite state machines, Markov chains, and ontology-based modelling methods are a few of the strategies that can be used to develop these models. State modelling includes both state diagrams and activity diagrams: the former describes an object's dynamic behaviours over the duration of its lifespan, whereas the latter identifies the steps required to carry out an operation. Dynamics modelling covers rigid-body motion, elastic-system motion, fast-rotating-body motion, and fluid motion.

A rule model describes rules that are based on historical information, specialised expertise, and predefined logic. The rules enable the virtual model to engage in critical reasoning, decision-making, scenario analysis, and prediction. Rule modelling includes the procedures of rule extraction, rule description, rule association, and rule evolution. Rule extraction uses decision trees and rough set theory along with connectionist (e.g. neural network) and symbolic techniques. Rules are described using a variety of techniques, including logic notation, production representation, frame representation, object-oriented representation, semantic web representation, XML-based representation, and ontology representation. Rule association techniques include category association, diagnostic/inferential association, cluster association, behaviour association, and attribute association, among others. Rule evolution can take two forms: periodic evolution, the practice of periodically examining the effectiveness of current rules over a particular period of time (which varies depending on the application), and application evolution, the process of updating and changing the rules based on feedback from the application process. The main recommended modelling techniques are solid modelling plus texture technologies (to enhance realism) for the geometric model, finite element analysis for the physical model, finite state machines for the behavioural model, and XML-based representation and ontology representation for the rule model.
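The finite state machines recommended above for behavioural modelling reduce to a transition table plus a step function. The states and events below are invented for illustration, not taken from any particular machine:

```python
# Illustrative machine-behaviour FSM: states and events are hypothetical
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "overheat"): "fault",
    ("running", "stop"): "idle",
    ("fault", "reset"): "idle",
}

def step(state, event):
    """Return the next state; undefined transitions leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Replay a stream of events against the behavioural model
state = "idle"
for event in ["start", "overheat", "reset", "start", "stop"]:
    state = step(state, event)
print(state)   # idle
```

In a DT, the event stream would come from real sensor readings, and the twin's current state ("fault", say) becomes an immediately interpretable summary of the physical asset's behaviour.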
3.3 Hardware/Software Requirement

The hardware and software components of a DT system are managed by a middleware layer for data management.
3.3.1 Hardware Components

Internet of things (IoT) sensors, which initiate the information exchange between assets and their software representation, are the primary technology powering DTs. The hardware component also includes actuators, which translate digital impulses into mechanical motions, and network devices such as routers, edge servers, and IoT gateways.
3.3.2 Data Management Middleware

The fundamental component of the middleware is a central repository that collects data from many sources. Ideally, the middleware platform also handles connectivity, data integration, processing, quality assurance, data visualisation, data modelling and governance, and other related duties. Common IoT platforms and industrial IoT platforms, which frequently include pre-built tools for digital twinning, are examples of such systems.
3.3.3 Software Components

The analytic engine is a key component of digital twinning because it transforms simple observations into insightful business information, and it is frequently powered by machine learning models. Dashboards for real-time monitoring, design tools for modelling, and simulation software are further essential pieces of the DT puzzle.
3.3.4 Digital Thread

A digital thread serves as a link between the physical and virtual worlds. With all of the necessary components in hand, you can connect physical systems and their virtual representations into a closed loop known as a digital thread. The following iterative operations are carried out within it:
1. Data is collected from a physical object and its surroundings and transmitted to a centralised repository.
2. Data is analysed and prepared for transmission to the DT.
3. The DT uses real-time data to mirror the object's operation, test what will happen if the environment changes, and identify bottlenecks. At this point, AI algorithms can be used to improve the product design or detect unhealthy trends in order to avoid costly downtime.
4. Analytics insights are visualised and presented via the dashboard.
5. Stakeholders make data-driven, actionable decisions.
6. Physical object parameters, processes, or maintenance schedules are modified as needed. The procedure is then repeated with the new data.
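The iterative steps of the digital thread can be compressed into a toy closed loop. The sensor, the rolling-mean analysis, and the 85-degree threshold are all invented for illustration:

```python
import random

random.seed(42)   # deterministic demo run

def read_sensor():
    """Step 1: collect data from the physical object (simulated here)."""
    return {"temperature": random.uniform(60, 100)}

def analyse(sample, history):
    """Steps 2-3: the twin mirrors the object and tracks trends."""
    history.append(sample["temperature"])
    return sum(history[-3:]) / len(history[-3:])   # rolling mean of last 3

def decide(rolling_mean, threshold=85.0):
    """Steps 4-6: insight drives an action back to the physical side."""
    return "throttle" if rolling_mean > threshold else "continue"

history, actions = [], []
for _ in range(10):                # step 6: the loop repeats with new data
    sample = read_sensor()
    insight = analyse(sample, history)
    actions.append(decide(insight))
print(actions)
```

Real deployments differ only in scale: the repository is a database rather than a list, the analysis is a trained model rather than a rolling mean, and the decision is surfaced on a dashboard before it is fed back to the asset.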
3.4 Use Case Study

With reference to the case study discussed by Al-Ali et al. [1], this section demonstrates how a DT model can be used in smart transportation. A high-level structure for sensing, communication, and control between the physical and virtual environments of a vehicle is shown in Figure 3.1. As shown there, cars carry numerous physical sensors, including cameras, radar, gyroscopes, GPS, tyre pressure sensors, speed sensors, and sound sensors. These sensors gather real-time parametric values from the cars and transmit them to the virtual space layer, which in turn communicates effective control decisions for steering control, motion planning, vehicle speed, and the ABS.
Figure 3.1: A high-level framework for real-time and virtual vehicle sensing, communication, and control [1].
To further extend the end-to-end conceptual model of a DT in the context of cars, the authors of this case study used four vehicles of two models, the Toyota Avalon and the Camry. Through on-board automobile sensors, the system gathers a variety of physical parameters from moving cars, including the ABS status, transmission, mileage, and tyre pressure. Figure 3.2 illustrates how the virtual twin in the virtual space layer maintains the vehicle's 2D/3D models, bill of materials, and historical data [1]. In the data analytics and visualisation layer of the end-to-end conceptual model, as shown in Figure 3.2, complex big data analytics and machine learning algorithms perform automated decision-making based on real-time measured sensor data from the cars and on historical data. The data is stored on a high-end, off-the-shelf cluster built with commodity hardware, such as the Hadoop Distributed File System, made up of multiple geographically distributed data nodes. The MapReduce processing algorithm, for example, can be used.
36
Chapter 3 Modelling of Digital Twin
Figure 3.2: End-to-end digital twin conceptual paradigm for cars [1].
Splitting: The logical division of data across cluster nodes, based on the parameters/sensor readings measured by different automobiles, is known as splitting. For example, data node 1 of the cluster stores the ABS status for all cars, data node 2 stores the transmission parameter, data node 3 the tyre status, and data node 4 the mileage covered by each car.

Mapping: During this stage, data from every node is passed to a mapping function, which produces an aggregated value for every automobile model with respect to a particular parameter. For instance, the mapping function on data node 1 gives an aggregate ABS result for each automobile model (Avalon and Camry); the mapping functions on the other data nodes likewise return totals for each model's transmission, tyre condition, and mileage information.

Shuffling: At this stage, the various mapping results are combined, and the relevant records are grouped according to car model. For example, the various parameters from all Camrys and all Avalons are gathered on different cluster data nodes.

Reduce: Based on the stakeholder's status query, the outputs from the shuffle stage are consolidated into a single output. In other words, this phase summarises the full car dataset and offers information to the relevant parties according to their individual requirements.
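The map/shuffle/reduce stages above can be mimicked in a few lines of plain Python. The records are invented stand-ins for the per-car sensor data; a real cluster would distribute the same logic across data nodes:

```python
from collections import defaultdict

# Invented stand-ins for per-car sensor records: (model, parameter, value)
records = [
    ("Camry", "mileage", 120), ("Avalon", "mileage", 80),
    ("Camry", "mileage", 60),  ("Avalon", "tyre_ok", 1),
    ("Camry", "tyre_ok", 1),   ("Camry", "tyre_ok", 0),
]

# Map: emit (model, parameter) -> value pairs
mapped = [((model, param), value) for model, param, value in records]

# Shuffle: group the emitted values by key (car model and parameter)
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: aggregate each group into a single answer for stakeholders
summary = {key: sum(values) for key, values in groups.items()}
print(summary[("Camry", "mileage")])   # 180
```

On a real Hadoop cluster the map and reduce functions run in parallel on the nodes holding each data split, but the shape of the computation is exactly this.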
Following the processing and analytics phases, the results are communicated to the various levels of stakeholders via dashboard graphs, charts, tables, and reports for data monetisation and visualisation. As shown in Figure 3.3, individual car owners, local dealers, state dealers, country dealers, and car manufacturers all have different monitoring privileges. Evaluation metrics that can be used to validate the proposed model’s performance include the accuracy of predicting vehicular sensor failures in real time, and the total latency and throughput of end-to-end communication from the physical space of the DT to its virtual space and vice versa.
Figure 3.3: Model of use-case analytics for vehicular digital twin [1].
References
[1] A. R. Al-Ali, R. Gupta, T. Zaman Batool, T. Landolsi, F. Aloul, A. Al Nabulsi, Digital Twin Conceptual Model within the Context of Internet of Things, MDPI, 12, 2020, 163–175.
[2] C. Semeraro, M. Lezoche, H. Panetto, M. Dassisti, Digital Twin Paradigm: A Systematic Literature Review, Computers in Industry, Elsevier, 130.
Chapter 4 Digital Twin and IoT

Abstract: This chapter introduces how digital twins accelerate the growth of IoT, with a case study. The Internet of things (IoT) is a digital network of connected objects. IoT devices come in various forms and sizes: a smart virtual assistant in your living room, a smart home security system, or the car parked in your garage. On a larger scale, they take the shape of Internet-connected traffic signals in smart cities. According to statistics, 127 new IoT devices are connected to the web every second, so IoT is growing rapidly. An IoT device takes its place in the network much as a physical thing occupies the real world. Many people are unaware of a silent factor facilitating this rapid rise of IoT: allied digital twins. A “digital twin” is a virtual version of a physical component of a system. It reproduces, in a virtual setting, the IoT device’s real size, capabilities, and functionalities. The sensors of an IoT device collect data and transmit it to its digital duplicate. IoT researchers and developers use the data to generate fresh logic and schemas, which they then test against the digital twin. The tested code is then uploaded to the IoT device via over-the-air updates. A digital twin is thus a representation of an actual object in cyberspace; scientists, researchers, and IoT developers use it to execute simulations without a physical gadget. Digital twins therefore deserve part of the credit for the explosive growth of IoT. Additionally, digital twins can use artificial intelligence (AI) and data analytics to optimise performance using real-time IoT data.
4.1 Contribution of IoT in Development of Digital Twin

IoT devices contribute to the growth of the digital twin. As IoT devices evolve, digital twins can cover increasingly compact and simple products, offering businesses more advantages. In order to satisfy individualisation in light of the market’s escalating global competition, businesses must develop data-oriented interactions [1, 2]. From production to service and operations, the Fourth Industrial Revolution (Industry 4.0) is a focus of digital transformation [3]. It promises more adaptability, higher standards, and more output. However, much criticism has focused on the issue of meeting customised features [4]. Despite the success of Industry 4.0, there are several problems with the individualisation paradigm in practical application [5]. In the age of Industry 4.0, one of the most sought-after competencies is the ability to offer distinctive features at scale [6].
https://doi.org/10.1515/9783110778861-004
For Industry 4.0 capabilities to be used to their fullest potential, every physical asset must have a digital representation [7]. Mirroring physical assets as digital representations can greatly aid in addressing complex business difficulties [8]. Additionally, the lack of convergence between physical space and cyberspace causes the data to remain static throughout the product life cycle, which has a negative impact on efficiency and sustainability [9]. Advanced engineering and related Industry 4.0 technologies are revolutionising how engineers interact with physical assets by using representational digital data in an ever-expanding digital world. A digital twin is a digital clone of a physical entity with a two-way dynamic mapping between the physical thing and its digital model, which contains a structure of connected elements and meta-information. In other words, it describes a framework for digital twin manufacturing as virtual representations of real-world manufacturing components such as workers, goods, equipment, and process definitions. The physical infrastructure must be reconfigured for individualisation as manufacturing processes become more digital.
Figure 4.1: Digital twin connection with various components.
Figure 4.1 shows the connection between a digital twin and the other components of an IoT environment. With real-time data, a digital twin can mirror the behaviour of its physical object. A digital twin is continuously refined and can, by design, identify assets. Researchers’ interest in digital twins has grown recently, but little work has addressed many paradigms, such as very large unique assets. Due to significant global concerns, it is vital to address interactions across the physical, digital, and human worlds.

Figure 4.2: Digital twin in IoT.
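The real-time mirroring of a physical object by its twin can be sketched minimally as follows. The class, device name, and fields are invented for the illustration; a real deployment would synchronise state over an IoT protocol such as MQTT:

```python
class DigitalTwin:
    """Virtual replica that mirrors the last known state of a physical device."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.state = {}    # latest mirrored sensor readings
        self.history = []  # historical data kept for analytics

    def ingest(self, reading):
        """Physical -> virtual: apply a sensor reading to the twin."""
        self.state.update(reading)
        self.history.append(dict(reading))

    def command(self, setting, value):
        """Virtual -> physical: a change tested on the twin first, then
        pushed to the device (here simply returned as a message)."""
        self.state[setting] = value
        return {"device": self.device_id, "set": {setting: value}}


twin = DigitalTwin("turbine-07")
twin.ingest({"temperature": 71.5, "rpm": 1480})
twin.ingest({"temperature": 74.2})
print(twin.state)  # {'temperature': 74.2, 'rpm': 1480}
```

The two methods correspond to the two directions of the dynamic mapping: `ingest` keeps the virtual state current, while `command` lets changes be validated against the twin before reaching the physical asset.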
4.2 Digital Twin Use Cases by Industry

Digital twins are changing the way work is done across several industries, with a number of corporate applications. Businesses that are aware of those applications can incorporate digital twins into their processes. Accordingly, we examine how digital twins are used in manufacturing, retail, healthcare, and supply chains, and we gather a number of examples, use cases, and applications for digital twins.
4.2.1 Supply Chain

In the logistics industry and supply chain, the digital twin plays an important role, as shown in Figure 4.2. Some of its uses in the supply chain, summarised in Figure 4.3, are listed below.
Figure 4.3: Various applications of digital twin.
– Prediction of packaging material’s performance: Before a product is packed, the packaging can first be virtualised and checked for flaws. Digital twins help logistics firms determine the feasibility of a material.
– Enhancement in product delivery: With the aid of digital twins, logistics organisations can examine how various packing circumstances may impact product delivery.
– Optimising warehouse design and operational performance: With the help of digital twins, logistics organisations can test different warehouse designs and select the one that maximises operational effectiveness. Digital twins can optimise more aspects of warehouses than just the layout: to improve their operations, clients can create a digital twin of the organisation, simulate operations, and compare actual warehouse performance to the modelled one to find areas for optimisation.
– Creating a logistics network: A road network’s digital twin contains data on the construction, design, and flow of traffic. Logistics firms can use this information to plan distribution routes and the sites for inventory storage.
4.2.1.1 Construction Companies
Construction companies can enhance efficiency by using a digital twin to better understand how a building is performing in real time. The data gathered from the digital twin can inform future building planning and design.

4.2.1.2 Healthcare
To improve patient care, cost, and performance, digital twins can assist healthcare practitioners in virtualising the healthcare experience. Use cases for healthcare can be divided into two categories:
– Improving operational efficiency of healthcare operations: Healthcare providers can assess the operational success of the organisation by building a digital twin of the hospital, its operational strategies, capacity, personnel, and care models.
– Personalised care improvement: In order to provide individualised care, such as specific medications for each patient, healthcare providers and pharmaceutical corporations can employ digital twins to mimic the genomic code, physiological traits, and lifestyle of patients.

4.2.1.3 Use in Manufacturing Industry
The manufacturing sector is where digital twins are employed most frequently. Manufacturing relies on expensive machinery that produces a lot of data, which makes it easier to create digital twins. The following are some manufacturing applications for digital twins:
– Product development: Engineers can use digital twins to assess a product’s viability before it is released. Based on the test results, engineers begin production or shift their focus to building a marketable product.
– Design customisation: Businesses can build many variations of a product using digital twins, enabling them to provide customised goods and services to their clients.
– Performance improvement: Engineers can use a digital twin to track and examine finished goods and identify those that are flawed or perform less well than expected.
– Predictive maintenance In order to reduce non-value-adding maintenance tasks and increase overall machine performance, manufacturers use digital twins to forecast probable downtimes of their products. This allows specialists to take corrective action before a malfunction occurs, reducing costs for enterprises.
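The downtime forecasting described in the predictive maintenance point above is often approached, in its simplest form, as anomaly detection on streamed sensor values. The following is a toy threshold-based sketch with invented readings; production systems would typically use trained ML models instead:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, k=2.0):
    """Flag indices whose value deviates more than k standard deviations
    from the mean of the preceding `window` readings."""
    flags = []
    for i in range(window, len(readings)):
        past = readings[i - window:i]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(readings[i] - mu) > k * sigma:
            flags.append(i)  # schedule maintenance before failure
    return flags

# Vibration amplitudes reported by a machine's twin; the spike suggests wear.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 3.2, 1.1]
print(flag_anomalies(vibration))  # [6]
```

When the twin flags such a reading, specialists can inspect the physical machine before an actual malfunction occurs, which is exactly the cost saving the text describes.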
4.2.1.4 Aerospace
Today the importance of digital twins in the aerospace industry is acknowledged. With digital twins, engineers can use predictive analytics to foresee any future problem involving the airframe, engine, or other components to ensure the safety of the people on board.

4.2.1.5 Automotive
The majority of new automotive development occurs online. In the car industry, digital twins are used to build a virtual representation of a connected vehicle. Even before production begins, automobile businesses use the technology to design the final car product. They model and examine both the manufacturing process and potential issues that may arise when the vehicle is put into service.

4.2.1.6 Self-Driving Car Development
Though digital twin practices can be used in the traditional automotive manufacturing industry, digital twins are especially handy for autonomous vehicle companies. Self-driving cars contain numerous sensors that collect data about the vehicle itself and its environment. By creating a digital twin of a car and testing every aspect of the vehicle, companies can ensure that unexpected damage and injuries are minimised. Some applications of digital twins in the automotive industry are road testing and vehicle maintenance.
4.3 Insight of Digital Twins in IoT

A digital twin aids IoT in a variety of ways, from enhancing the capacity to conduct various experiments to providing real-time insights, as discussed below:
– Effective experiments: Any kind of experiment is challenging to begin with. Experiments require expensive resources, and if things do not go as planned, they can end up costing even more. Because IoT is a young technology, there is a huge demand for experimentation, and the testing must be conducted with careful consideration for the use of resources. Even when there are not enough physical devices available, digital twins offer the virtual infrastructure needed to run multiple experiments.
– Real-time insights: A digital twin in an IoT setting can provide this kind of information without every change having to be pushed to the actual hardware in the production environment.
– Reduction in risk: The most well-known advantage of IoT is that it provides simultaneous access to a huge number of devices. This also presents a drawback: a small security flaw could allow hackers and other unauthorised users to access the IoT network, and the risk may grow when testing involves actual physical equipment that is currently in use in production. With digital twins, this risk is removed. They enable researchers and developers to experiment securely with several situations before settling on one that is safe and practical to implement.

Based on varying data, digital twins can be utilised to anticipate various outcomes. This is comparable to the run-the-simulation scenario frequently featured in science fiction movies, in which a potential situation is validated in a virtual setting. With the help of extra software and data analytics, digital twins can frequently help designers determine where objects should go or how they should operate before they are physically deployed, as well as optimise an IoT deployment for maximum efficiency. The more closely a digital twin replicates the original product, the more likely efficiency gains and other advantages are to be realised. For example, in manufacturing, where extensively instrumented equipment is deployed, digital twins can model how the devices have performed over time, which could help forecast future performance and potential failure.
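The "run the simulation" idea above can be illustrated by evaluating many randomised scenarios against a twin before touching real hardware. This is a toy Monte Carlo sketch; the gateway load model, message rates, and capacity figure are all invented for the example:

```python
import random

def simulate_day(num_devices, msg_rate, capacity, rng):
    """Return True if the simulated gateway survives one day's traffic."""
    load = sum(rng.gauss(msg_rate, msg_rate * 0.2) for _ in range(num_devices))
    return load <= capacity

def failure_probability(num_devices, msg_rate, capacity, trials=1000, seed=42):
    """Estimate overload risk by replaying many randomised days on the twin."""
    rng = random.Random(seed)
    failures = sum(
        not simulate_day(num_devices, msg_rate, capacity, rng)
        for _ in range(trials)
    )
    return failures / trials

# Test a planned 100-device deployment on the twin before ordering hardware.
p = failure_probability(num_devices=100, msg_rate=50, capacity=5500)
print(f"estimated overload probability: {p:.3f}")
```

Each trial is one virtual "day" of traffic; sweeping the parameters lets the designer size the deployment safely before any physical device is at risk.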
4.4 Applications of Digital Twin in Healthcare: A Case Study

4.4.1 Digital Twin of Hospitals

The utilisation of digital twins in the healthcare industry is an excellent illustration of their organisational-level use, as shown in Figure 4.4. Facility administrators, physicians, and nurses can gain valuable, real-time insight into patient health and procedures by digitally duplicating a hospital. By using sensors to monitor patients and coordinate equipment and employees, digital twins provide a better method of assessing processes and alerting the appropriate individuals at the appropriate moment when immediate action is required. In turn, this can lower operating costs and enhance patient satisfaction by reducing emergency room wait times and streamlining patient flow. After deploying digital twin technology to eliminate patient flow bottlenecks, one hospital observed a 900% gain in cost savings. More lives can be saved by using digital replicas to forecast and avoid patient emergencies such as cardiopulmonary or respiratory arrest, also known as code blue emergencies; one healthcare network reported a 61% decrease in code blue incidents after implementing digital twin technology in its organisation. A virtual twin of a hospital can be used to analyse operational tactics, resource allocation, staffing, and care models in order to pinpoint problem areas, anticipate upcoming difficulties, and enhance organisational tactics. Some of the advantages of these measures are as follows:
Figure 4.4: Digital twin healthcare.
Efficient utilisation of resources: The creation of digital twins using historical and current data on hospital operations and the surrounding area (such as COVID-19 cases and car accidents) enables hospital management to identify bed shortages, optimise staff schedules, and assist with room operation. Such data lowers costs while improving resource use, hospital performance, and employee productivity. For instance, a review study revealed that using digital twins to coordinate various processes efficiently allowed a hospital to shorten the time required to treat stroke patients.

Management of risk: Digital twins offer a secure environment to evaluate adjustments to system performance (staffing levels, operating room vacancies, device maintenance, etc.), allowing data-driven strategic decisions to be implemented in a delicate and complicated setting.
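The bed-shortage question above often reduces, at its simplest, to occupancy arithmetic over forecast demand (average occupancy = arrival rate × length of stay, in the style of Little's law). The admission figures and safety margin below are invented for the sketch:

```python
import math

def beds_needed(admissions_per_day, avg_stay_days, safety_margin=0.2):
    """Estimate required beds: average occupancy = arrival rate x stay,
    padded with a safety margin for demand spikes, rounded up."""
    occupancy = admissions_per_day * avg_stay_days
    return math.ceil(occupancy * (1 + safety_margin))

# Twin-fed inputs: e.g. a COVID-19 wave raises daily admissions from 12 to 20.
baseline = beds_needed(12, 4.5)
surge = beds_needed(20, 4.5)
print(baseline, surge)  # 65 108
```

A hospital twin would feed such a formula continuously with live admission and stay data, flagging the shortfall (here, 43 extra beds) before the surge arrives.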
4.4.2 Digital Twin of Human Body

In order to develop individualised medication and treatment strategies, digital twins are also used to simulate organs and single cells as well as a person’s genetic make-up, physiological traits, and lifestyle choices. These exact duplicates of the internal organ systems of the human body advance patient care and medical practice as follows.

4.4.2.1 Diagnosis at Personal Level
1. Digital twins enable the gathering and use of essential data (such as blood pressure and oxygen levels) at the individual level, which, by providing basic information, aids people in tracking persistent conditions and, as a result, their priorities and contacts with doctors. Consequently, these unique facts form the foundation for clinical studies and laboratory research.
2. By focusing on each patient separately, clinicians avoid using big samples to determine the best course of treatment. Instead, they use specialised simulations to monitor how each patient responds to various treatments, improving the precision of the entire treatment strategy. Despite the interest in, and the rising number of initiatives for, tailored care, digital twins are not yet applied to real patients. Linköping University in Sweden, one of the institutions specialising in customised medicine, mapped mouse RNA into a digital twin to forecast the effects of treatment.
4.4.2.2 Efficient Treatment Planning
Thanks to sophisticated modelling of the human body, doctors may experiment with medicines, detect pathology before diseases become observable, and better prepare for procedures.
4.4.3 Digital Twins for Development of Medical Instruments and Drugs

New medications and medical devices can be better designed, developed, tested, and monitored with the help of digital twins.

Drugs: Thanks to digital twins of pharmaceuticals and chemical substances, scientists can modify or remodel medications by taking into account particle size and composition properties to increase delivery efficiency.

Medical instruments: A digital twin of a medical device gives designers the ability to test the functions or properties of the device, make changes to the design or the materials, and test the effectiveness of those changes in a virtual setting before manufacturing. This greatly improves the performance and safety of the finished product while lowering the expenses associated with failures.
4.5 Challenges of Digital Twin in Healthcare

Digital twin implementation in healthcare faces several challenges, discussed below.
4.5.1 Less Adoption

Routine clinical practice has not yet widely embraced digital twin technologies. The use of the technology in healthcare facilities (such as hospitals and labs) should be increased in order to improve clinical procedures, digital simulations, and patient care in general. However, despite the growing usage of digital twins in the healthcare system, it is suggested that they will continue to be pricey and out of reach for most people. Digital twin technology may come to be seen as a perk available only to those with greater financial means, leading to inequalities in the healthcare system.
4.5.2 Quality of Data

A digital twin’s AI system learns from the accessible biological data, but since the data is often collected by private companies, its quality may be poor. As a result, such data is difficult to analyse and portray, which eventually degrades the models and, in turn, their dependability during the diagnosis and treatment processes.
4.5.3 Privacy of Data

The use of digital twins calls for healthcare providers and insurance providers to collect an increasing amount of data at the individual level. These health organisations gradually develop a thorough understanding of a person’s biological, genetic, physical, and lifestyle-related facts. Such individualised information might be used for business purposes rather than in the interests of the individuals; one example would be an insurance firm using the data to make finer distinctions relevant to personal identity.
References
[1] D. Mourtzis, M. Doukas, F. Psarommatis, C. Giannoulis, G. Michalos, A Web-Based Platform for Mass Customisation and Personalisation, CIRP J. Manuf. Sci. Technol. 7(2), 2014, 112–128.
[2] Y. Lu, X. Xu, L. Wang, Smart Manufacturing Process and System Automation – A Critical Review of the Standards and Envisioned Scenarios, J. Manuf. Syst. 56, 2020, 312–325.
[3] R. Y. Zhong, X. Xu, E. Klotz, S. T. Newman, Intelligent Manufacturing in the Context of Industry 4.0: A Review, Engineering 3(5), 2017, 616–630.
[4] Y. Koren, M. Shpitalni, P. Gu, S. J. Hu, Product Design for Mass-Individualization, Procedia CIRP 36, 2015, 64–71.
[5] G. Büchi, M. Cugno, R. Castagnoli, Smart Factory Performance and Industry 4.0, Technol. Forecast. Soc. Change 150, 2020, 119790.
[6] Y. Liu, X. Xu, Industry 4.0 and Cloud Manufacturing: A Comparative Analysis, J. Manuf. Sci. Eng. Trans. ASME 139(3), 2017, 1–8.
[7] Q. Qi, F. Tao, Digital Twin and Big Data Towards Smart Manufacturing and Industry 4.0: 360 Degree Comparison, IEEE Access 6, 2018, 3585–3593.
[8] R. Soderberg, K. Warmefjord, J. S. Carlson, L. Lindkvist, Toward a Digital Twin for Real-Time Geometry Assurance in Individualized Production, CIRP Ann. – Manuf. Technol. 66(1), 2017, 137–140.
[9] F. Tao, J. Cheng, Q. Qi, M. Zhang, H. Zhang, F. Sui, Digital Twin-Driven Product Design, Manufacturing and Service with Big Data, Int. J. Adv. Manuf. Technol. 94(9–12), 2018, 3563–3576.
Chapter 5 Machine Learning, AI, and IoT to Construct Digital Twin

Abstract: This chapter explains how system builders should approach the development of digital twins that use artificial intelligence, and asks: what are the biggest hardware challenges when it comes to artificial intelligence/machine learning?
5.1 Introduction

Sensors, artificial intelligence (AI), the Internet of things (IoT), and machine learning (ML) are the important constituents of the digital twin (DT) architecture. From the viewpoint of computational perception, the working of a DT is boosted by combining data with information: data collected from various sensory devices is converted into high-level understanding. The key function of a DT implementation is to produce a precise asset model through data-driven analytics combined with physics-based models. As a result, activities such as parameter prediction, optimisation, warnings, and anomaly detection become known in advance. Through edge computing and a smart gateway, the IoT system carries real-time data. The pre-processed online data is fed to the DT model; offline data, processed with data-mining algorithms, also acts as input to the DT. By combining the two aspects, modelling and analytics, a model of a specific target can be achieved. To obtain accuracy and exact forecasting, the complete workflow is maintained with the help of IoT sensors and ML algorithms. The fields that contribute to the implementation of DTs include cloud computing, edge computing, AI, IoT, and ML. In the world of IoT, AI improves the capabilities of the DT, which creates a dynamic software model of a physical object or system that is entirely dependent on data understanding, data improvement, and value addition. To extract information, the collected data goes through a number of processes. Data is collected in different ways, such as through sensors, application programming interfaces, and software development kits, and it is cleaned before processing and analysis. Further analyses and data mining are then carried out with the help of tools such as deep learning, ML, and AI.
https://doi.org/10.1515/9783110778861-005
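The collect → clean → analyse workflow just described can be sketched as a minimal pipeline. The sensor name, the valid range, and the feature set are invented for the illustration:

```python
def clean(readings, lo=-40.0, hi=125.0):
    """Drop out-of-range readings before they reach the twin's model."""
    return [r for r in readings if lo <= r <= hi]

def extract_features(readings):
    """Turn a window of raw readings into model-ready features."""
    return {
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "min": min(readings),
    }

def update_twin(twin_state, sensor_id, readings):
    """One pipeline step: collect -> clean -> analyse -> update the twin."""
    cleaned = clean(readings)
    twin_state[sensor_id] = extract_features(cleaned)
    return twin_state

state = {}
update_twin(state, "motor_temp", [70.1, 71.4, 999.0, 69.8])  # 999.0 is noise
print(state["motor_temp"]["max"])  # 71.4
```

In a production DT, the cleaning rule would come from sensor specifications and the features would feed the physics-based or ML models mentioned above, but the stage ordering is the same.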
5.2 Big Data, Artificial Intelligence, and Machine Learning

Big data has continued to be one of the most popular academic topics over the past few years. It differs from conventional data because of its enormous size, rapid generation speed, and heterogeneity. Researchers refer to volume, velocity, and variety as “the 3Vs of big data”. As a result, we refer to any data that is diverse (variety), of large size (volume), produced quickly (velocity), and of structured, semi-structured, or unstructured character as big data. Value and veracity were later added to the list as additional Vs. The fourth V, value, is incorporated into the characteristics of big data analytics and benefits the organisation, as it is valuable. Big data analytics uses cutting-edge mathematical, statistical, probabilistic, or AI models to analyse massive data and turn it into useful information. The three Vs of big data, however, present a whole new set of problems, such as how to collect, store, share, manage, process, analyse, and visualise this enormous amount of data at such a rapid rate. To this end, a number of frameworks [1, 2] have been developed to handle large data and enable efficient analytics across a range of applications. Learning, reasoning, and self-correction are the three human cognitive abilities that AI replicates digitally. The rules of digital learning are a set of computer algorithms that transform actual historical data into information that may be used. To achieve a particular result, the emphasis of digital reasoning is on choosing the proper rules. The results of learning and reasoning, in turn, feed the iterative process of digital self-correction. Every AI model uses this technique to build intelligent systems that perform tasks that usually call for human intelligence.
While some AI systems use logic- and knowledge-based approaches, most are powered by rule-based algorithms, deep learning, data mining, or ML. Two popular AI methods at the moment are deep learning and ML, and it can be difficult to distinguish between the techniques of AI, ML, and deep learning. ML is an AI technique that searches historical data for patterns to assist decision-making; the learning process becomes more exact as more data accumulates. Three types of ML can be used to train a model for classification or future predictions: (1) supervised learning, which only accepts datasets whose outputs have labels; (2) unsupervised learning, which uses unlabelled datasets and is used for grouping or clustering; and (3) reinforcement learning, which also accepts data records without labels but gives the AI system feedback after it performs certain actions. Regression, decision trees, support vector machines, naive Bayes classifiers, and random forests are a few supervised learning techniques. Techniques for unsupervised learning include mixture models, hierarchical clustering, and K-means. The category of reinforcement learning includes Monte Carlo learning and Q-learning. Deep learning, on the other hand, is a method of computer learning that incorporates one or more hidden layers of artificial neurons and takes inspiration from biological neural networks. The repeated processing of prior information by multiple layers, the formation of connections, and the ongoing weighing of neuron inputs for the best results are all aspects of its learning process.
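As a small, concrete instance of the unsupervised category described above, here is a minimal K-means clustering loop in pure Python. The one-dimensional data points and the choice of k are invented for the example:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster 1-D points into k groups by iteratively updating centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            sum(c) / len(c) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return sorted(centroids)

# Two obvious groups of sensor readings, around 1.0 and around 10.0.
data = [0.9, 1.0, 1.1, 9.8, 10.0, 10.2]
print(kmeans(data, k=2))  # centroids near [1.0, 10.0]
```

No labels are supplied anywhere: the structure (two groups) is discovered from the data alone, which is precisely what distinguishes unsupervised from supervised learning.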
5.3 Big Data, Artificial Intelligence, Machine Learning, IoT, and Digital Twin: A Relationship

Monitoring of physical objects [3], indoor asset tracking [4], and outdoor asset tracking [5] are just a few of the intriguing applications made possible by the deployment of IoT and developing sensor technologies in industrial settings. By connecting the actual environment to its virtual representation, IoT devices make it easier to collect the real-time data required to create a physical component’s DT and to optimise and maintain it (using sensors and actuators). Because the aforementioned data collected through IoT is vast in nature [6] (as discussed in Section 5.2), a good DT can be developed using big data analytics. Because industrial processes are so complicated, spotting potential problems in their earliest stages would be difficult using conventional methods; these problems, however, are simple to extract from the gathered data, adding intelligence and efficiency to industrial processes. Handling this voluminous data in DT and industrial realms calls for sophisticated methods, structures, frameworks, instruments, and algorithms, and the best platform for processing and analysing massive data is frequently cloud computing [7]. Additionally, only by utilising cutting-edge technology can an intelligent DT system be created: AI methods use the gathered data. To achieve this, the DT is given the ability to detect (e.g. the optimum process strategy, the best allocation of resources, safety issues, and faults in planning, process control, scheduling, and assembly lines) [8, 9], to predict (e.g. health status and early maintenance needs) [10], and to make decisions dynamically using virtual twins or physical sensor data. Big data is gathered from the physical world using IoT; the data is then fed to an AI model to produce a DT, and the enhanced DT can then be used in other business processes.
The relationship among AI, ML, big data, IoT, and the digital twin is shown in Figure 5.1.
Figure 5.1: Association between Internet of things, AI-ML, big data, and digital twin.
5.4 Deployment of Digital Twin Using Machine Learning and Big Data

DT deployments in various sectors are discussed as follows:
5.4.1 Smart Manufacturing

Smart manufacturing includes three main components: (1) data collection from manufacturing cells using a range of sensors, (2) data management, and (3) data communication between various devices and computers. In a DT environment, information is acquired from a real manufacturing cell and/or its virtual equivalent. Using AI techniques, such data can also be put to use for problem diagnosis, efficient assembly lines, and other manufacturing-related tasks. The DT process based on AI-ML is shown in Figure 5.2. Manufacturing is the primary sector where DT development is taking place. To optimise the dynamic scheduler for smart manufacturing, Xia et al. [10] suggested creating a digital duplicate of a production cell. Deep reinforcement learning (DRL) techniques, such as natural deep Q-learning, double deep Q-learning, and prioritised experience replay, were used to develop and train the digital engine, an intelligent scheduler agent [12]. Through the use of an open platform communications server, the fundamental properties of the cell were ascertained from its physical and virtual attributes. Gradient descent was used to train the DRL network because it needs only a minimal number of learning iterations while delivering intelligence, dependability, and resilience. The resulting DT-based dynamic scheduler expedites the design, testing, and validation of intelligent control systems in order to optimise the production process.
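For orientation, the Q-learning family mentioned above rests on one update rule, Q(s,a) ← Q(s,a) + α[r + γ·max over a' of Q(s',a') − Q(s,a)]. Below is a tabular toy version for a one-state scheduling choice; the states, actions, and rewards are invented, and real DRL schedulers replace the table with a neural network trained by gradient descent:

```python
import random

def train_q(episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=1):
    """Tabular Q-learning on a toy scheduler: in state 'idle', choosing
    'run_job' yields reward 1, 'wait' yields 0; both return to 'idle'."""
    rng = random.Random(seed)
    actions = ["run_job", "wait"]
    q = {("idle", a): 0.0 for a in actions}
    for _ in range(episodes):
        s = "idle"
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if rng.random() < eps:
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda x: q[(s, x)])
        r = 1.0 if a == "run_job" else 0.0
        best_next = max(q[("idle", x)] for x in actions)
        # Core Q-learning update rule.
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q

q = train_q()
print(max(["run_job", "wait"], key=lambda a: q[("idle", a)]))  # run_job
```

The agent learns from reward feedback alone that dispatching a job beats idling, which is the reinforcement-learning loop that the DRL scheduler scales up to realistic production cells.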
Figure 5.2: Big data analytics and AI-ML in smart manufacturing.
Chapter 5 Machine Learning, AI, and IoT to Construct Digital Twin
5.5 Use of AI
DTs must have three key components: data collection, data modelling, and data application. Data collection is the process of fully exploiting lidar measurement, cameras, tilted aerial photogrammetry, satellite remote sensing, and other technologies to capture three-dimensional (3D) data from a complete physical-space scene. The sensor's job is to gather various sorts of real data from the outside world. The cost, efficiency, and quality of data collection are affected by the technological complexity of the task as well as the precision and efficiency required. After a substantial amount of original physical-world data has been collected, a 3D model that recovers the physical world is built using automated modelling methods. DT data are then better able to support a variety of operating procedures and a highly accurate virtual reconstruction of the environment. Data modelling may be split into two categories: visual 3D modelling and semantic modelling. Visual 3D modelling represents the 3D reproduction of the physical world. Semantic modelling of DTs covers both the "structure" of the gathered data and the identification of items such as cars, highways, people, and interior objects. The concept of mapping is shown in Figure 5.3.
Figure 5.3: Mapping in digital twin.
AI has significantly changed many industries, alongside the development of computer science as a whole. In an effort to develop new intelligent devices that can respond in a human-like manner, it seeks to understand the origins of intelligence. Robotics, language understanding, expert systems, and natural language processing are all subfields of the research domain. Robots and AI will be used in simulation systems, control systems, and economic and political decision-making. As shown in Figure 5.4, AI is subtly changing our way of life: a sweeping robot can easily clean a large duplex house, the smart watches we wear can monitor and predict health hazards, home robots can read books to our children in imitated parental voices, and map software helps us avoid traffic while driving. DT and AI working together will transform our lives in ways we cannot yet imagine. A theoretical and technical foundation for AI in DT has numerous applications in product design, equipment manufacture, medical analysis, aerospace, and other industries. In China, engineering construction is currently the area with the most extensive application, and intelligent manufacturing is the research area that has drawn the most interest. The classified applications of AI are shown in Figure 5.5.
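The data collection, data modelling, and data application loop described above can be sketched as a minimal twin object; the class name, the sensor fields, and the inspection thresholds below are invented for illustration only:

```python
from dataclasses import dataclass, field

# Hypothetical minimal digital-twin mapping: sensor readings from the physical
# space update a virtual model (data collection -> data modelling), which then
# answers application queries (data application).
@dataclass
class VirtualAsset:
    temperature_c: float = 20.0
    vibration_mm_s: float = 0.0
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        # Map raw physical-world readings onto the model state
        self.temperature_c = reading.get("temperature_c", self.temperature_c)
        self.vibration_mm_s = reading.get("vibration_mm_s", self.vibration_mm_s)
        self.history.append(reading)

    def needs_inspection(self) -> bool:
        # A simple rule standing in for the richer analytics a real DT would run
        return self.temperature_c > 80.0 or self.vibration_mm_s > 7.1

twin = VirtualAsset()
twin.ingest({"temperature_c": 85.2, "vibration_mm_s": 3.0})
print(twin.needs_inspection())  # True: the reading is over-temperature
```

A production system would replace the dictionary readings with streams from the sensing technologies listed above and the rule with learned models.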
Figure 5.4: AI in digital twin simulation system.
Figure 5.5: Artificial intelligence classification applications in digital twin.
5.5.1 Digital Twin in Aerospace
The aircraft sector was the first suggested application area for DTs. As shown in Figure 5.6, for example, DTs are used for the maintenance and quality control of flight simulators and other aerospace flight equipment [13]. A model of the real aeroplane is established in the digital space using on-board sensors, and the status of the virtual aircraft is kept in sync with the actual flying aircraft. This makes it possible to simulate and record the take-off and landing procedures of each aircraft. Through data analysis in the digital space, it can be clearly determined whether the aircraft requires maintenance or can continue to the next flight [14]. Fleet-level statistics alone are not sufficient for monitoring the health and capability of a specific aircraft, because damage status differs from aircraft to aircraft owing to variations in production, material quality, mission history, and pilot behaviour. Instead, a system designed specifically for each aircraft is preferred. Compared with other sectors, aviation assembly is characterised by a complex structure, a massive number of components, and extremely stringent requirements on the product's aerodynamic shape [15]. Consequently, assembly frames must be used properly to ensure that the product is not damaged by installation-related human error, which can lead to deformation and assembly problems [16].
[Figure 5.6 depicts a damage-driven loop: structural digital twinning and a flight load and environment digital twin feed stress, temperature, and vibration prediction; the results drive damage and residual life prediction and structural reliability analysis, which in turn update the refined digital twin, task selection and execution, and structural health monitoring.]
Figure 5.6: Flight life prediction with the help of digital twin.
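The per-aircraft (rather than fleet-average) damage tracking described above can be sketched as a toy cumulative-damage model; the damage rate, the severity values, and the maintenance threshold below are illustrative numbers, not data from any real airframe:

```python
# Toy per-aircraft damage accumulation in the spirit of Figure 5.6:
# each flight adds damage scaled by the load severity measured by the twin;
# maintenance is flagged before the damage index approaches 1.0.
BASE_DAMAGE_PER_FLIGHT = 0.002  # illustrative per-flight damage at nominal load
MAINTENANCE_THRESHOLD = 0.8     # illustrative inspection trigger

def accumulate_damage(damage: float, load_severity: float) -> float:
    # load_severity ~ 1.0 for a nominal flight; harsher flights age the
    # airframe faster, which is why fleet averages are not sufficient
    return damage + BASE_DAMAGE_PER_FLIGHT * load_severity

def maintenance_due(damage: float) -> bool:
    return damage >= MAINTENANCE_THRESHOLD

damage = 0.0
for severity in [1.0, 1.4, 0.9, 2.1]:  # per-flight severities from the twin
    damage = accumulate_damage(damage, severity)
print(round(damage, 4), maintenance_due(damage))  # 0.0108 False
```

The point of the sketch is that two aircraft with the same flight count but different severity histories accumulate different damage, so each one gets its own maintenance window.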
5.5.2 Application of AI in Autonomous Driving
AI applications are emerging as deep technologies for big data analysis, and learning methods are progressing rapidly. AI algorithms must be used to create automated vehicle systems. In the real world, autonomous driving technology can decrease road accidents, increase the effectiveness with which time, space, and other resources are used, and offer a great deal of convenience to drivers, especially disabled drivers. Because of the demanding technological requirements, however, DTs are needed to simulate driving in a virtual world before autonomous driving is deployed. Self-driving vehicles must pass stringent virtual simulation testing before being allowed on the road to assure their safety [17]. In the conventional virtual simulation test environment, testing for safety and proactive performance is frequently done using threshold-logic equipment [18]. However, only the controller is really exercised in this type of test: the road conditions, power, gearbox, and other controller-related information are tracked in a virtual environment.
5.5.3 IoT in Self-Driving Cars
The objective of IoT-based self-driving cars is to connect networked vehicles and transform them into autonomous "things". Assuring the interoperability of the various parts and IoT systems, such as services for vehicle and road equipment and sensors, is one of the biggest hurdles this technology faces. IoT industry standardisation organisations make sure that there are no barriers to communication between all the elements. The benefits brought about by the digital age have been used to implement DTs in self-driving cars. Maintaining the security of self-driving vehicles can significantly lower the frequency of traffic accidents, and the benefits of keeping a safe distance when driving are numerous. In order to realise an end-to-end transportation mode and enable an intelligent, safe manufacturing transportation system, the study recommends developing a new safety design to increase the flexibility and security of the entire autonomous driving system. DT-based simulation is a crucial step in the creation of self-driving cars. However, considerable effort is needed to create a simulation environment identical to actual road conditions; once built, such an environment is cost-effective, because the numerous test tasks that need to be done can be run in it at low cost [19].
5.5.4 Big Data in Product Life Management
The "big data" applications currently used in product life management (PLM) can be summarised under three headings:
(1) Big data-based data management and planning. When data are received, the question of how to manage them effectively must be considered. "Big data" management is of great concern to scientists, and several research results exist. An integrated information model, for instance [20], validates the vertical information flow throughout all levels of the factory, and new "big data"-based techniques have been suggested to improve data warehouses for more data, quicker processing, and more users [21]. The dynamic, real-time properties of "big data" also simplify batch work scheduling [22]. Additionally, meta-scheduling is used to address "big data" concerns by providing efficient and affordable scheduling techniques [23].
(2) "Big data"-based supply chain management (SCM). Unlike the conventional understanding of logistics, SCM refers to networks of businesses that operate together. The use of specific tools to improve supply chain collaboration, such as real-time simulation games and social media, necessitates "big data", as these tools produce massive data volumes. Additionally, vast amounts of data have made predictive analytics possible in supply chains for the maker movement [24], and the fusion of "big data" with modern manufacturing methods has transformed supply networks into demand chains, which may result in less waste and quicker consumer response [25].
(3) Big data for mass customisation (MC). Under MC, as a competitive strategy, every consumer receives personalised products thanks to highly integrated and flexible processes [26]. "Big data" analytics, one of the three main technological advances enabling MC, has sped up the Third Industrial Revolution [27, 28].
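As a minimal sketch of the predictive analytics mentioned under heading (2), the snippet below forecasts next-period demand with a moving average; the order figures and window size are invented for illustration, and real SCM analytics would use far richer models:

```python
# Minimal sketch of "big data"-style predictive analytics for SCM:
# a moving-average demand forecast over historical order volumes.
def moving_average_forecast(demand_history, window=3):
    """Forecast the next period as the mean of the last `window` periods."""
    if len(demand_history) < window:
        raise ValueError("not enough history for the chosen window")
    return sum(demand_history[-window:]) / window

orders = [120, 135, 128, 150, 160, 155]  # illustrative weekly order counts
forecast = moving_average_forecast(orders)
print(forecast)  # (150 + 160 + 155) / 3 = 155.0
```

Even this trivial model shows the pattern: historical data flow in, a forecast flows out, and the supply network can respond to predicted rather than observed demand.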
References
[1] M. M. U. Rathore, M. J. J. Gul, A. Paul, A. A. Khan, R. W. Ahmad, J. Rodrigues, S. Bakiras, Multilevel Graph-based Decision Making in Big Scholarly Data: An Approach to Identify Expert Reviewer, Finding Quality Impact Factor, Ranking Journals and Researchers, IEEE Trans. Emerg. Topics Comput., early access, Sep. 10, 2018, doi: 10.1109/TETC.2018.2869458.
[2] S. A. Shah, D. Z. Seker, M. M. Rathore, S. Hameed, S. Ben Yahia, D. Draheim, Towards Disaster Resilient Smart Cities: Can Internet of Things and Big Data Analytics Be the Game Changers?, IEEE Access 7, 2019, 91885–91903.
[3] X. Yuan, C. J. Anumba, M. K. Parfitt, Cyber-physical Systems for Temporary Structure Monitoring, Autom. Construct. 66, Jun. 2016, 1–14.
[4] F. Thiesse, M. Dierkes, E. Fleisch, LotTrack: RFID-based Process Control in the Semiconductor Industry, IEEE Pervas. Comput. 5(1), Jan. 2006, 47–53.
[5] H. Choi, Y. Baek, B. Lee, Design and Implementation of Practical Asset Tracking System in Container Terminals, Int. J. Precis. Eng. Manuf. 13(11), Nov. 2012, 1955–1964.
[6] Y. Wang, S. Wang, B. Yang, L. Zhu, F. Liu, Big Data Driven Hierarchical Digital Twin Predictive Remanufacturing Paradigm: Architecture, Control Mechanism, Application Scenario and Benefits, J. Cleaner Prod. 248, Mar. 2020, Art. no. 119299.
[7] M. Zhang, F. Tao, A. Y. C. Nee, Digital Twin Enhanced Dynamic Job-shop Scheduling, J. Manuf. Syst. 58(Part B), Jan. 2021, 146–156.
[8] M. Schluse, M. Priggemeyer, L. Atorf, J. Rossmann, Experimentable Digital Twins – Streamlining Simulation-based Systems Engineering for Industry 4.0, IEEE Trans. Ind. Informat. 14(4), 2018, 1722–1731.
[9] A. Oluwasegun, J.-C. Jung, The Application of Machine Learning for the Prognostics and Health Management of Control Element Drive System, Nucl. Eng. Technol. 52(10), Oct. 2020, 2262–2273.
[10] K. Xia, C. Sacco, M. Kirkpatrick, C. Saidy, L. Nguyen, A. Kircaliali, R. Harik, A Digital Twin to Train Deep Reinforcement Learning Agent for Smart Manufacturing Plants: Environment, Interfaces and Intelligence, J. Manuf. Syst., Jul. 2020.
[11] H. Van Hasselt, A. Guez, D. Silver, Deep Reinforcement Learning with Double Q-learning, in: Proc. 30th AAAI Conf. Artif. Intell., pp. 1–7, 2016.
[12] T. Schaul, J. Quan, I. Antonoglou, D. Silver, Prioritized Experience Replay, arXiv:1511.05952, 2015 [Online]. Available: http://arxiv.org/abs/1511.05952 (published as a conference paper at ICLR 2016, pp. 1–16).
[13] R. K. Phanden, P. Sharma, A. Dubey, A Review on Simulation in Digital Twin for Aerospace, Manufacturing and Robotics, Mater. Today: Proc. 38, 2021, 174–178.
[14] H. Aydemir, U. Zengin, U. Durak, The Digital Twin Paradigm for Aircraft Review and Outlook, in: AIAA SciTech 2020 Forum, Paper 0553, 2020.
[15] T. Pogarskaia, M. Churilova, M. Petukhova, et al., Simulation and Optimization of Aircraft Assembly Process Using Supercomputer Technologies, Russ. Supercomput. Days 965, 2018, 367–378.
[16] N. Zaitseva, S. Lupuleac, M. Petukhova, et al., High Performance Computing for Aircraft Assembly Optimization, in: 2018 Global Smart Industry Conference (GloSIC), pp. 1–6, 2018.
[17] M. Dikmen, C. Burns, Trust in Autonomous Vehicles: The Case of Tesla Autopilot and Summon, in: IEEE International Conference on Systems, Man, and Cybernetics, pp. 1093–1098, 2017.
[18] A. James, O. Krestinskaya, A. Maan, Recursive Threshold Logic – A Bioinspired Reconfigurable Dynamic Logic System with Crossbar Arrays, IEEE Trans. Biomed. Circuits Syst. 14(6), 2020, 1311–1322.
[19] H. Yun, D. Park, Simulation of Self-driving System by Implementing Digital Twin with GTA5, in: 2021 International Conference on Electronics, Information, and Communication (ICEIC), pp. 1–2, 2021.
[20] K. Crawford, Six Provocations for Big Data, in: A Decade in Internet Time: Symposium on the Dynamics of the Internet and Society, Oxford Internet Institute, 2011. Available: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1926431
[21] R. G. Goss, K. Veeramuthu, Heading Towards Big Data: Building a Better Data Warehouse for More Data, More Speed, and More Users, in: 24th Annual SEMI Advanced Semiconductor Manufacturing Conference (ASMC), pp. 220–225, 2013.
[22] C. F. Jian, Y. Wang, Batch Task Scheduling-oriented Optimization Modelling and Simulation in Cloud Manufacturing, Int. J. Simul. Model. 13(1), 2014, 93–101.
[23] S. K. Garg, R. Buyya, H. J. Siegel, Time and Cost Trade-off Management for Scheduling Parallel Applications on Utility Grids, Futur. Gener. Comput. Syst. 26(8), 2010, 1344–1355.
[24] K. C. Laudon, J. P. Laudon, Essentials of Management Information Systems, Pearson, Upper Saddle River, 2011.
[25] M. A. Waller, S. E. Fawcett, Click Here for a Data Scientist: Big Data, Predictive Analytics, and Theory Development in the Era of a Maker Movement Supply Chain, J. Bus. Logist. 34(4), 2013, 249–252.
[26] M. Christopher, L. J. Ryals, The Supply Chain becomes the Demand Chain, J. Bus. Logist. 35(1), 2014, 29–35.
[27] G. Da Silveira, D. Borenstein, F. S. Fogliatto, Mass Customization: Literature Review and Research Directions, Int. J. Prod. Econ. 72(1), 2001, 1–13.
[28] K. C. Laudon, J. P. Laudon, Essentials of Management Information Systems, Pearson, Upper Saddle River, 2011.
Chapter 6 Intelligent and Smart Manufacturing with AI Solution
Abstract: This chapter discusses current artificial intelligence opportunities that allow manufacturing to become smarter, with a focus on research and development, procurement, and assembly-line processes.
https://doi.org/10.1515/9783110778861-006
6.1 Introduction
Numerous initiatives have been made throughout the history of manufacturing to manage human error, decrease waste, eliminate inefficiencies, and enhance the skills necessary for contemporary manufacturing. Manufacturing has fought a never-ending battle against quality and cost-to-customer obstacles. To achieve these aims, radical changes have been introduced, including digitalisation and more individualised client experiences. Smart manufacturing is a collection of technologies and solutions integrated into a manufacturing ecosystem to optimise manufacturing processes through data generation and/or consumption; it includes artificial intelligence (AI), robotics, cyber security, the industrial Internet of things (IIoT), and blockchain. An IIoT method for process analysis provides a framework for smart manufacturing design. Data analytics can reveal what is needed to make the manufacturing process more efficient, transparent, adaptable, and ultimately profitable. The goal of smart machines and smart systems is to improve processes and automate certain manufacturing systems in order to streamline operations. Because smart manufacturing is all about gathering and utilising data, cyber security is critical to the success of smart factories. AI can provide the most value in manufacturing during planning and production operations. AI tools can analyse and predict consumer behaviour, detect anomalies in real-time production processes, and much more. These tools help manufacturers gain end-to-end visibility of all manufacturing operations in facilities around the world. Machine learning (ML) algorithms enable AI-powered systems to continuously learn, adapt, and improve. Such capabilities are essential for manufacturers to survive and prosper amid the rapid digitisation induced by the pandemic. AI provides various benefits in manufacturing:
– Predictive maintenance can assist in avoiding unplanned downtime.
– Near-shore facilities operated with advanced manufacturing technologies (3D printers, robots) reduce labour costs and remain resilient in the face of supply chain disruptions.
– An optimal, AI-enabled generative design ensures efficiency and waste reduction.
The most important AI use cases in the manufacturing industry are intelligent, self-optimising machines that automate production processes, forecasting of efficiency losses to improve planning, and detection of quality flaws to aid predictive maintenance. In 2002, Professor Michael Grieves proposed the idea of digital twins (DTs) as a fresh method for managing product life cycles. Since then, it has grown in prominence across a variety of industries, including supply chain management, remote equipment diagnostics, and preventive maintenance. DTs can offer assistance at any stage of the product development cycle, from design to post-production monitoring and maintenance. Virtual models have seen the most successful use in building and bridge construction, drilling platforms, and other large objects, as well as industrial settings, because they are best suited to complicated, large-scale projects and multicomponent systems:
– creating and producing complex items, such as new pharmaceuticals, jet turbines for aircraft, or cars;
– urban design or city planning; and
– the energy sector, with its extensive transmission and generation facilities.
Twinning can take place at several scales within these industries, from a single component to the complete product, the production process, and a system of systems.
6.2 Twinning of Components
At the simplest level of twinning, engineers can evaluate the robustness, resilience, energy efficiency, and other properties of the separate components that make up a product. Using simulation software, they can foresee how a component will behave under static or thermal stress, as well as in other real-world situations.
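As a toy example of the thermal-stress prediction mentioned above, the textbook relation for a fully constrained bar, sigma = E * alpha * delta_T, gives a first estimate; the material constants below are nominal illustrative values for a generic steel, not properties of any particular component:

```python
# Component-level sketch: thermal stress in a fully constrained bar,
# sigma = E * alpha * delta_T (textbook relation; values are illustrative).
E_STEEL_PA = 200e9    # Young's modulus, Pa
ALPHA_STEEL = 12e-6   # thermal expansion coefficient, 1/K
YIELD_PA = 250e6      # nominal yield strength, Pa

def thermal_stress(delta_t_k: float) -> float:
    """Stress induced when a constrained component is heated by delta_t_k kelvin."""
    return E_STEEL_PA * ALPHA_STEEL * delta_t_k

def survives(delta_t_k: float) -> bool:
    return thermal_stress(delta_t_k) < YIELD_PA

print(thermal_stress(50.0))  # 1.2e8 Pa
print(survives(50.0))        # True
print(survives(150.0))       # False: 3.6e8 Pa exceeds the yield strength
```

A real component twin replaces this one-line formula with finite-element simulation, but the workflow is the same: feed in operating conditions, predict the response, compare against limits.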
6.3 Twinning of Products or Assets
The product replica shows how different elements interact with one another and suggests ways to improve performance and dependability. Instead of creating numerous prototypes, digital twinning can be used to design new technical solutions. This speeds up development and makes iterations more efficient.
6.4 Twinning of Process and Production
Both physical assets and business processes are suitable for digital twinning. Here, you build virtual representations of the complete manufacturing process. This approach can help answer crucial questions such as: How long will it take to create a particular product? How much will it cost? How should each machine operate? Which processes are automatable? Is it even possible to manufacture a certain item? Visualising the entire production process also helps to prevent expensive downtimes.
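The question "how long will it take to create a particular product?" can be sketched with a minimal serial-line model: the first unit pays the full pipeline fill time, and after that the slowest station (the bottleneck) sets the pace. The station times below are invented for illustration:

```python
# Process-twin sketch: time to produce N units on a serial production line.
# The first unit traverses every station; subsequent units leave the line
# every `bottleneck` minutes in steady state. Times are illustrative.
def time_to_produce(station_minutes, units):
    fill = sum(station_minutes)        # first unit pays the full pipeline time
    bottleneck = max(station_minutes)  # slowest station paces the rest
    return fill + bottleneck * (units - 1)

stations = [4.0, 6.0, 5.0]  # minutes per unit at each of three stations
print(time_to_produce(stations, 1))   # 15.0
print(time_to_produce(stations, 10))  # 15.0 + 6.0 * 9 = 69.0
```

A full process twin would add variability, buffers, and breakdowns via discrete-event simulation, but even this closed form exposes the bottleneck that dominates throughput.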
6.5 Twinning of Systems
The DT of a system shows the complicated linkages and connections between functions and components. A twinned system is commonly referred to as a "system of systems", and it can be as large as a city, an electrical grid, or a tall building. On the other hand, the expense of creating such a copy is not always justified by the expected profit, so system twinning occurs less frequently than the other DT types.
6.6 Examples
The following real-world examples show how DTs are applied at all levels and across diverse industries.
6.6.1 Aviation Industry
General Electric (GE) engines power up to 70% of all aeroplanes in the world, so the corporation bears some responsibility for the safety of millions of passengers. To predict the deterioration of the engine's critical components over time, GE created a DT for its GE90 engine, which powers the long-range Boeing 777. The twin represents not the overall mechanism but the composite fan blades, which are vulnerable to spallation, the peeling off of material under harsh conditions. This is particularly relevant in regions like the Middle East, where engines are also exposed to a further harmful element: sand. The DT assists in determining the best maintenance window before any problem arises.
6.6.2 Automobile Industry
In the automotive industry, DTs are helpful for remote diagnosis of vehicle problems. Every new Tesla vehicle has a DT. A cloud-based virtual copy constantly receives data from the vehicle's embedded sensors about its surroundings and performance. AI systems analyse these feeds to ascertain the health of the vehicle; if issues are found, over-the-air software updates are used to resolve them. As a result, Tesla is able to virtually optimise the vehicle's performance, configure it for different climates, and offer remote diagnostics, which minimises the need for service centre visits.
6.6.3 Tyre Manufacturing Industry
Bridgestone, the largest tyre and rubber producer in the world, frequently uses DTs to better understand how factors such as driving style, road conditions, and speed affect the performance and lifespan of its products. With these data, the business helps fleets choose the options best suited to their individual requirements and offers guidance on how to avoid tyre damage and prolong tyre life. The market leader has also used digital twinning to create and test innovative tyre designs; according to Bridgestone, this approach will cut development time by 50%.
6.6.4 Power Generation
Siemens, the largest industrial manufacturer in Europe and an early adopter of digital twinning, developed a virtual representation of the gas turbine and compressor division it acquired from Rolls-Royce. A DT called ATOM (Agent-based Turbine Operations and Maintenance) depicts supply chain operations for the manufacturing and maintenance of the turbine fleet. To accurately represent the maze of engine settings, performance metrics, maintenance procedures, and logistical stages throughout the turbine life cycle, ATOM consumes real-time data from numerous sources. By running different what-if scenarios and visualising the outcomes, it helps stakeholders make smarter investment decisions.
6.6.5 Supply Chain Simulation
In September 2021, Google unveiled a brand new tool that enables companies to digitally replicate their actual supply chains. The solution targets the retail, healthcare, manufacturing, and automobile industries. It gathers data from several sources and gives users a complete and understandable picture of their logistics. According to Google, the DT enables far quicker data analysis: processes that used to take up to two hours now require just a few minutes. Google released its product a little earlier than IBM, Amazon, and Microsoft, which offer similar supply chain and other DT capabilities.
6.6.6 Urban Planning
Thanks to AutoBEM (Automatic Building Energy Modeling), a DT can be created for any building in the USA. The project, developed at the Department of Energy's Oak Ridge National Laboratory, took five years to complete and was made available in 2021. Using publicly accessible data, such as satellite images, street views, light detection and ranging (lidar), prototype buildings, and conventional building codes, AutoBEM generates energy profiles for buildings. A twin captures a building's height, size, and type, the number of floors and windows, the envelope materials, the roof style, and the heating, ventilation, and air-conditioning systems. The twin's advanced algorithms forecast which energy-saving technologies a building would benefit from, such as contemporary water heaters, smart thermostats, and solar panels.
6.6.7 Artificial Intelligence and Industry 4.0
AI in Industry 4.0 incorporates a number of technologies that give software and computers the ability to perceive, understand, act upon, and learn from human activities. This technology can make the industrial production system more effective. With Industry 4.0, manufacturing continues to grow as technology develops. AI is one of the cutting-edge technologies used to increase productivity, improve product quality, and reduce operational expenses. In the smart factory, which is made up of hyper-connected production processes, multiple machines communicate with one another. To improve quality control, standardisation, and maintenance, manufacturers undergo a digital transformation in which they manage and use their datasets by means of AI and ML. In Industry 4.0, there are many benefits to using AI in production processes.
AI helps us work more effectively by generating accurate outcomes with less manual labour. Industry 4.0 uses digital technologies to increase productivity and intelligence. AI development opens up new platforms for skill growth and allows computing systems to see, hear, and learn. As part of Industry 4.0, networked factories with a deep supply chain, design team, production line, and quality control must be integrated into an intelligent engine that uses AI to deliver useful insights. Manufacturers must develop a system that considers the entire manufacturing process in order to capitalise on the numerous opportunities provided by Industry 4.0, as collaboration across the entire supply chain cycle is required. Today, asset management, supply chain management, and resource management are the most common applications of AI, ML, and the Internet of things (IoT). When these solutions are combined, they can improve stock utilisation, supply chain visibility, and asset tracking accuracy. AI-powered real-time data analysis will help manufacturers make goods more quickly and with higher quality, raising the bar for production and logistics. Deep learning models can identify and avoid errors that humans and conventional systems cannot. Predictive analytics will eventually be able to improve output in numerous ways. Because of AI, businesses will be able to customise their products aggressively without significantly increasing process effort. AI will aid robotics in reaching previously inaccessible areas: transfer learning reduces the learning effort, and AI enhances the robot's vision. Data analysis from factory control systems has always been a time-consuming operation. AI can be utilised to speed up data processing and to send the insights it generates to dashboards that alert operators when something goes wrong. These insights might one day be used to add further instructions, such as parameter adjustments, to the machine's control system. Finally, AI-driven closed-loop manufacturing systems may become self-regulating or self-optimising, maintaining a list of potential root causes so that issues can be fixed.
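The closed-loop, self-regulating behaviour described above can be sketched with the simplest possible controller: a proportional correction that nudges a machine parameter toward its setpoint. The furnace-temperature scenario, the gain, and the values below are illustrative stand-ins for the richer AI-driven loop discussed in the text:

```python
# Closed-loop sketch: a proportional controller nudging a machine parameter
# (e.g. a furnace temperature) toward a setpoint each cycle. The gain and
# values are illustrative; an AI-driven loop would learn the correction.
def control_step(current: float, setpoint: float, gain: float = 0.5) -> float:
    error = setpoint - current
    return current + gain * error  # correction proportional to the error

temp, setpoint = 20.0, 100.0
for _ in range(10):
    temp = control_step(temp, setpoint)
print(round(temp, 2))  # 99.92, converging toward the setpoint
```

The same read-compare-adjust cycle underlies the self-optimising systems above; AI replaces the fixed gain with a learned policy and adds root-cause reasoning when the loop cannot close the error on its own.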
6.6.8 Opportunities of Research in AI in Smart Industry
Bosch is creating scalable AI and analytics solutions in this space under the Manufacturing Analysis Solutions (MAS) brand to identify anomalies and faults in the manufacturing process early and to identify their fundamental causes. These solutions process millions of data points from numerous sources. Advanced deep learning techniques can be utilised to enhance optical inspection procedures in automated systems.
In assembly robots that automate the most complex assembly processes, various perception and control algorithms need to be developed to allow skilled, efficient handling of parts and tools. Process improvement needs to focus on creating algorithms that automatically adapt machinery to the corresponding production and processing requirements. With the Indian government's emphasis on "Make in India" and "Industry 4.0", start-ups, software firms, and manufacturing facilities have been encouraged to integrate technology, including AI, into their daily operations in order to improve accuracy, productivity, and efficiency.
6.6.9 AI in the Electronics Industry
The electronics sector in India includes companies that produce consumer electronics such as computers, televisions, and circuit boards. Its segments include telecommunications equipment, electronic components, industrial electronics, and consumer electronics, as well as companies that produce and market electrical components and equipment to customers. Indian factories now employ AI-powered equipment to create electrical goods and appliances. Through IoT and sensing mechanisms, intelligent solutions are helping the industry reduce manual testing procedures. Additionally, AI is being incorporated into finished products such as user interfaces, robotic appliances, and virtual assistants. Research focused on AI-powered chatbots, robotic automation, and language processing supplies the solutions required by electronics companies in India. To name a few companies in this sector, ASIMoV's robotic manipulator, Gridbots, and Helpforsure provide AI-based solutions.
6.6.10 Agriculture and Artificial Intelligence
Efforts in India have centred on enabling data-driven agriculture through technologies such as image recognition, drones, ML, sensors, 3D laser scanning, driverless tractors, and chatbots, used for monitoring, detecting abnormalities or defects, carrying out tasks such as chemical spraying, and predicting and forecasting growth and prices. Through AI-driven analytics, agriculture is using AI to boost yields. Although AI in agriculture has the potential to enable more productive farming and higher yields, difficulties have been found, including access to reliable data, owing to a lack of power and connectivity in fields, and the technical competence needed to apply the technology.
Researchers can focus on developing solutions for alerting farmers to pest attacks in advance, real-time data analytics for the agri-supply chain, rainfall prediction, and weather information for growing crops suited to local climatic conditions, improving crop productivity so that farmers earn maximum profits. The agricultural start-up Aibono uses crop science, the IoT, and AI to help farmers produce more; it combines AI, shared services and resources, and data science to help farmers make decisions that reduce risk and increase output. Gobasco, for instance, is a business that seeks to increase the effectiveness of the current agri-supply chain by employing data streams and real-time data analytics from sources across the nation, made possible by AI-optimised automated pipelines. SatSure effectively determines crop yield risk by combining geographic, economic, and meteorological data. The company uses a web-based platform that combines big data, ML, cloud computing, and IoT to provide accurate decision points for traders, banks, insurance companies, and the government. Microsoft, in collaboration with ICRISAT (the International Crops Research Institute for the Semi-Arid Tropics), has developed an ML-based sowing app that uses Power BI and Cortana, and Microsoft and United Phosphorus collaborated to create a Pest Risk Prediction API. AI excels across research fields: manufacturing processes in the automotive sector, autonomous electric vehicles, state departments of transport working on road safety, intelligent platforms and virtual assistants, healthcare, data analysis, and e-commerce business solutions that use conversational marketing chatbots with natural language processing to identify client wants and preferences and to aid regulatory compliance, fraud detection, and customer behavioural analysis.
To promote, comprehend, and facilitate the development and application of AI, the central and state governments of India have launched a number of projects. Examples include centres of excellence, task forces, strategic partnerships, Digital India and Make in India, cooperative initiatives in digital agriculture between the government of Karnataka and Microsoft, and the Andhra Pradesh government's policies on cloud hubs and on AI, all of which boost the employability of young people.
6.7 Conclusion
Smart manufacturing in Industry 4.0, driven by advances in technology, is in high demand in the market. AI tools can analyse and predict consumer behaviour, detect anomalies in real-time production processes, and much more. These tools help manufacturers gain end-to-end visibility of all manufacturing operations in facilities around the world. ML algorithms enable AI-powered systems to continuously learn, adapt, and improve. AI excels in every field of research, including the manufacturing processes of the automotive sector, autonomous electric vehicles, state departments of transport working on road safety, intelligent platforms and virtual assistants, healthcare, data analysis, and e-commerce business solutions that use conversational marketing chatbots to identify client wants and preferences with natural language processing, and it aids regulatory compliance, fraud detection, and customer behavioural analysis. To promote, comprehend, and ease the development and application of AI, the central and state governments of India have launched a number of projects.
Chapter 7 Information and Data Fusion for Decision-Making
Abstract: This chapter presents an overview of the literature on data fusion, covering its different methodologies and discussing the challenges and opportunities of each method.
7.1 Introduction
In order to reach conclusions that are more accurate than those obtainable from a single, independent sensor, data fusion techniques combine related information and data from numerous sensors. Although the concept of data fusion is not new, real-time data fusion has become increasingly feasible as a result of adding more sensors, improving processing methods, and upgrading processing hardware. Data fusion systems are currently most often used for target tracking, automatic target recognition, and a few applications of automated reasoning. Data fusion technology has developed rapidly from a hastily assembled group of linked ideas into a full engineering discipline with standardised terminology, known system design principles, and libraries of reliable mathematical procedures. Multisensor data fusion has many applications. Military systems include automated threat recognition technologies such as identification-friend-foe-neutral systems, remote sensing, autonomous vehicle guidance, automated target recognition, and battlefield surveillance. Outside the military, applications include robots, medical equipment, and condition-based maintenance of complex machinery. The objective of the data fusion challenge is a consistent, thorough estimate and projection of the situation in some important area of the world. This viewpoint defines data fusion as the utilisation of all accessible data sources to resolve all pertinent challenges of state prediction and evaluation, where pertinence is judged by utility in formulating strategies. As a result, the subject of data fusion includes a number of interrelated problems, such as estimating and predicting the states of entities both internal and external to the functioning system, as well as their interconnections.
The task of determining the actual condition of the world also includes evaluating the models of all of these internal and external entities' traits and behaviour. Because DT-related data is derived from a variety of sources (e.g. the physical entity, the virtual model, and services), it exhibits noise, consistency issues, and disagreement. Sensor failure, environmental change, and human interaction can all raise the information entropy of data collected from a physical thing (a higher value indicates greater data uncertainty). Data simulated by virtual models would provide less reliable
https://doi.org/10.1515/9783110778861-007
results due to deviations from physical reality brought on by ineffective models. Additionally, neither the real-world nor the simulated data alone can be used to create a global viewpoint. Data fusion, which combines data from diverse sources, is therefore necessary. Data fusion is the analysis of multiple interconnected datasets that provide complementary views of the same phenomenon. Correlating and fusing data from multiple sources produces more accurate conclusions than analysing a single dataset. Data fusion is a comprehensive idea with clear benefits as well as several difficulties that must be carefully resolved. The inaugural Joint Directors of Laboratories (JDL) Data Fusion Lexicon provided the following definition of data fusion [1]: a method that involves combining, correlating, and associating data and information from single and multiple sources in order to produce accurate position and identity estimates as well as thorough and timely evaluations of events, threats, and their relevance. The process is characterised by continuous refinement of its estimates and assessments, as well as by an evaluation of whether it needs to be modified or expanded to produce better results. Data fusion is commonly defined as "techniques for combining data from multiple sensors and related information from linked databases in order to obtain more accurate and specific conclusions or inferences than a single sensor could achieve", as stated by Hall and Llinas [2].
7.2 Data Source and Sensor Fusion
Radar tracking systems, self-driving cars, and the Internet of things all require some form of sensor fusion. When two or more data sources are combined in an autonomous system, it is called data fusion; this process helps us understand the system better. Fused sensor data should be of higher quality, that is, more consistent, accurate, and dependable, than data from a single source. We can assume that, for the majority of applications, data comes from sensors or models, and that the system's comprehension is provided by the data such instruments collect, for example, the rate of acceleration or the separation between an object and a location, as shown in Figure 7.1. A mathematical model can also act as a data source because its creators are familiar with the real situation and can incorporate that knowledge into the fusion process to improve sensor measurement. Think about the requirements that autonomous systems must meet in order to interact with their surroundings; this may improve your understanding of the situation. The four main areas of these skills are sense, perceive, plan, and act, as shown in Figure 7.2.
Figure 7.1: Types of data source.
Figure 7.2: Capabilities of system.
1. Sense: This refers to the use of sensors for precise environmental measurement. Data is gathered from the system and the environment outside it. Many different sensors can be used in a self-driving automobile, including radar, lidar, and visible cameras, but simply utilising sensors to collect data is insufficient because the system also has to comprehend the data.
2. Perceive: Here the system transforms the sensor data into information that the autonomous system can understand and use. This step's purpose is to appropriately interpret data that has been detected. For example, Figure 7.3 illustrates how an automobile would understand a blob of pixels representing a road with lane markings and an object off to the side that could be a crossing pedestrian or a fixed mailbox. It is crucial to have this level of comprehension; the system needs it in order to plan what to do next.
3. Plan: The system decides what it wants to do and then finds a route to get there.
4. Act: The system determines the optimum course of action to follow that path. The controller and control system perform this final stage.
Figure 7.3: Image from vehicle camera sensor.
The Perceive step has two important responsibilities:
Self-awareness: Self-awareness, sometimes referred to as localisation and positioning, answers questions about the system's own whereabouts, activities, and state.
Situational awareness: Detection and tracking, such as locating and following other items in the surroundings.
Because it contributes to both sensing and perceiving, sensor fusion functions as a kind of bridge between the two. Various sensor data are combined with extra information from mathematical models to gain a deeper global knowledge that the system can use to plan and act; this process is known as sensor fusion.
7.3 Job Localisation and Positioning of System
Through the use of sensor fusion in four different ways, we can more precisely localise and position our own system as well as identify and follow other objects.
i. It may improve data quality
We always favour working with clear, comprehensive data that is devoid of error, uncertainty, and noise. For example, consider a single accelerometer that is mounted on a table and is only used to measure the acceleration caused by gravity. The output would be a constant 9.81 m/s² if this sensor were ideal. However, the measurement itself will be noisy. We cannot eliminate this noise through calibration because it depends on the sensor's quality
and is unpredictable. However, by adding a second accelerometer and averaging the two readings, we can reduce the signal's overall noise. When the noise between the sensors is uncorrelated, the total noise is reduced by a factor of the square root of the number of sensors; four identical sensors combined will have half the noise of a single sensor. Averaging is the function in this straightforward fusion algorithm. Combining readings from two or more distinct types of sensors also helps us reduce noise, which is advantageous for dealing with sources of correlated noise. For example, imagine that our goal is to discover which way a phone is oriented with respect to north. The magnetometer on the phone can be used to determine the angle from magnetic north. The need to reduce the noise in this sensor data may lead us to consider installing a second magnetometer. However, some of the noise comes from the varying magnetic fields that the phone's circuits produce. This correlated noise source affects each magnetometer alike and would not be reduced by averaging the sensors. There are two approaches to resolving this issue.
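The 1/√N claim above can be checked numerically. A minimal sketch with synthetic accelerometer readings (the noise level and sample count are illustrative assumptions, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_ACCEL = 9.81        # gravity, m/s^2
NOISE_STD = 0.05         # assumed per-sensor noise, uncorrelated between sensors
N_SAMPLES = 100_000

def fused_noise_std(n_sensors):
    """Std of the averaged reading across n identical, independent sensors."""
    readings = TRUE_ACCEL + rng.normal(0.0, NOISE_STD, size=(N_SAMPLES, n_sensors))
    return readings.mean(axis=1).std()

one = fused_noise_std(1)
four = fused_noise_std(4)
ratio = one / four  # with uncorrelated noise, four sensors roughly halve the noise
```

Note that this improvement evaporates if the noise is correlated across sensors, which is exactly the magnetometer problem described above.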
7.3.1 Magnetometer and LPF The first approach is to move the sensors away from the deteriorating magnetic fields, but doing so with a phone is difficult. Another option is to run the data via a low-pass filter, but doing so would make it less sensitive and cause the measurement to lag, as shown in Figure 7.4.
Figure 7.4: Magnetometer and LPF.
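The trade-off just mentioned (a low-pass filter smooths the noise but makes the measurement lag) can be seen with a simple first-order filter; the step signal and coefficient below are illustrative:

```python
def low_pass(samples, alpha=0.1):
    """First-order IIR low-pass: smaller alpha = smoother output but more lag."""
    out, y = [], samples[0]
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

# A sudden 90-degree change in heading: the filtered value trails the raw reading.
step = [0.0] * 5 + [90.0] * 20
filtered = low_pass(step, alpha=0.1)
```

Immediately after the step the filter reports only a fraction of the true change, and even after many samples it has not fully caught up, which is the lag shown in Figure 7.4.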
7.3.2 Magnetometer and Gyro Another option is to pair a magnetometer with a gyro, a sensor of angular rate. Although the gyro will likewise be noisy, by employing two different types of sensors, we reduce the likelihood that the noise is connected and enable the sensors to be calibrated against one another as shown in Figure 7.5. The basic idea is that the gyro may be used to validate whether a change or variation in the magnetic field was caused by the phone moving physically or was just noise, as detected by the magnetometer.
76
Chapter 7 Information and Data Fusion for Decision-Making
Figure 7.5: Magnetometer and gyro.
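One common way to realise the magnetometer-gyro pairing is a complementary filter: the gyro (low noise over short intervals) is trusted for fast changes, while the magnetometer corrects long-term drift. A minimal sketch; the sample readings and gain are made up for illustration:

```python
def complementary_filter(mag_heading, gyro_rate, dt, heading, gain=0.98):
    """Blend integrated gyro rate (short-term) with magnetometer (long-term)."""
    gyro_heading = heading + gyro_rate * dt   # propagate heading with the gyro
    return gain * gyro_heading + (1 - gain) * mag_heading

heading = 0.0
# Phone held still: gyro reads ~0 deg/s, magnetometer is noisy around 10 degrees.
noisy_mag = [10.3, 9.6, 10.1, 9.8, 10.4, 9.9, 10.2, 9.7] * 50
for mag in noisy_mag:
    heading = complementary_filter(mag, gyro_rate=0.0, dt=0.01, heading=heading)
```

Because the gyro reports no rotation, a magnetometer spike barely moves the estimate, yet the small magnetometer weight slowly pulls the heading toward the true value.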
A Kalman filter is likely one of the more often used fusion algorithms for accomplishing this merging. This is particularly interesting because a mathematical model of the system is already included in Kalman filters: you get the advantage of combining the data measured by a sensor with an understanding of the physical environment.
ii. It can increase reliability
If two similar sensors are fused together, as with the accelerometers above, we have a backup in case one sensor fails. If one sensor fails in this circumstance, we inevitably lose quality, but at least we do not lose the entire measurement. We may also include a third sensor, so that the fusion algorithm can discard data from any single sensor whose measurements differ from the other two. For example, an aircraft measures its air speed using three pitot tubes. It is still possible to determine the air speed using the other two sensors if one fails or reads inaccurately. Therefore, adding additional sensors is a good strategy to boost dependability. However, we must be cautious of single failure modes that could simultaneously damage all the sensors. All three pitot tubes could freeze up in an aeroplane flying in freezing rain, and no amount of sensor fusion or voting will salvage the measurement. Again, in this case, combining sensors that measure different quantities can be beneficial. The aircraft can be configured to estimate airspeed using the global positioning system and atmospheric wind models in addition to the pitot tube airspeed data. When the primary sensor suite is not available, airspeed can still be calculated. Once more, quality can be a concern, but the system can still determine the airspeed, which is essential for the aircraft's safety.
7.3 Job Localisation and Positioning of System
77
Now, losing a sensor does not always imply that the sensor malfunctioned; it could also mean that the measured quantity momentarily disappears.
iii. It is able to estimate unmeasurable states
The distinction between unmeasured states and unmeasurable states must be made clear: an unmeasurable state is one for which the system lacks a sensor that can directly measure the quantity of interest. The distance to an object within the field of view of a single visible camera, for instance, cannot be measured: a large, far-off object may cover the same number of pixels as a small, close one, so one of these sensors alone cannot determine the distance. Nevertheless, we can obtain three-dimensional data by fusing two optical sensors. The scene is examined from two distinct angles, and the fusion approach determines the separations between corresponding features in the two images.
iv. It can increase coverage area
Think about a vehicle's short-range ultrasonic parking-aid sensors. These sensors measure the separation between the vehicle and nearby objects, such as other parked cars and the curb. The range and field of view of each individual sensor may only be a few feet. As a result, extra sensors are required if the vehicle is to have complete coverage on all four sides. The readings from these additional sensors are then combined to create a broader overall field of view. Because it is typically useful to know which sensor is registering an object in order to determine its location relative to the car, not all of these measurements are averaged or integrated mathematically; nevertheless, this is still a form of sensor fusion because it combines multiple sensors into one unified system.
Using several data sources is crucial to this notion because it allows us to improve measurement quality, reliability, and coverage while also enabling us to estimate states that are difficult to measure directly. Due to the redundant nature of the observations, multisensor data fusion offers a more precise approximation of physical phenomena than single-source data. For instance, a radar can accurately determine an aircraft's range, but its angular direction only to a limited extent [5]. An infrared imaging sensor, on the other hand, cannot measure range but can determine angles accurately. When both are used together, a position can be determined more precisely than with either alone. The methodologies utilised in data fusion are derived from a variety of other fields that have been studied and developed over a longer period, including digital signal processing, statistical estimation, control theory, artificial intelligence, and conventional numerical methods. In the past, military applications were the main driver for the creation of data fusion techniques. Recent years
have seen the beginning of a two-way transfer of technology, which was sparked by the use of similar methods in non-military contexts.
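The radar/infrared pairing described above can be made concrete: take the range from the radar (accurate) and the bearing from the IR imager (accurate) and combine them into a single Cartesian position. The sensor values below are illustrative:

```python
import math

def fuse_range_bearing(radar_range_m, ir_bearing_deg):
    """Combine an accurate radar range with an accurate IR bearing into (x, y)."""
    theta = math.radians(ir_bearing_deg)
    return (radar_range_m * math.cos(theta), radar_range_m * math.sin(theta))

# Radar reports 1000 m range; the IR imager reports a 30-degree bearing.
x, y = fuse_range_bearing(1000.0, 30.0)
```

Neither sensor alone can produce this position: the radar constrains the target to a circle, the IR imager to a ray, and fusing the two picks out the intersection.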
7.4 Different Kinds of Data Fusion Techniques
Strategies for integrating data, knowledge, and information can be helpful for any activity that calls for parameter estimation from various sources. Although the phrases "information fusion" and "data fusion" are sometimes used interchangeably, information fusion refers to data that has already been processed, whereas data fusion refers to "raw data" acquired directly from sensors. Depending on the type of application, data fusion methodologies and strategies can be categorised in different ways [6]:
1. Durrant-Whyte [7] focuses on the relationship that exists between the input data sources and classifies it as complementary, redundant, or cooperative data.
2. Dasarathy [8] proposed a classification based on the type and nature of the input/output data.
3. Luo et al. [9] divided data fusion by degree of abstraction: raw measurements, signals, and features or decisions.
4. The JDL data fusion classification [10] specifies five data fusion levels, from level 0 (the lowest) to level 4 (the highest).
5. Depending on the architecture, the classification can be (a) centralised, (b) decentralised, or (c) distributed.
7.4.1 Durrant-Whyte Classification
Considering the connections among various data sources, Durrant-Whyte [7] provided the following classification criteria for data fusion:
7.4.1.1 Complementary Data
Here, the input data represent different aspects of the scene and can therefore be used to provide more comprehensive global data. For instance, in visual sensor networks, two cameras with different fields of view offer a significant amount of data on the same target.
7.4.1.2 Redundant Data
When multiple input sources offer information about the same target, such as data from overlapping areas in visual sensor networks, the data is regarded as redundant and can be combined to increase confidence.
7.4.1.3 Cooperative Data
Cooperative fusion is the process by which information is merged to produce new information, usually more complex than the original. Multimodal data fusion, such as combining voice and video, is regarded as cooperative.
7.4.2 Dasarathy's Taxonomy
Depending on the nature of the data being processed and the output generated, Dasarathy's data fusion model is divided into five main groups, as illustrated in Figure 7.6.
7.4.2.1 Data Input–Data Output (DAI–DAO)
This is the most fundamental type of data fusion. These techniques use raw data for both input and output, and their results are frequently more precise or dependable. Data fusion at this level starts as soon as sensor data is collected; signal and image processing algorithms are used here.
7.4.2.2 Data Input–Feature Output (DAI–FEO)
The data fusion process at this level uses raw source data to extract characteristics or qualities that characterise an entity in the environment.
7.4.2.3 Feature Input–Feature Output (FEI–FEO)
This strategy is also known as "feature fusion", "symbolic fusion", or "information fusion". At this level, features form both the input and the output of the data fusion process, which works on a set of features with the purpose of improving or refining them, or acquiring new ones.
7.4.2.4 Feature Input–Decision Output (FEI–DEO)
A collection of features is sent to this level as input, and it produces a set of decisions. The vast majority of classification systems that base their conclusions on sensor inputs fall into this category.
7.4.2.5 Decision Input–Decision Output (DEI–DEO)
This classification method is also called decision fusion. To produce better or novel judgements, it integrates input decisions. The most significant contribution of Dasarathy's categorisation is that it provides a framework for classifying different approaches by defining the input/output abstraction level.
Figure 7.6: Dasarathy’s classification.
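A DEI-DEO scheme can be as simple as majority voting over per-sensor classifications; a minimal sketch (the class labels are hypothetical):

```python
from collections import Counter

def decision_fusion(decisions):
    """DEI-DEO: fuse individual classifier decisions by majority vote."""
    votes = Counter(decisions)
    label, count = votes.most_common(1)[0]
    return label, count / len(decisions)

# Three sensors each classify the same target independently.
label, confidence = decision_fusion(["vehicle", "vehicle", "pedestrian"])
```

The fused decision outvotes the outlier while the returned fraction preserves how contested the vote was.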
7.4.3 Abstraction Level Classification
According to Luo et al., there are four abstraction levels:
i. Signal level: Information gathered directly from sensors.
ii. Pixel level: This works at the level of the image and can enhance image processing operations.
iii. Feature level: This utilises features that have been extracted from signals or images (e.g. shape or velocity).
iv. Symbol or decision level: Information at this stage is represented by symbols; it is also called the "decision level", which describes the significance of this stage.
Metrics (measurements), attributes (characteristics), and conclusions (decisions) are the three main focuses of information fusion. Data fusion can be broken down into early fusion, late fusion, and hybrid fusion categories, as shown in Table 7.1 [11].
7.4.3.1 Early Fusion
The term "early fusion" describes the process of integrating various properties of input data to generate a new source of raw data that can be used as input to improve subsequent analysis. This new input should provide more useful information than the originals, which were compressed using compression techniques. However, as the number of features increases (particularly from different modalities), management becomes more difficult.

Table 7.1: Abstraction level classification.

| Fusion level characteristics | Early fusion | Late fusion | Hybrid fusion |
|---|---|---|---|
| Level of fusion | Fusion at data level | Fusion at feature level | Fusion at feature and decision level |
| Input data type | Raw data | Closely related data | Loosely related data |
| Noise/failures | Data is highly susceptible to noise or failures | Data is less sensitive to noise or failures | Data is highly resistant to noise or failures |
| Usage | It is not widely used | It is used for particular modes | Most widely used fusion |
| Drawbacks | In the absence of processing, data may be concatenated rather than merged, resulting in redundant data | Due to the curse of dimensionality, it requires a large training dataset | Using a lot of pre-processed data can lead to mutual disambiguation |
| Examples | Bringing two video streams, text, and images together | Speech recognition based on voice and lips | Tracking and recognition of human faces and gaits |
7.4.3.2 Late Fusion
In late fusion, a select group of important features is gathered first. At this level of fusion, the features of various representations can be integrated into a single representation format. The late fusion process improves flexibility and scalability.
7.4.3.3 Hybrid Fusion
Hybrid fusion incorporates the benefits of both early and late fusion. As a result, this hybrid approach is often chosen to address problems in multimedia analysis.
7.4.4 JDL Taxonomy
This is the most widely accepted theoretical framework in the field of data fusion. It was initially made public by the JDL and the US Department of Defense. According to these organisations, data fusion comprises five distinct processing levels, a database associated with each level, and an information bus that links all of the elements, as shown in Figure 7.7. Sources provide the input data for the data fusion architecture; possible sources include sensors, a priori information (such as references or geographic data), databases, and human observations. Users send and receive data to and from the system through the human-computer interface (HCI), which handles queries, commands, information about outcomes, and warnings. The database management system, which stores both the input data and the fused results, is also part of the data fusion framework; it is crucial because of the vast volume of highly varied information stored. The JDL framework comprises five levels of data processing: source pre-processing at level 0, object refinement at level 1, situation assessment at level 2, impact assessment at level 3, and process refinement at level 4.
7.4.4.1 Source Pre-processing – Level 0
Source pre-processing, which includes signal- and pixel-level fusion, is the first phase of data fusion. For text sources, an information extraction mechanism operates at this level. This level simplifies the data while preserving the facts crucial to higher-level procedures, and it relieves the burden on the data fusion process by assigning data to the pertinent processes. This helps to concentrate on the knowledge most relevant to the current situation.
7.4.4.2 Object Refinement – Level 1
Object refinement works on data that has already been pre-processed.
Common techniques at this level include identity fusion, spatiotemporal alignment, association, correlation, clustering or grouping techniques, state estimation, false-positive reduction, and the combination of image information. The results of this step include object tracking and object discrimination (classification and identification; object state and orientation). This step is in charge of creating consistent data structures from the input data.
7.4.4.3 Situation Assessment – Level 2
Compared to level 1, this level is more focused on inference. The aim of situation assessment is to identify likely situations based on observed events and data. The links between the objects are established, and relationships are appraised (e.g. proximity and communication) to assess the importance of entities in a given environment. The objectives of this level are making high-level inferences and recognising noteworthy actions and occurrences (patterns in general); high-level conclusions are the end result.
7.4.4.4 Impact Assessment – Level 3
In order to have a proper perspective, this level focuses on assessing the effect of the activities observed in level 2. To identify prospective risks, vulnerabilities, and operational possibilities, the existing situation is evaluated and a forecast is produced. This level involves (1) an assessment of the risk or hazard and (2) a forecast of the logical outcome.
7.4.4.5 Process Refinement – Level 4
This level manages resources and sensors across the processes from level 0 to level 3. The objective is to manage resources effectively while taking into account task priorities, scheduling, and resource control.
Figure 7.7: JDL data fusion framework.
7.4.5 Architecture Level Classification
Different architectural types are classified depending on where the data fusion procedure takes place when designing a data fusion system.
7.4.5.1 Centralised Architecture
In a centralised design, the fusion node is a central processor that receives information from all input nodes [3]. As a result, the raw measurements provided by the sources are used by a centralised processor to perform the fusion procedures. Sources in this approach solely collect observational measurements, which are then sent to the centralised processor for fusion. Assuming correct data association and alignment are performed and the time required to transport the data is reasonable, the centralised system is the optimal solution, but for real-world systems these presumptions are rarely accurate.
Disadvantages: The enormous bandwidth needed to transport raw data over the network is a drawback of the centralised strategy. This issue becomes a bottleneck when visual sensor networks use this type of architecture to combine data. In the centralised scheme, as opposed to other schemes, information transfer time delays [4] vary and have a stronger impact on the outcomes.
7.4.5.2 Decentralised Architecture
A decentralised architecture consists of a network of processing-independent nodes without a central hub for data fusion. Each node thus blends local information with data obtained from its peers: when data fusion happens independently, each node takes into account both its own local knowledge and the information it receives from its peers. Decentralised data fusion techniques generally exchange information via Fisher and Shannon measures rather than the object's state.
Disadvantages: The main flaw in this architecture is the communication cost, which is O(n²) per communication step, where n is the number of nodes.
This cost accounts for the extreme case in which every node interacts with every other node; as the number of nodes grows, this architecture may therefore experience scalability issues. Computational and connectivity constraints make decentralised data fusion systems harder to construct.
7.4.5.3 Distributed Architecture
Each source node's measurements are processed independently before being sent to the fusion node, which combines all of the sources' data. In other words, the source node performs data association and state estimation before sending data to the fusion node.
Each node then determines the object's state based on its own local perspective, and this information is fed into the fusion process to produce a unified global perspective. This architecture offers many options and permutations, from a single fusion node to numerous intermediary fusion nodes. In a distributed architecture, the obtained measurements are pre-processed, resulting in a feature vector (after the fusing of features). However, there is no single ideal design that works in every situation; the best architecture should be chosen considering the specific requirements, demand, pre-existing networks, availability of data, processing power of the nodes, and structure of the data fusion system.
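In a distributed architecture, one standard way for the fusion node to combine the local state estimates is inverse-variance weighting: estimates reported with lower uncertainty receive more weight. A sketch under the assumption of independent local estimates (the example numbers are made up):

```python
def fuse_estimates(estimates):
    """Fuse (value, variance) pairs reported by independent local nodes.

    Inverse-variance weighting: the fused variance is never larger than
    the smallest input variance, so adding nodes never hurts the estimate.
    """
    weights = [1.0 / var for _, var in estimates]
    fused_value = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused_value, fused_var

# Two nodes track the same object's position; node B reports higher confidence.
value, var = fuse_estimates([(10.0, 4.0), (12.0, 1.0)])
```

The fused value lands closer to the more confident node's report, and the fused variance is smaller than either input's.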
7.5 Conclusion
The best data fusion or sensor fusion approach is selected based on the nature of the problem and the underlying assumptions of each strategy. The data fusion field is growing quickly. Microprocessors, enhanced sensors, and novel methodologies have all advanced rapidly in recent years, opening new possibilities for merging data from several sensors to obtain better inferences. Implementing such systems requires a fundamental knowledge of the terminology, data fusion processing methods, and categorisation.
References
[1] E. L. Waltz, Data Fusion for C3I: A Tutorial, in: Command, Control, Communications Intelligence (C3I) Handbook. EW Communications: Palo Alto, CA, 1986, pp. 217–226.
[2] D. L. Hall, J. Llinas, An Introduction to Multisensor Data Fusion, Proc. IEEE 85(1), 1997, 6–23.
[3] https://www.sciencedirect.com/topics/computer-science/data-fusion
[4] https://www.youtube.com/watch?v=6qV3YjFppuc
[5] H. F. Durrant-Whyte, Sensor Models and Multisensor Integration, Int. J. Rob. Res. 7(6), Dec. 1988, 97–113.
[6] J. O. Cappellari, C. E. Velez, A. J. Fuchs (Eds.), Mathematical Theory of the Goddard Trajectory Determination System; Goddard Trajectory Determination System User's Guide, Tech. Rep. CSC/SD-7S/6005, Computer Sciences Corporation, Greenbelt, Maryland: Goddard Space Flight Center, Apr. 1975.
[7] H. F. Durrant-Whyte, Sensor Models and Multisensor Integration, Int. J. Rob. Res. 7(6), Dec. 1988, 97–113.
[8] B. V. Dasarathy, Sensor Fusion Potential Exploitation – Innovative Architectures and Illustrative Applications, Proc. IEEE 85(1), 1997, 24–38.
[9] R. C. Luo, C.-C. Yih, K. L. Su, Multisensor Fusion and Integration: Approaches, Applications, and Future Research Directions, IEEE Sens. J. 2(2), 2002, 107–119.
[10] J. Llinas, C. Bowman, G. Rogova, A. Steinberg, E. Waltz, F. White, Revisiting the JDL Data Fusion Model II, Technical Report, DTIC Document, 2004.
[11] V. S. Krushnasamy, P. Rashinkar, An Overview of Data Fusion Techniques, in: International Conference on Innovative Mechanisms for Industry Applications (ICIMIA 2017), IEEE, 2017.
Chapter 8 Digital Twin Use Cases and Industries

Abstract: The digital twin (DT) is among the key techniques pushing digitalisation in a variety of sectors. A DT is the digital reproduction or model of an actual item (the physical twin). What distinguishes a DT from simulation and other digitised or CAD models is the automated, bidirectional, real-time data transmission between the digital and physical twins. The advantages of applying a DT in any industry include reduced operating expenses and time, higher productivity, better decision-making, enhanced predictive/preventive maintenance, and so forth. As a consequence, its adoption is projected to increase. In the coming decades, as Industry 4.0 products and systems are introduced, they will develop quickly, relying on incremental data collection and storage. Properly connecting that data to DTs may open up numerous new opportunities. This study investigates several industrial areas where DT implementation has opened up such opportunities and how they are propelling industry ahead. The chapter discusses DT applications in 13 sectors, including industry, agriculture, education, infrastructure, medicine, and retail, as well as industrial use cases in these areas.

Keywords: Industry 4.0, intelligent production, adaptive control, condition monitoring
https://doi.org/10.1515/9783110778861-008

8.1 Introduction

Manufacturing and production processes are at the forefront of the digital transformation towards Industry 4.0 [1]. To enable this digital transformation, the real and digital production worlds must be blended, with all parts interconnected, including sensors, machines, products, systems, processes, and people. Cyber-physical systems, digital infrastructures, industrial Internet of things (IoT) connectivity, and specialised software, among other things, enable continuous monitoring of the physical assets and processes involved in manufacturing. This enables effective event detection, simulation, and optimisation of production assets and processes, allowing key stakeholders to make informed decisions. Digital twins (DTs), defined as the virtual transference of the physical world to the digital realm, are a key enabler of this digital transformation. DTs can add a wide range of capabilities to modern manufacturing systems. On this basis, DTs quickly drew the attention of industry and academia, with the International Data Corporation [2] predicting that by 2020, 30% of Global 2000 companies would use data derived from DTs to improve product innovation success rates and organisational productivity, achieving gains of up to 25%. The rapid adoption of DTs resulted in a market size of $2.26 billion in 2017 [3], rising to
$3.8 billion in 2019, with a projected compound annual growth rate of 45.4% towards $35.8 billion by 2025 [4]. These projections are also supported by the estimated 20.8 billion connected sensors and endpoints in the IoT market by 2020 [5], a key enabler of DTs. The report also identifies DTs as a prominent way to enable long-term savings of billions of dollars in maintenance, repair, and operation, as well as improved IoT asset performance, asset management, and overall operational efficiency. DTs have found a place in Industry 4.0 and smart manufacturing primarily through cases that provide information continuity and data management throughout the product life cycle, as well as monitoring of physical twin assets and even optimisation of system behaviour in product design, production line design, shop floors, and production process optimisation. DTs are most commonly found in the discrete manufacturing sector, where items (and/or parts) are designed and produced individually or in lots. Here, DTs are used at several stages, from the initial design of the end product to its optimised production. Despite being envisaged decades ago, the term "digital twin" has only lately gained favour in academic and industry circles. The notion of the DT was established in 2002 and is defined as "a set of virtual information constructs that fully describes a potential or actual physical manufactured product from the micro atomic level to the macro geometrical level". At its best, any information that might be gathered from inspecting a physically manufactured object may be retrieved from its digital twin [1]. A DT, as defined, is made up of three parts (Figure 8.1): (1) Physical twin: an actual (living or non-living) thing such as a part, product, machine, process, organisation, or human. (2) Digital twin: a digital version of the physical twin that can replicate its state and behaviour in real time.
(3) The connection: the bilateral movement of information between the two, functioning automatically and in real time. Though academics and industry have defined the DT in many ways, the one thing almost all agree on is its advantages.
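As a rough illustration of these three parts, the following toy loop mirrors a physical asset into a digital state and closes the loop with an automatic command; the asset, the threshold, and the "cool" command are invented for illustration and are not taken from the chapter.

```python
class PhysicalTwin:
    """Stands in for a real asset emitting sensor readings (hypothetical)."""
    def __init__(self):
        self.temperature = 20.0

    def read_sensors(self):
        return {"temperature": self.temperature}

    def apply_command(self, command):
        # e.g. a cooling command lowers the asset's temperature
        if command == "cool":
            self.temperature -= 5.0


class DigitalTwin:
    """Virtual replica updated from, and acting back on, its physical twin."""
    def __init__(self, physical):
        self.physical = physical
        self.state = {}

    def sync(self):
        # physical -> digital: mirror the latest measurements
        self.state = self.physical.read_sensors()
        # digital -> physical: automatic corrective action
        if self.state["temperature"] > 80.0:
            self.physical.apply_command("cool")


twin = DigitalTwin(PhysicalTwin())
twin.physical.temperature = 90.0   # a fault develops on the real asset
twin.sync()                        # the DT mirrors it and commands cooling
```

The key point is the bidirectional, automatic flow: `sync()` both pulls state from the physical twin and pushes a command back, with no human in the loop.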
Figure 8.1: A digital twin's block diagram. [The figure shows the bodily (physical) twin and its virtual prototype or model linked by immediate, bidirectional information flows.]
It lowers operating costs and time, boosts existing system productivity, aids decision-making, enhances service intervals and operations, offers remote access, creates safer workplace conditions, and benefits the environment [2]. Because of the various benefits and uses of DTs, their development in several areas has surged in recent years. According to Grand View Research, the worldwide DT market was valued at US $5.05 billion in 2019 and is predicted to increase to US $86.10 billion by 2029, at a compound annual growth rate of 43.7% from 2020 to 2029 [3]. The recent emergence of the COVID-19 pandemic is one of the causes of the increased need for DTs [4, 5]. Lockdowns caused by the epidemic resulted in product failure, labour scarcity, and a distant or non-contact work culture [6], making digitisation and the progress of processes with little human touch even more important [4]. According to a Gartner survey, 32% of firms are employing DTs for remote monitoring of assets, such as patients and mining operations, to minimise the frequency of in-person surveillance and boost the safety of staff and consumers [7]. DT technology has found uses in a variety of industries that are undergoing digitalisation. Researchers surveying DT implementations identified ten significant industrial areas where the DT has been used: (i) aircraft, (ii) automotive, (iii) healthcare, (iv) electricity, (v) manufacturing, (vi) hydrocarbons, (vii) civil service, (viii) mining, (ix) marine, and (x) agriculture. They also stated that DTs have been employed in these industries for three major purposes: modelling, monitoring, and control. However, the uses of DTs are not confined to these three alone. Development, verification, modification, improvement, prediction, and management are all now done using DTs. This chapter presents an overview of current DT applications in several sectors.
The most often documented DT applications in the literature are in the sphere of production, cyber-physical systems, or the framework of Industry 4.0 in general. This chapter discusses how DT technology is employed in the industries that exist today; Figure 8.2 summarises the real-world sectors in which DTs are being used. The goal of this chapter is to recognise and comprehend the potential of the DT in any sector, and to help any scientist, corporation, or industry implement it with the right design before investing in the technology, in order to unleash its true potential.
8.2 Aviation and Aircraft

NASA and the United States Air Force were the first to explore DTs in the aircraft disciplines. In this field, the key uses of DTs include maximising the performance and dependability of aerospace vehicles, predicting and addressing maintenance difficulties, and making flights safer for the crew. The primary use of the DT in this business
Figure 8.2: DT uses in a variety of industries. [The figure lists the industrial sectors using digital twins: infrastructure, aviation, the automobile industry, factories, retail, mines, medical services, marine, academia, buildings, energy, farming, and refineries.]
began with the goal of improving the performance and dependability of aerospace vehicles. According to NASA's 2011 technical plan, its four uses of the DT were as follows: 1) simulating flight prior to the real vehicle's launch to enhance mission success; 2) constantly mirroring the real flight and updating conditions such as applied stress, temperature, and other natural features in order to anticipate future scenarios; 3) determining the extent of vehicle damage; and 4) providing a framework for research into the consequences of changed factors that were not addressed during the design process. Any damage discovered by the DT might be repaired by initiating in situ repairs or advising relevant mission adjustments, ensuring a longer mission lifespan and a greater mission success rate [7]. The DT increases the planning and dependability of future missions by minimising the ambiguity of forecasts and maintenance intervals [8]. The DT was utilised not only to secure the spaceship and the operation, but also to ensure the
security of the crew members operating in distant and undiscovered territory by simulating multiple possible rescue scenarios in the event of an emergency. Aside from reliability and performance, one advantage of employing a DT is that it may be used to foresee faults, making spaceship or aeroplane maintenance easier and less expensive than regular planned maintenance. The Air Force Research Laboratory (AFRL) stated in 2019 that it would create a DT of one of its supersonic aircraft, the B-1B, for condition monitoring, accomplished by 3D scanning every element of the aircraft down to its nuts and bolts. Any structural flaws or defects identified during the scanning procedure will aid in the creation of a health record for the aircraft. The aircraft data will then be used to anticipate places that are more likely to develop physical difficulties. Layers of maintenance data, test/inspection results, and analytical tools will be layered onto the digital model throughout the aircraft's life cycle. AFRL had earlier signed a $19 million contract with Northrop Grumman Corporation for the production of a DT to foresee issues with the structure of several types of air force aircraft so that preventative maintenance may be undertaken [8]. Their specialists are attempting to enhance aircraft predictive maintenance by enhancing flying load data, generating a more realistic structural process model, and measuring and lowering modelling uncertainty. The life of an aircraft is greatly reliant on the load it flies; a 21% weight decrease can increase the life of an aircraft by up to 199% [8]. The DT enables the aerospace business to create new products more quickly and cheaply, while also increasing manufacturing line quality and lowering aircraft downtime. SpaceX, a new participant in the aerospace business, is also employing DT technologies for product development and testing, addressing problems and achieving the needed performance before building the goods, allowing it to manufacture cheaper rockets.
Deploying a DT in the aerospace business is very advantageous, but it may be a difficult process owing to regulations and the methods of gathering data from aircraft, which make it costly, as well as certification problems for onboard software and hardware.
8.3 Production

Even though the creation of the DT began in the aerospace business, the manufacturing industry is the one most interested in the technology. DTs have been identified as major drivers of Industry 4.0 and intelligent production. During its life cycle, each manufactured product passes through four major phases: conception, production, operation, and destruction (Figure 8.3). Smart producers can use DTs in all four product stages. A DT allows designers to digitally validate their product design throughout the design phase, allowing them to test numerous versions of the item and pick the best one. Designers gain insight into the aspects that perform best for users and those that
Figure 8.3: Uses of DT across the course of a product's lifespan. [The figure shows the four life cycle phases and the DT uses in each: layout/design (efficiency confirmation, materials choice, user opinions); production (resources administration, production scheduling, monitoring and control); execution (real-time surveillance, servicing approach, interface direction, evaluation, and projection analysis); and destruction (waste management, the abolition of experimentation, concepts for the future).]
require improvement by using real-time data from earlier generations of goods. This makes the entire design improvement process easier and faster. Maserati is one example of a company that uses DTs to optimise its vehicles' body aerodynamics with virtual wind tunnel tests, replacing physical tests that are time-consuming and costly. It also adjusted the interior acoustics of a prototype automobile using data from a dummy outfitted with microphones. Using a bicycle as an example, Marmolejo-Saucedo [6] developed a DT-driven product framework that companies may utilise to build DTs to help in the process of designing products. Unlike conventional bike conceptual models, which rely on the designers' expertise and experience, a bicycle's DT continually captures data from the actual space, which can then be compared, analysed, and utilised to create or modify next-generation bicycles. Designers may gain a better knowledge of consumer requirements through user feedback and usage behaviours, which can be converted into better and enhanced functionality. Capturing client preferences through a DT informs firms about market trends, which may then be combined with customer usage data to assess the implications for product performance. This enables organisations to make educated design decisions, simplifying the process of incorporating client input into the product to create
personalised products. A DT may also assist designers with material selection for their goods. A DT-driven strategy for optimal green material selection was developed and applied to laptop casings. Different production steps were modelled for various materials, from design through waste management, and materials were graded based on their physical attributes, cost, and environmental effect to optimise the material selection process. The next stage, production, converts raw materials into a completed product. At this level, a DT may be a valuable tool in management, manufacturing, and control systems. A DT can assist in 1) manufacturing, by automatically planning and executing orders and improving decision support through in-depth data analysis; 2) servicing, by evaluating and analysing device requirements, identifying any modifications to the system and their effects, and performing predictive monitoring; and 3) layout planning, by continuously evaluating the production system. The DT reduces downtime by forecasting failure and allowing for scheduled maintenance or preventative actions. Manufacturers may get the real-time product operating state via its DT after the product has been sold and can build a maintenance strategy accordingly. A DT can provide nine types of product-related services: 1) real-time tracking, 2) energy consumption assessment and prediction, 3) user access and behaviour analysis, 4) user operation guidance, 5) smart improvement and notifications, 6) equipment failure analysis and prediction, 7) device servicing strategy, 8) virtual product management, and 9) virtual product operation. The product's last step, disposal, is frequently overlooked. As a result, when a product is decommissioned, the information that could improve the next-generation product/system is frequently lost [7].
A unique DT-based system has been proposed for the recovery of electrical and electronic scrap to support manufacturing and remanufacturing processes from design to recovery. Its authors established product information models based on worldwide standards, from design through remanufacturing. A DT, in addition to optimising the production process at various stages of a product, also enhances the area of additive manufacturing (AM). Developing a DT of a 3D printer can assist in attaining the needed qualities by reducing the need for repeated rounds of trial-and-error testing, which will, in turn,
cut the time between engineering and fabrication, making the entire AM process time- and cost-effective [7]. A mechanical model, a control and sensing model, physics-based methods, big data, and deep learning are all parts of the proposed DT model for the 3D printer. A DT of the AM process, created using modelling, in situ sensors, and machine learning, could identify flaws in AM components throughout the process. This DT approach used real-time data from in situ sensors, a physics-based model, and a data augmentation framework to forecast faults with greater statistical fidelity than theoretical model predictions or in situ sensor data alone. A DT may also help make the entire production process more sustainable and smart by making it more autonomous and self-optimised, in conjunction with a system of production equipment, processes, and structures that facilitate each other [3]. Robot programming in such systems typically follows three methods: (i) offline, using a dedicated virtual environment (usually each robot brand has its own) to program each aspect of the robotic cell for later deployment through the network to the physical robot; (ii) online, in which the program is adapted by sensor information, usually twinned in the dedicated virtual environment (e.g. a robot operating system), and is able to directly affect the pre-programmed path and routine of the robotic system; and (iii) manual, in which the robot is programmed using a flex pendant, but which, with the arrival of VR technology and immersive virtual reality interfaces, also employs a twin for remotely manipulating the virtual robot [5]. As can be seen, variations of these technologies make use of the DT of the robot cell and are widely utilised in a variety of industrial sectors.
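The residual idea behind such in-situ defect detection can be sketched as follows; the melt-pool figures and the median-based outlier test are illustrative assumptions, not the cited authors' actual method. The DT compares what the physics-based model predicts with what the in-situ sensor measures, and flags process steps where the two disagree strongly.

```python
import statistics

def detect_defects(predicted, measured, k=3.0):
    """Flag steps whose measurement deviates from the model prediction.

    Uses the median and median absolute deviation (MAD) of the residuals,
    so a single large fault cannot inflate the spread and mask itself.
    """
    residuals = [m - p for m, p in zip(measured, predicted)]
    med = statistics.median(residuals)
    mad = statistics.median(abs(r - med) for r in residuals)
    threshold = k * 1.4826 * mad  # 1.4826 scales MAD to a std-dev equivalent
    return [i for i, r in enumerate(residuals) if abs(r - med) > threshold]

# Hypothetical melt-pool temperature (°C), layer by layer:
# physics-model prediction vs in-situ sensor reading.
predicted = [1650, 1652, 1649, 1651, 1650, 1652, 1650, 1651]
measured  = [1651, 1653, 1648, 1650, 1649, 1751, 1651, 1650]  # layer 5 anomalous
print(detect_defects(predicted, measured))  # prints [5]
```

In practice the prediction would come from a thermal simulation and the residual model from training data, but the fusion of model, sensor, and statistics is the same shape.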
Furthermore, the DT is utilised as a validation method for human–robot collaboration (HRC) safety requirements, evaluating the system's safety level first, for example using virtual reality human models, before tests with real operators in the actual system [6]. DTs in the production line benefit future goods even before they are created. They aid in the design, manufacturing, optimisation, condition monitoring, product servicing, and disposal of products. DT-based smart manufacturing systems can minimise the time and expense of physical commissioning/reconfiguration by recognising design problems early [7]. Furthermore, DTs aid in improved visualisation, which supports better problem-solving and decision-making, as well as collaboration, by allowing a larger range of experts to work together. As a result, DTs lead to a better knowledge of tools and processes by everybody, leading to better-designed products and more effective, time- and resource-efficient operations, while also ensuring a strong and fluid link between the different phases of production [8].
8.4 Medicines and Universal Healthcare

Healthcare is one of the world's greatest sectors, and its impact on daily life is pervasive. DT technology in the healthcare system examines hospitals, operational strategies, capabilities, staffing, and care models to optimise the healthcare industry's care, cost, and performance, as well as to aid educated and intentional decision-making. Aside from facility and infrastructure upgrades, it has uses in emulating patients' behaviour and so giving them personalised treatment and care. Siemens created a DT for a radiology department in a clinic in London, England, which was struggling to provide efficient care due to rising patient demand, increasingly complex diagnostics, ageing infrastructure, a lack of space, and longer waiting times, interruptions, and delays. Siemens and the hospital collaborated to revamp the department's structure and test new steps by creating a 3D simulation of the radiology department and its activities and then implementing optimisation using "Workflow Simulation". To gain a better understanding of the department, they carried out a week-long onsite inspection comprising seminars, stakeholder interviews, and process observation. Within a few weeks of implementing the DT, there was a considerable improvement, resulting in lower wait times and a higher level of service, faster patient turnover, more efficient use of equipment, and reduced personnel expenses [8]. Since the same disease can develop and affect individuals differently depending on their socio-economic and regional surroundings, genetic identity, age group, family medical history, and lifestyle choices, there has been a transition within the industry away from "one-size-fits-all" treatments and medicines towards "tailor-made" treatments; this is also known as personalised/precision medicine.
Precision medicine is a perfect fit for DT technology since the results are totally dependent on the data that is provided to it (Figure 8.4).
Figure 8.4: A DT concept for individualised healthcare. (A) A patient displaying a localised illness symptom (red). (B) A DT for the patient is generated and virtually treated with various therapies and medications. (C) The treatment with the best outcomes is selected, and customised medicine is administered in order to treat the patient (green).
A DT can be created for the entire human body; for a single bodily function (such as the digestive or respiratory system); for a single organ (such as the liver or heart); for finer levels of body components (the cellular, molecular, subcellular, or subatomic levels); for a particular disorder or illness; or for other pertinent organisms, such as a virus interacting with a particular component of a human body or organ [2]. Unlike in other sectors, it is challenging to link people and their DTs, since people cannot be permanently fitted with sensors. Once built, a human DT would be able to recommend the best course of therapy on the basis of symptoms and its data [3]. Although a human DT may seem like a sci-fi fantasy, visualisations of organs and body components have already been created. The "Blue Brain Project", for which Hewlett Packard Enterprise supplies supercomputing, aims to simulate and recreate a digital model of the brain in order to comprehend its intricacies and many brain illnesses at various levels [4]. For the early identification and prediction of cardiovascular problems, Philips has created "HeartModel", which integrates patient data with a generic heart model to produce a personalised 3D heart model [5]. Philips has also created a software package called HeartNavigator that combines heart model and X-ray data with pictures from a computerised tomography scan to help with pre-surgery planning and instrument selection. Siemens is another business attempting to create a DT of a living body. Electrocardiogram (ECG) readings and magnetic resonance images have successfully been utilised by physicians at Heidelberg University to simulate the physiological functions of the heart in order to pick the differential diagnosis with the highest likelihood of success [6]. One example of a model of the human heart that converts a 2D scan of the atrium into a 3D model is the Living Heart from Dassault Systèmes.
Researchers studying the topic introduced a DT structure for detecting ischaemic heart conditions by gathering patients' data from internal and external sensing devices, health records, and social media, and assisting them correspondingly; they called it the "Cardio Twin". By utilising ECG data to train their model and pre-existing datasets to verify its accuracy, they were able to put it into practice. Big companies are not the only ones attempting to create DTs of organs. Health and pharmaceutical regulatory authorities are also investigating DTs as a viable tool for clinical studies. Researchers gathered information from a number of previous clinical trials for Alzheimer's disease and utilised it to train a deep learning model to create a DT to forecast disease development. They discovered that the DTs were highly comparable to the real control-arm participants. In the healthcare sector, DT technology is quickly becoming a fascinating instrument with a bright future. DTs of patients can be a move towards individualised therapy and medication, whereas DTs of healthcare infrastructure, such as clinics, operations, and staff, can maximise patient care, cost, and performance. The success of treatments and the patient experience will be improved by both kinds of DT
applications in the healthcare sector. However, when it comes to the use of the technology, particularly in healthcare, there are some concerns about certification and regulatory challenges, as well as privacy and security.
8.5 Energy and Power Generation

The energy sector includes businesses engaged in the production and/or sale of energy from non-renewable resources like oil, gas, and nuclear power, or renewable resources like wind, solar, and hydropower [2]. From wind farms to nuclear facilities, the energy sector uses DT technology. The DT of a complete wind farm has been successfully created by General Electric (GE). A wind turbine's DT collects actual information, such as meteorological data, maintenance records, and performance reports, to maximise energy output, optimise maintenance procedures, and increase dependability. With the help of its cloud-based software DT, GE offers clients full hardware and software for developing, improving, and optimising their wind farms. The next generation of electricity grids will make use of the predictive and diagnostic capabilities of DTs. Siemens created a DT of the Finnish electricity system, building a single digitalised grid model for its design, management, and upkeep. Its benefits, along with increased security and dependability, include (i) converting the majority of manual simulation work into automated simulation work; (ii) the effective and improved use of data; (iii) the simpler application of data for new uses; and (iv) enhanced decision-making using large-scale data [8]. A multi-block network structure derived from the Chinese national distributed generation management system was used to implement and evaluate a real-time monitoring system: with only a fraction of a second's delay, the power grid's operational condition could be followed. Norway intends to follow Finland's lead and use a DT to optimise the performance of the electricity network in light of the new challenges posed by renewable power and microgrids. Operators will be able to forecast network situations, control grid components, and avert blackouts thanks to the DT.
Nuclear power plants can benefit from the application of DT technology by optimising the parameters of automated regulators and enhancing the algorithms for regulating and testing the plant. Électricité de France (EDF) has launched a 4-year initiative to create DTs for the nation's nuclear reactors. Because EDF manages a fleet of 56 nuclear reactors in France, the company focuses on combining data that is unique to each plant site, including design data, operational records, and real-time observations, as opposed to developing a DT of a generic reactor. By gathering, collecting, and handling data, the DT may be utilised efficiently throughout the nuclear power plant (NPP) units' life cycles, ensuring that no important
data is lost once a plant is shut down and that the same data can be used to construct the next generation of plants [4]. The need for DTs, which may be utilised for the optimal production, maintenance, and performance of renewable energy sources as well as for the electricity network, is rising in tandem with the need for renewable energy. Schneider Electric asserts that in order to protect the globe from the harmful impacts of climate change, DTs must be implemented [5]. The distribution and production of power, as well as the upkeep of the equipment used in energy production, may all be optimised with the help of the DT.
8.6 Automotive

Predictive maintenance, as in other sectors, is one use for a DT in the automobile sector. Using a brake calliper as an illustration, researchers gathered real-time data from a braking system and compared it with information from the brake pad's simulated DT. The information from both sources indicates that brake pad predictive maintenance is feasible with their model, given a longer real-time data collection period. DT technology is also being used by the automotive industry to deliver more customised/personalised services to clients by observing and evaluating their behavioural information and the vehicle's operational information [7]. By tracking features according to their usage, automakers can design a car that meets the requirements of their customers. According to Tata Consultancy Services, a DT can increase car sales in addition to performing monitoring and preventative maintenance on vehicle services and parts [8]. When paired with AR, DTs can offer a 360° picture of the vehicle while incorporating customer choices, which can improve the entire sales process by making it much more immersive and participatory. A DT is being developed by Tesla for every vehicle it produces. The corporation wants to make sure that each of its automobiles is operating as intended by using information gathered from individual vehicles. Tesla uses the data to update each car's software separately, uploading fixes for a variety of mechanical issues, such as correcting a rattling door by modifying the hydraulics. Volkswagen is another automaker that employs DTs. In contrast to Tesla, Volkswagen employed DTs to incorporate a new robot workspace within one of its factories. It built a highly accurate 3D model of the facility, complete with robotic manipulators, sensing circuitry, and protection elements, and then ran simulations of various procedures and processes before commissioning the manufacturing line.
Volkswagen was able to save 3 weeks of work and 40 m² of production area because of the DT [1]. Maserati integrated two new production lines into an existing plant using a DT. Using the DT, it determined how changes in the car's design would affect manufacturing, and then modified the production procedures accordingly.
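The brake-pad example earlier in this section rests on extrapolating observed wear against a safety limit; a back-of-the-envelope sketch, with invented thickness numbers and a hypothetical 3 mm safety limit, might look like this.

```python
def remaining_useful_life(thicknesses_mm, interval_km, min_thickness_mm=3.0):
    """Estimate km until a brake pad reaches its minimum safe thickness,
    assuming roughly linear wear over the DT-logged readings."""
    # average wear rate (mm per km) over the observed readings
    total_wear = thicknesses_mm[0] - thicknesses_mm[-1]
    km_observed = interval_km * (len(thicknesses_mm) - 1)
    wear_rate = total_wear / km_observed
    # thickness margin left before the safety limit
    margin = thicknesses_mm[-1] - min_thickness_mm
    return margin / wear_rate

# DT-logged pad thickness every 5,000 km: 12 mm when new, wearing steadily
rul_km = remaining_useful_life([12.0, 11.2, 10.4, 9.6], interval_km=5000)
```

A real DT would replace the linear-wear assumption with a simulated model of the calliper and fuse it with live sensor data, but the maintenance decision still reduces to a remaining-useful-life estimate like this one.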
Even competitive sports like Formula 1 (F1) racing now feature DTs. To evaluate their vehicle's performance following each race, the Mercedes-AMG Petronas Motorsport team created a DT of the car. The car's 152 sensors collectively gather information every 0.001 s, and virtual instruments measure temperature, humidity, speed, pressure, and so on. With the addition of sensing, there are some 6 billion data points to analyse by the end of a 2-h race [2]. Race engineers use the DT to analyse the data and create a car with improved performance and greater reliability and safety. The team uses information from the race to refine its tactics for upcoming competitions. Additionally, F1 drivers now use simulations to get ready for the actual races, test out new features, and gain a greater comprehension of the behaviour of the vehicle [3]. The autonomous vehicle sector is another area of the automobile industry where DTs are finding use. Every autonomous vehicle has its safety and movement algorithms verified in either open-source or commercial digital environments. Manufacturers are attempting to duplicate every part of the cars, including the electrical system, for various brands and types of machines, because most vehicles have distinct development and manufacturing processes. The simulated vehicle may drive about the virtual environment using physical data and transmit back the machine's behaviour when connected to an actual drive test bench, which causes the physical drive to behave as it would if it were mounted in the car [4, 5]. This industry can utilise all of the DT applications discussed in the manufacturing sector, from design through disposal. Therefore, by delivering a more engaging and immersive client experience, DTs in the automobile industry can be advantageous both for producers in their production process and for vehicle dealers in selling cars.
In addition, DTs may also allow customers to modify their cars however they see fit. DTs provide benefits to all users, whether they are typical drivers or aggressive drivers.
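As a rough back-of-the-envelope check of the data volumes such telemetry implies, assuming each of the 152 sensors really is sampled every 0.001 s for a full 2-h race (derived virtual channels and season-long accumulation would multiply the raw count well beyond this):

```python
# Back-of-the-envelope raw data volume for the F1 telemetry described in the
# text: 152 sensors, each sampled at 1 kHz (every 0.001 s), over a 2-h race.
sensors = 152
sample_rate_hz = 1_000          # one reading every 0.001 s
race_seconds = 2 * 60 * 60      # 2-hour race

readings = sensors * sample_rate_hz * race_seconds
print(readings)                 # 1094400000 raw readings (~1.1 billion)
```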
8.7 Refineries

One of the greatest industries on the planet in terms of value, the oil and gas business has a yearly revenue of roughly $3.3 trillion [6]. The adoption of DT technology in the petrochemical industry has greatly benefited this sector by serving as a powerful tool for (i) lowering risks and improving risk understanding, (ii) developing and managing commissioning timetables, and (iii) spotting process changes and reacting accordingly [7]. A DT can also be a valuable competitive resource for oil corporations when it comes to obtaining resources offshore [8]. Because of all these advantages of DTs, in one of its surveys from 2017, Accenture discovered that, out of 210 refineries,
Chapter 8 Digital Twin Use Cases and Industries
65% of them were pursuing digitisation aggressively, and 57% had already increased their investments over the prior year; in the upcoming years, they intended to boost their digital transformation efforts accordingly [7]. BP, one of the biggest oil and gas firms in the world, has been employing its APEX DT system for a number of major facilities in the North Sea. APEX aids the company in time-saving testing, remote monitoring, predictive maintenance, and safe production optimisation. The biggest gain for BP from DTs was an increase in global crude oil and gas production of 30,000 more barrels per day. Royal Dutch Shell is another oil and gas business to use DT technology. By predicting the conditions that maximise asset performance and controlling them autonomously, the company collects more than 10 billion operational data points each minute, which it uses for maintenance, productivity and safety improvements, and emission reduction. In order to precisely forecast the life of its assets so that it may optimise the inspection plan and the safety cases around it, Shell also constructed enhanced fatigue models for its assets utilising the DT idea, fusing sensor data with a structural finite element model [2]. Other businesses, such as Eni [3], are also making use of the technology to increase production while lowering costs and risks [4]. Honeywell investigated the advantages of using DTs in the field of oil and gas and discovered that the development cycle was cut by 4–8 weeks in the design process and by 4 weeks for stabilising the operation. Additionally, the expenditures for operations and capital projects were cut by $70–120 million and $5–8 million, respectively. It was also argued that it would be more cost-effective to build a DT as part of the construction rather than retrofit it to an existing plant, because the petroleum industry depends on large, complex plants.
Moreover, where a DT covers equipment situated in remote locations with harsh environmental conditions, it is safer to watch over and manage the procedures remotely, which lowers the dangers. Additionally, the DT enhances the total procedure by anticipating downtime, which directly translates into time and financial savings.
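The remote-monitoring idea can be sketched as a twin that watches an operating parameter and flags readings drifting outside the recent envelope, so engineers can intervene before downtime occurs. The window size, threshold, and data below are invented for illustration, not taken from any operator's system.

```python
# Illustrative sketch: flag sensor readings that deviate from the expected
# operating envelope, estimated from the preceding few readings.
from statistics import mean, stdev

def flag_anomalies(readings, window=5, n_sigma=3.0):
    """Return indices whose value lies more than n_sigma standard deviations
    from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        ref = readings[i - window:i]
        mu, sigma = mean(ref), stdev(ref)
        if sigma > 0 and abs(readings[i] - mu) > n_sigma * sigma:
            flagged.append(i)
    return flagged

# A stable pressure stream with one abnormal spike at index 7.
pressure = [101.0, 100.8, 101.2, 100.9, 101.1, 101.0, 100.9, 115.0, 101.0]
print(flag_anomalies(pressure))  # [7]
```

A production system would use richer models (physics-based expectations, learned baselines), but the comparison of measured versus expected behaviour is the same core mechanism.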
8.8 Smart Cities

A smart city is one that effectively manages itself through the use of information and communication technology. It does this by sensing, analysing, and integrating key data from core systems to make decisions that are in line with the needs of the city, including those related to livelihood, the environment, community security, urban infrastructure, and economic processes [6]. Urban planning has benefited from the use of DTs in smart cities by enhancing residents' quality of life. The American city of Carson City is one such instance.
The Carson City Public Works Department created a DT to manage its water system, which allowed it to efficiently provide water to 50,000 city residents while scaling back operating hours by 15% [7]. Planning and decision-making can result in more economically, ecologically, and socially sustainable cities when a DT is used. "Virtual Singapore" was developed in 2018 by the National Research Foundation (NRF) of Singapore, integrating 3D mapping, metropolitan modelling, and an information platform with precise features such as terrain, building materials, geometry, and facility components. NRF thinks that Virtual Singapore will be advantageous to its inhabitants as well as the government. By serving as a trial run for fresh concepts from business owners and other research institutions, and by supplying data, it can assist in the decision-making process, in addition to the planning and management of resources. It may also be utilised to increase a building's accessibility or to simulate emergency conditions in a certain area [8]. Amravati, an Indian city, is another addition to this list. The city's DT is being constructed at a cost of $6.5 billion for a populace of 3.5 million. The project will enclose 218 km² in total, comprising 317 km of important roads and highways, 135 km of metropolitan area, more than 100 medical and educational facilities, 40 colleges, 3 academic institutions, and 3 state and local government complexes. According to Cityzenith, the business in charge of creating the DT, Amravati will rank among the world's most technologically connected cities on completion. By putting the city on a digital platform, its stakeholders will be able to monitor real-time updates on the status of the construction project, the city's environment and general health, and mobility and traffic. Additionally, it will act as a portal for all citizens to access all official documents, notices, applications, and submissions.
The state of Victoria in Australia has started a similar initiative to construct a DT of a 480 ha region named Fishermans Bend near Melbourne. The project will include tools for planning research and statistics, traffic flow estimates, water and electricity use projections, and real-time data visualisation of public transportation and building occupancy [1]. Plans for emergency response or disaster management can also be implemented using a smart city's DT [2, 3]. Data on rainfall totals and river levels were fed into the DT of the docks area in London, UK, in order to forecast when flooding would happen and alert locals to possible floods. In smart cities where floods have been identified as an issue, the city's DT may be used for large-scale initiatives like adding water storage capacity or redirecting rivers appropriately, as well as for designing city skylines and longer-term flood protection techniques. A DT can alter the way we perceive our towns and homes. It can aid in sustainable development, urban planning, and resource allocation. It can also provide an opportunity for urban designers, along with architects, engineers, builders, property owners, and citizens, to study and analyse the city's infrastructure in various scenarios and
assess any potential risks, increasing the overall achievement of a city, including its processes, infrastructure, and services [5]. Cities may become more democratic with the involvement of all municipal stakeholders, from residents to the government, via the DT, since everyone can have an opinion and a voice on what is missing in their neighbourhoods and how their surroundings and municipal services can be improved [6].
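The flood-forecasting use described for London above can be sketched as a deliberately simple rule that combines rainfall and river-level feeds. The thresholds and the crude runoff estimate below are invented for illustration and are not taken from any city's actual model.

```python
# Hedged sketch of a flood-warning rule in a city digital twin: classify risk
# from recent rainfall and the current river level. All numbers are invented.

def flood_risk(rainfall_mm_24h: float, river_level_m: float,
               bank_height_m: float = 4.0) -> str:
    """Classify flood risk; 'warning' would trigger alerts to residents."""
    # Crude runoff assumption: each mm of 24-h rainfall adds 1 cm to the river.
    projected = river_level_m + 0.01 * rainfall_mm_24h
    if projected >= bank_height_m:
        return "warning"        # alert residents, open storage capacity
    if projected >= 0.8 * bank_height_m:
        return "watch"
    return "normal"

print(flood_risk(rainfall_mm_24h=10, river_level_m=2.0))   # normal
print(flood_risk(rainfall_mm_24h=60, river_level_m=2.8))   # watch
print(flood_risk(rainfall_mm_24h=90, river_level_m=3.5))   # warning
```

A real city twin would replace the one-line runoff estimate with a hydrological model fed by live sensor data, but the decision layer on top looks much like this.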
8.9 Mining

Mining, one of the oldest existing industries, focuses on finding and extracting metal and mineral deposits from the Earth's crust [7]. A DT is a fantastic instrument for process and operation optimisation on mining sites, just as it is for the oil and gas sector. As the mining sector transitions to digitalisation, Ernst & Young have recognised the DT as one of the major enabling technologies. In their report, they identified four areas where DTs support mining:
1) Mining activities: Predictive maintenance increases asset dependability and decreases unexpected downtime.
2) Processing: Improving plant set points increases process effectiveness and product quality and decreases bottlenecking.
3) Transportation: Forecasting and streamlining the transportation network enhances its dependability.
4) End-to-end: Modelling and examining various scenarios throughout the value chain identifies the best plans and schedules.
Additionally, DTs assist mines in testing new procedures or equipment before putting them to use on the job site, and in training new miners [8], particularly for emergency scenarios, by teaching them the proper methods and providing instruction virtually in a risk-free and stress-free atmosphere. The first DT in the world developed for the improvement of the mining value chain came from PETRA Data Science [1], a supplier of digitised underground mining technology for value chain optimisation. Their deep learning DT simulates "mine planning, blast, metallurgical, and process control choices" using 2 years' worth of past data. The Ban Houayxai Gold-Silver Mine in Laos employed the DT, which, utilising information on the different kinds of rock, enhanced crushing effectiveness during drilling, boosting the return rate of the metals from the mine [2]. A British global mining corporation uses the DT in its mines in Argentina and Brazil to optimise its fleet of vehicles by lowering fuel use.
They may evaluate the data needed for condition monitoring, predictive maintenance, and optimising equipment effectiveness by monitoring the operation of the haulage fleet. To enhance the process' effectiveness and efficiency, they also intend to deploy DTs for their pipes, smelters, and refineries. Through the use of a DT, they are able to test novel approaches to increase production without endangering any plant components or processes. Mining corporations are investing in DT technologies because they have the potential to benefit the sector in innovative ways, such as by boosting productivity and creating a safer working environment, which, according to the International Data Corporation (IDC), a premier global source of industry data and consulting services, 70% of mining enterprises will pursue within the next two years [5]. Additionally, DTs have attracted national interest for "revitalising domestic resource development" [6].
8.10 Shipping and Maritime

The maritime industry, which covers the movement of people and goods on both inland and international waters, represents all activities linked to the sea, the deep ocean, boats, the sailing of ships from one location to another, sailors, and so on [7, 8]. According to the United Nations, the sea carries around 80% of global trade by volume and 70% by value, making it the most significant mode of transportation for goods. Similar to the aviation sector, the primary applications of DTs in this sector focus on improving asset dependability, enhancing maintenance, and lowering operational costs. With assistance from GE, the US Military Sealift Command, the top provider of military transport ships to the American armed forces and the Department of Defense, is constructing a DT for its cargo vessels. The real-time performance of each ship is compared with the calculated one using data from marine equipment such as frequency drives, propulsion engines, diesel engines, and generators. Any performance deviations that point to potential engine failure or other critical infrastructure issues are reported and addressed before failures occur, enhancing the availability, effectiveness, operations, and readiness of their assets and missions. Additionally, remote monitoring and diagnostics are made available by the DT. The DT technology is spreading to ports as well, expanding beyond merely ships and boats. The Port of Rotterdam's docks are equipped with sensors that collect data on the surrounding environment and water conditions in real time, including air temperature, wind direction, humidity, turbidity, salinity, current velocity, water levels, and tides. Even "Digital Dolphins", intelligent quay walls, and camera buoys are present at the port. Additionally, they have a real container, Container 42, that is equipped with sensors and utilises AI to optimise when a ship should sail and how much cargo it should carry.
By 2031, they hope to become the first fully digital port.

DTs are also finding applications in agriculture. Agriculture businesses gather and produce agricultural products including crops, animals, poultry, and fish, as well as soil conditioners and food products,
and agricultural equipment [4]. The agriculture sector is heavily burdened by having to increase production to fulfil the needs of a rising population. Humans will require access to all available tools in order to meet productivity goals in agriculture, and the creation of DTs for precision agriculture can promote sustainable growth and improve global food security [3]. Although this application area is still in the early stages of development, potential uses for DT technology in agriculture have been identified [4]. The DT applications investigated include:
– remote surveillance of cattle for the analysis and detection of their health, including heat and anoestrous cycles, as well as the tracking of the animals' movements;
– the detection of plant diseases or pests;
– managing and improving production facilities and resupply methods by keeping an eye on silo stock levels;
– real-time tracking of machinery and the assessment of the cost-effectiveness of cultivation techniques;
– finding and classifying flies in olive plantations to apply pesticides properly;
– keeping an eye out for illnesses or infections in bee colonies and controlling honey storage.
A conceptual framework for the use of DTs in vertical farming has been proposed that enhances the physical system with sensors gathering data on temperature, humidity, light intensity, and the relative carbon dioxide concentration, all stored in the cloud. The technology will be able to advise on ways to plan vertical farms and boost output via intelligent data analysis. Some researchers proposed a comparable approach for cattle farms that includes design and simulation, modelling, big data, and visualisation. In addition to overseeing farms and cattle, the DT may advance sustainability. About 70% of the world's total freshwater is used for agriculture, which also accounts for a large share of deforestation.
DT technology can assist in monitoring changes in carbon emissions, biodiversity, flowering, and river catchment services, and their causes. This will necessitate the development of integrated sensors, algorithms, and interfaces for these particular goals. One author raises concerns that the neglect of real-world systems brought on by DT developments may result in farmers who lack empathy and are emotionally and intellectually disconnected. The author advises farmers and agricultural groups to use the technology to their advantage, but to remember to consider factors that are not included in the digital model.

The application of DTs in education is also fascinating, particularly in engineering subjects where students receive hands-on instruction on complex systems. A DT is an excellent tool for the classroom since it represents various domains and visualises the operation of systems and their component parts, leading to a greater understanding of systems, and learning is
accelerated and made simpler by the exchange between several technical sectors [1, 2]. DT technology might thus enhance learning and academic engagement in pupils [2]. The advantages of DTs for learning are discussed further below [3].
8.11 Academia

Authentic learning experiences support the development of effective knowledge, knowledge transfer, and self-efficacy:
– Gaining knowledge about the physical twin's behaviour under various operating circumstances in the real world
– Receiving quick feedback on system behaviour, which aids in the discovery and resolution of problems
– Inquiry-based learning throughout the design and testing of systems
– As opposed to the previous situation, where students had to share limited resources, each learner can work on a separate DT
– For students who are enrolled in distance learning and cannot access a physical twin, a DT is an excellent tool
– A DT guarantees the equipment's and students' safety
The conventional engineering curriculum equips students with specialised knowledge that can be used in a certain industry, which was ideal for the old-fashioned industrial organisation where divisions and departments operated independently of one another. As more sectors follow the path of digitalisation, the silos between various departments needing experts are diminishing, and the demand for cross-disciplinary skills is increasing. A science and technology institute in Russia designed a module for master's students focused on creating a DT for developing, manufacturing, and evaluating complex systems; in this example, an autonomous aerial vehicle. Similarly, the University of Florida has suggested an educational DT testing ground for students, which may improve their educational experience by allowing them to engage with their DTs and investigate the intricacies and behaviour of physical systems [3]. The need for qualified engineers and technicians will increase along with the demand for DTs across a number of sectors. More courses centred on DTs will likely be added at colleges and universities in the future.
8.12 Architecture

The building sector is not only incredibly time- and labour-intensive but also highly information-intensive. A large amount of data is created during the lifespan of a construction project, from conceptual design through decommissioning [5].
As a result, effective communication and information flow across all relevant parties, including the architects, engineers, construction workers, facility managers, and contractors, can be achieved via DTs at each stage or phase of the project lifecycle [6]. DTs in the construction industry can be used in the same way as in the manufacturing sector, employed at several stages of a building project's lifespan, including the design and engineering phase, the building phase, the operating and maintenance phase, and the deconstruction and recovery phase [3]. Building information modelling (BIM), a system that creates and organises information on a construction project throughout its lifespan, is one of the major enablers of employing DTs in the construction industry. The primary purpose of BIM remains resource efficiency, achieved by improving and exchanging information at the design and construction stage to assist building stakeholders in their jobs and prevent costly design errors, which is simpler to do with DTs [7]. Additionally, DTs encourage integrating sustainability starting from the design stage: the ecological emissions and energy use of any structure may be calculated in advance with its DT and considered during planning and construction. A DT may be a useful tool in the construction industry from the design stage to the operational phase. Due to the fast-paced nature of this sector, space requirements frequently diverge from the initial plans and purposes of the structures; a DT makes it simpler to keep track of all these changes and determine their impact. The use of DTs has the potential to revolutionise the building sector. As a result, the sector has to be adaptable and open to new prospects; using DTs is the only way the building industry can keep up with other industries [3].
8.13 Markets

All businesses and establishments that sell goods directly to consumers are considered part of the retail industry. Any retail business's ability to succeed depends on its clientele; luring in new clients is insufficient, and just as crucial is keeping current clients happy by offering the highest level of customer service. The retail industry holds a lot of promise for DT technology in terms of marketing and customer experience [5]. Retail stores will be able to offer a distinctive and personalised user experience based on consumers' interest patterns by developing a DT of the consumer. By offering customers ideas that are relevant to them based on their DT, rather than bombarding them with too many choices, customer happiness may be further increased [7]. Retailers may use DT technology to transform their offerings into an interactive and iterative process that continually collects data on consumer wants and buying patterns in order to offer them better goods and services [7]. It will be advantageous for both
customers and merchants to combine data from a customer's DT with machine learning to analyse the customer's behaviour [2]. The transportation and supply chain of the retail sector may benefit from DT technology as well. For more accurate and effective inventory management, aggregate planning, and demand forecasting, DTs may be used to monitor and trace items across the supply chain [1, 3]. To collect real-time stock information and evaluate the effectiveness of various shop layouts before deploying them, a French grocery business called Intermarché developed a DT using user data from their brick-and-mortar stores, IoT-enabled shelves, and sales systems. About 83% of merchants are modifying the way they manage their supply chains in response to the COVID-19 pandemic's disruptions, and they are using DTs as a tool to do so [1, 5]. Retailers may adopt new, more effective non-linear strategic sourcing fulfilment models, thanks to DTs' real-time monitoring capabilities. Additionally, DTs may be used to simulate various what-if situations in the virtual environment before making any choices in the real world during emergencies or disruptions, such as the COVID-19 pandemic. BMW and 3M are the forerunner businesses in creating DTs for their distribution networks [6, 7]. Compared to other industries, the deployment of DTs in retailing is relatively young, but there are many advantages, particularly in shops and supply chains.
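The what-if idea can be sketched as a tiny scenario comparison on a supply-chain DT of a single shelf: the same simulation is run under normal replenishment and under a disruption, and the outcomes are compared before any real-world decision is made. All quantities are invented for illustration.

```python
# Illustrative what-if simulation on a shelf's digital twin: count how many
# days the shelf stays stocked under normal versus disrupted replenishment.
# All numbers are invented.

def simulate_days_in_stock(initial_stock, daily_demand, replenish_qty,
                           replenish_every, horizon_days):
    """Count days the shelf is non-empty over the planning horizon."""
    stock, days_in_stock = initial_stock, 0
    for day in range(1, horizon_days + 1):
        if day % replenish_every == 0:
            stock += replenish_qty          # delivery arrives
        stock = max(0, stock - daily_demand)  # demand is lost when empty
        if stock > 0:
            days_in_stock += 1
    return days_in_stock

normal = simulate_days_in_stock(100, 20, 60, 3, 30)
disrupted = simulate_days_in_stock(100, 20, 60, 6, 30)  # deliveries half as often
print(normal, disrupted)  # 30 13 -> the disruption more than halves availability
```

Running both scenarios virtually shows the planner how badly a delivery slowdown would hurt availability before it happens, which is exactly the decision support the text attributes to supply-chain DTs.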
8.14 Remarks

DT technology offers a chance to combine the real and virtual worlds across the globe, which may be used to address the issues faced by various industries, from manufacturing to retail and from aviation to maritime. Undoubtedly, DT technology has advanced since its debut (Figure 8.5); nonetheless, many practical and industrial applications of the technology are still unexplored. Industries that were early adopters of DTs, including manufacturing, are utilising the technology the most by providing better goods and by gathering, analysing, and interpreting data from the DT and feeding it into commercially available goods. DTs may be applied to any product throughout its lifespan, from design through disposal, whether it is being manufactured or being built. Industries have taken advantage of DT features like remote maintenance and monitoring, especially during the COVID-19 pandemic in 2020.
Using the real-time capabilities of DTs, businesses like Siemens have saved $1.6 B through decreased operating and maintenance expenses. This chapter succinctly demonstrates the similarities and differences in how DTs are applied across various industries, as well as the advantages each industry gains. Broadly speaking, the applications in many industries may be summed up as follows (Figure 8.6):
– Remote access, decision-making, optimisation, training, and documentation
– Real-time monitoring, designing, and planning
– Maintenance
– Security
No matter the business, DTs are a useful tool because of their variety of uses. Thirteen sectors were listed as using the technology in this chapter, along with real-world industrial instances of DT applications that propel the respective industries' progress. Although the technology offers numerous benefits, it also has drawbacks: it is a novel technology, and its price and implementation time, security issues, and the lack of standards and legislation are difficulties linked to it. Although the DT has not yet been implemented in some sectors, curiosity about its potential continues to grow in futurists' and technologists' minds, propelling it forward. Because DTs provide so many benefits, from modelling and prediction skills to recording, reporting, and problem-solving, it is crucial to recognise and comprehend their potential in every industry and adopt them for the correct applications.
8.15 Supply Chain in a Pharmaceutical Company

DTs are changing the way supply chains do business by providing a variety of options to facilitate collaborative environments and data-driven decision-making, as well as making business processes more robust. In this section, we look at the design and development of a DT for a pharmaceutical company case study. The technology employed is based on simulators, solvers, and data analytics tools, which are linked in a seamless interface for the company [6]. Typically, supply chains have divided their operations based on stages; each stage plans, executes, and corrects its operations based on a specific view of the supply chain stage in which it is located. ERP systems are the traditional method of sharing and transferring data and information across all links in the company's supply chain, with the goal of coordinating the company's activities through the exchange of data and information among departments and business units. However, many ERP system limitations have been reported, including the inability to respond in real time to the dynamics of changes in orders, inventories,
and eventualities due to supply chain disruptions. Security risks and a lack of integrated decision-making tools such as prediction, optimisation, simulation, and data analytics are also disadvantages of ERP systems. DTs are software representations of assets and processes that are used to improve understanding, prediction, and optimisation; they are made up of a data model, a set of analytics or algorithms, and knowledge. Creating a DT with supply chain organisations enables monitoring and digital control of operations across all system links. A DT has numerous advantages because it captures all of the system's insights: operational and financial information, device configuration, order status, and production orders. Decisions must be made regarding the location of new facilities for the production and distribution of injectable products. Similarly, different operating scenarios for the supply, manufacturing, inventory, and product distribution processes must be modelled and analysed. Clear communication, predictive analytics that anticipate changes or disruptions in the supply chain, and improved collaboration among stakeholders are all required. Customer insight, demand data, business processes, inventory policies, productive capacity, and the locations of available facilities are some of the data that must be uploaded to the DT. The DT proposed in [6] is an operational DT, as it exchanges data between production, storage, and distribution; it also monitors the system's overall performance. This allows for production scheduling and resource allocation, as well as the implementation of decision-making algorithms. The main goal is for the user to be able to interact with the DT in order to obtain information or change operating parameters. The DT decisions are made with a medium- and short-term planning horizon in mind.
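A minimal sketch of such an operational DT, assuming a hypothetical three-link chain (production, storage, distribution): the twin mirrors daily reports from the physical links and answers a planner's end-to-end status query. The class and field names are invented, not taken from the system in [6].

```python
# Hypothetical sketch of an operational supply-chain digital twin that
# mirrors production, storage, and distribution links and exposes an
# end-to-end status view plus an adjustable operating parameter.
from dataclasses import dataclass

@dataclass
class SupplyChainTwin:
    production_rate: int        # units/day; an operating parameter the planner may change
    warehouse_stock: int = 0
    in_transit: int = 0
    delivered: int = 0

    def step_day(self, shipped: int, arrived: int) -> None:
        """Mirror one day of operations as reported by the physical links."""
        self.warehouse_stock += self.production_rate - shipped
        self.in_transit += shipped - arrived
        self.delivered += arrived

    def status(self) -> dict:
        """The planner's end-to-end view across all links."""
        return {"stock": self.warehouse_stock,
                "in_transit": self.in_transit,
                "delivered": self.delivered}

twin = SupplyChainTwin(production_rate=500, warehouse_stock=1_000)
twin.step_day(shipped=400, arrived=0)    # day 1: goods leave the warehouse
twin.step_day(shipped=400, arrived=400)  # day 2: first shipment reaches customers
print(twin.status())  # {'stock': 1200, 'in_transit': 400, 'delivered': 400}
```

On top of such a mirrored state, the scheduling and optimisation algorithms mentioned in the text can be run, and the planner can experiment with `production_rate` virtually before changing the real plant.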
8.16 Smart Transportation System

With reference to a case study discussed by Al-Ali et al. [7], this section demonstrates how a DT model can be used in smart transportation. Figure 8.5 depicts a high-level framework for sensing, communication, and control between a vehicle's physical and virtual spaces. As illustrated in Figure 8.5, various physical sensors in automobiles, such as cameras, radar, gyroscopic sensors, GPS, tyre pressure sensors, speed sensors, and sound sensors, capture real-time parametric values from the vehicles and communicate them to the virtual space layer in real time. The virtual space layer communicates effective control decisions for the ABS, vehicle speed, steering control, and motion planning. In this case study, the authors used four cars of two models, the Toyota Avalon and Camry, to further elaborate the end-to-end conceptual model of a DT in the context of cars. The system collects a set of physical parameters from moving cars, including ABS, transmission, mileage, and tyre status, via
Figure 8.5: A high-level framework for sensing, communication, and control between physical and virtual space for a vehicle [7].
on-board car sensors. The virtual twin in the virtual space layer stores the vehicle's 2D/3D models, bill of materials, and historical data, as shown in Figure 8.6 [7]. Complex big data analytics and ML processing algorithms are used in the data analytics and visualisation layer of the end-to-end conceptual model, as shown in Figure 8.6, to perform automated decision-making based on real-time measured sensor data from the cars and historical data. The data is stored on an off-the-shelf, high-end cluster built with commodity hardware, such as the Hadoop Distributed File System, made up of multiple, geographically distributed data nodes. A MapReduce processing algorithm, for example, can be used.
Splitting: Splitting is the logical division of data across cluster nodes based on the parameters/sensor readings measured by different automobiles. For example, data node 1 of the cluster stores the ABS status for all cars, data node 2 stores the transmission parameter for all cars, data node 3 stores the tyre status, and data node 4 stores the mileage covered by each car.
Mapping: During this phase, data from each node is sent to a mapping function, which returns an aggregated value for each type of car model in relation to a specific parameter. For example, in data node 1, the mapping function returns an aggregated output for ABS for each car model (Avalon and Camry). Similarly, the other data nodes' mapping functions return the sums of the transmission, tyre status, and mileage values for each car model.
Shuffling: During this stage, multiple mapping outputs are consolidated and relevant records are grouped together based on their respective car model. For example,
different parameters from all Camrys and all Avalons are grouped together on different cluster data nodes.
Reduce: In this stage, the outputs from the shuffling stage are combined into a single output based on the stakeholder's status query. In other words, this stage summarises the entire dataset of cars and presents it to stakeholders according to their specific needs.
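The four MapReduce stages described above can be sketched in miniature. The following snippet is an illustrative toy, not the case study's implementation: in practice this logic would run as a distributed Hadoop job, and the sample readings below are invented for demonstration.

```python
from collections import defaultdict

# Hypothetical sensor records: (car_model, parameter, value).
readings = [
    ("Avalon", "mileage", 120.0),
    ("Camry", "mileage", 95.5),
    ("Avalon", "mileage", 80.0),
    ("Camry", "tyre_pressure", 32.0),
]

# Splitting: in a real cluster, records would be partitioned across data
# nodes by parameter; here we simply iterate over one local list.

def map_phase(records):
    # Mapping: emit a ((model, parameter), value) pair per reading.
    for model, param, value in records:
        yield (model, param), value

def shuffle_phase(pairs):
    # Shuffling: group values that share the same (model, parameter) key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: collapse each group into a single aggregate (here, a sum).
    return {key: sum(values) for key, values in groups.items()}

summary = reduce_phase(shuffle_phase(map_phase(readings)))
print(summary[("Avalon", "mileage")])  # → 200.0 (aggregate over all Avalons)
```

The same key/group/aggregate pattern scales out because each phase operates independently per key, which is what allows a real cluster to distribute the work.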
Figure 8.6: Proposed end-to-end digital twin conceptual model for vehicles [7].
Following the processing and analytics phases, the results are communicated to the various levels of stakeholders via dashboard graphs, charts, tables, and reports for data monetisation and visualisation. As shown in Figure 8.7, individual car owners, local dealers, state dealers, country dealers, and car manufacturers all have different monitoring privileges. Evaluation metrics that can be used to validate the proposed model's performance include the accuracy of predicting vehicular sensor failures in real time, and the total latency and throughput of end-to-end communication from the physical space to the virtual space of the DT, and vice versa.
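As a rough sketch of how those evaluation metrics might be computed offline, assuming hypothetical per-message logs of send/receive timestamps and predicted-versus-actual failure labels (all data invented for illustration):

```python
# Hypothetical logs: (sent_at, received_at) in seconds for each
# physical-to-virtual message, and (predicted, actual) failure labels.
messages = [(0.00, 0.12), (1.00, 1.09), (2.00, 2.15), (3.00, 3.08)]
predictions = [(True, True), (False, False), (True, False), (False, False)]

# Total latency: mean one-way delay from physical space to virtual space.
latencies = [recv - sent for sent, recv in messages]
mean_latency = sum(latencies) / len(latencies)

# Throughput: messages delivered per second over the observation window.
window = max(r for _, r in messages) - min(s for s, _ in messages)
throughput = len(messages) / window

# Accuracy: fraction of correct sensor-failure predictions.
accuracy = sum(p == a for p, a in predictions) / len(predictions)

print(round(mean_latency, 3), round(throughput, 3), accuracy)
```

In a deployed system these quantities would of course be collected continuously rather than from a fixed list.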
Figure 8.7: Use-case analytics’ model for vehicular digital twin [7].
8.17 Manufacturing System
The increasing complexity of the order management process reduces a company's ability to remain flexible and profitable. Integrating the manufacturing system's DT into a decision support system for improving order management is a promising way to address these challenges. A key component of the decision support system is an automatic model generator that builds simulation models using information from the manufacturing system's DT. The DT is defined as the sum of all available data, that is, engineering data and operational data, from all elements of the manufacturing system, reflecting the system's historical and actual state in real time. The DT of the manufacturing system, which is a data-driven representation of all of its elements, comprises the manufacturing equipment system, the material flow system, and the manufacturing process system, together with the operating material system, the value stream system, and the human resource management system, each represented in the information layer. Each information system is linked to its physical elements. The DT of the manufacturing system can thus be viewed as an intelligent linkage of the DTs of the manufacturing system's elements. As a result, the DT of the manufacturing system has its own semantic data model describing the relationships between all of these elements.
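The linkage of element twins into a system-level twin via a shared semantic model might be sketched as follows. The class names, subsystems, and relations are illustrative assumptions for this chapter, not the data model of any specific decision support system.

```python
from dataclasses import dataclass, field

@dataclass
class ElementTwin:
    # Data-driven representation of one manufacturing-system element.
    name: str
    subsystem: str              # e.g. "equipment", "material_flow", "process"
    engineering_data: dict = field(default_factory=dict)
    operational_data: dict = field(default_factory=dict)

@dataclass
class ManufacturingSystemTwin:
    # The system twin is the linkage of its elements' twins plus the
    # semantic relationships between them.
    elements: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)  # (source, relation, target)

    def add(self, twin):
        self.elements[twin.name] = twin

    def link(self, source, relation, target):
        self.relations.append((source, relation, target))

    def related_to(self, name):
        # Query the semantic model: which elements does `name` touch?
        return [t for s, _, t in self.relations if s == name]

dt = ManufacturingSystemTwin()
dt.add(ElementTwin("milling_machine_1", "equipment"))
dt.add(ElementTwin("conveyor_A", "material_flow"))
dt.link("milling_machine_1", "feeds", "conveyor_A")
print(dt.related_to("milling_machine_1"))  # → ['conveyor_A']
```

An automatic model generator would traverse exactly this kind of relation graph to emit a simulation model of the current plant configuration.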
References
[1] H. Kagermann, J. Helbig, W. Wahlster, Recommendations for Implementing the Strategic Initiative INDUSTRIE 4.0. Final Report of the Industrie 4.0 Working Group, Forschungsunion, Germany, 2013. Accessed 20.5.2020 [Online]. Available: https://www.din.de/blob/76902/e8cac883f42bf28536e7e8165993f1fd/recommendations-for-implementing-industry-4-0-data.pdf
[2] IDC FutureScape, Worldwide IoT 2018 Predictions, in: Web Conference Proceeding: Tech Buyer, 2017, Doc #US43193617.
[3] Grand View Research, Digital Twin Market Size, Share & Trends Analysis Report By End Use (Automotive & Transport, Retail & Consumer Goods, Agriculture), By Region (Europe, North America, Asia Pacific), and Segment Forecasts, 2018–2025. Dec. 2018. Report ID: GVR-2-68038-494-9.
[4] Markets and Markets, Digital Twin Market by Technology, Type (Product, Process, and System), Industry (Aerospace & Defense, Automotive & Transportation, Home & Commercial, Healthcare, Energy & Utilities, Oil & Gas), and Geography – Global Forecast to 2025. August 2019. Report ID: SE5540.
[5] Gartner Inc., Top 10 Strategic Technology Trends for 2018. October 2017. Accessed 20.5.2020 [Online]. Available: https://www.gartner.com/smarterwithgartner/gartner-top-10strategictechnology-trends-for-2018/
[6] J. A. Marmolejo-Saucedo, Design and Development of Digital Twins: A Case Study in Supply Chains. Available online: https://link.springer.com/article/10.1007/s11036-020-01557-9
[7] A. R. Al-Ali, R. Gupta, T. Zaman Batool, T. Landolsi, F. Aloul, A. Al Nabulsi, Digital Twin Conceptual Model within the Context of Internet of Things, Future Internet 12(10), 2020, MDPI, Special Issue Feature Papers for Future Internet – Internet of Things Section.
[8] C. Semeraro, M. Lezoche, H. Panetto, M. Dassisti, Digital Twin Paradigm: A Systematic Literature Review, Comput. Indus. 130, September 2021, Elsevier.
Chapter 9 Security in Digital Twin
Abstract: Digital twins open up additional opportunities for monitoring, simulating, optimising, and predicting the state of cyber-physical systems (CPSs). Moreover, we argue that a fully functional virtual replica of a CPS can also play a significant part in securing the system. This chapter presents a framework that allows users to create and operate digital twins that closely mirror their physical counterparts. We focus on a novel approach to automatically generating the virtual environment from the specification, taking advantage of engineering data exchange formats. From a security perspective, an identical (in terms of the system's specification) simulated environment can be freely explored and tested by security professionals without risking adverse effects on live systems. Going a step further, the security features built on top of the framework assist security analysts in monitoring the current state of the CPS. We show the feasibility of the framework with a proof of concept covering the automated generation of digital twins as well as the monitoring of security and safety rules.
Computing Classification System Concepts – Security and privacy → Intrusion detection systems – Computer systems organisation → Embedded and cyber-physical systems
Keywords: cyber-physical systems, industrial control systems, digital twin, simulation, security monitoring, AutomationML
9.1 Introduction
9.1.1 What Is a Digital Twin?
In cybersecurity, the term "digital twin" refers to a digital replica of assets, processes, people, places, systems, and devices that can be used for various purposes. Since a digital twin also models the expected physical and logical assets, what-if analyses are possible as well. Haruspex was reportedly the first company to bring the concept of the digital twin to the world of cybersecurity. Digital twins are set to revolutionise the way industry works, moving from close physical management of assets to an increasingly automated, remote way of working based on data. However, the risks are equally great if things go wrong: the threat of cyber attack, supply chain fraud, mistakes, missed maintenance, and other issues all compromise the integrity of the system and erode trust in the data it produces and
https://doi.org/10.1515/9783110778861-009
consumes. A twin that operates on false data is, after all, not a twin. Digital twin systems are fundamentally systems of systems: multiple hardware and software components, physical environments, and actors that communicate and share data to build an understanding of the system's operation and support autonomous decision-making. This calls for a perspective on security and trustworthiness in which risk and responsibility are shared and the actions of one party affect the others. In short, digital twin security is a team sport, and this plays out in both the technical and the business domain. The Digital Twin Consortium is developing and documenting an approach to security and trustworthiness as they relate to the specific and unique features of digital twin systems and their operation. Various other quality resources are available that cover general issues such as device security, network architecture, and process management. This effort aims to introduce the concepts and the kinds of thinking that end users and system integrators need in order to properly assess, adopt, and operate digital twin technologies and products that meet their business's security and trustworthiness needs from all perspectives: cyber risk management as well as regulatory compliance, personal safety, and appropriate levels of investment. The IoT is emerging as a value creator across the board, from manufacturing to healthcare. However, as digital transformation unlocks value, it also exposes cybersecurity vulnerabilities amid a growing threat landscape. Now there is the digital twin, which is both an emerging means of harnessing the power of the IoT and a source of cyber risk.
9.1.2 Internet of Things and Information Safety
The concept of the digital twin is expanding, posing new security challenges. A digital twin is a virtual model of a physical device or system, intended for running simulations. Designers can test and build innovative products and systems without the risks associated with real-world experimentation. For systems already in place, digital twins serve as proxies, so managers can monitor, react to changes, and make improvements without shutting down operations or creating unnecessary risk. In manufacturing, for instance, digital twins can be used to monitor equipment and to prepare proactively for repairs and servicing. Operations managers can check the impact of a change on a large assortment of variables, all without making any changes whatsoever in the physical world. Digital twins are typically created by taking a static 3D computer-aided design model of the product or system and turning it into a dynamic model by including data gathered during the lifecycle of the product (or system).
In the last few years, thanks to advances in data analytics, the Internet of Things (IoT), cloud technology, and artificial intelligence, the digital twin is no longer a niche tool. The market for this technology is expected to reach USD 4.8 billion by 2025. After thriving in the design environment, it is now becoming well established in a wide range of enterprise usage scenarios. Accordingly, the security implications are enormous. Much of this is due to the exploding use of IoT devices and the data they transmit.
9.1.3 The IoT Powers Digital Twin Technology
The IoT is fundamental to digital replication, especially when used for equipment and systems monitoring. Using real-world data from sensors placed on the actual equipment, digital twin technology uses simulation to generate forecasts, thereby anticipating issues before they arise. Sensors drive the data collection that powers a digital twin's functionality; without them there is no live data, and without access to real-time data a simulation is a weak approximation of the real thing. As mentioned above, the IoT gives life (i.e. vital real-time data on a range of factors) to digital twins. The IoT sensing components are, unfortunately, also where cybersecurity vulnerabilities can manifest. Now that digital twinning is growing, adding one more avenue of risk for IoT deployments, security professionals must consider how to secure their IoT-based ecosystems.
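A minimal illustration of the idea that twin-side simulation can anticipate issues from sensor streams: the snippet below flags a reading that drifts too far from an exponentially smoothed baseline. The thresholds and data are invented for demonstration, not taken from any production system.

```python
def drift_alerts(stream, alpha=0.3, tolerance=5.0):
    # Exponentially smooth the sensor stream and flag readings that
    # deviate from the running baseline by more than `tolerance`.
    baseline = stream[0]
    alerts = []
    for i, reading in enumerate(stream[1:], start=1):
        if abs(reading - baseline) > tolerance:
            alerts.append(i)
        baseline = alpha * reading + (1 - alpha) * baseline
    return alerts

# Hypothetical temperature readings from a sensor on real equipment.
temps = [70.0, 70.5, 71.0, 70.8, 79.9, 71.2]
print(drift_alerts(temps))  # → [4]: index 4 deviates sharply
```

A real twin would use a model of the physical process rather than a bare moving average, but the feedback loop (ingest, compare against expectation, alert) is the same.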
9.1.4 Is There No Longer a Divide Between Public and Private Data?
Digital twins depend on data: as much as possible and from as many varied sources as possible. But what happens if previously separate silos of public and private data are stirred together in the mix of a digital twin model? Data silos are intended to keep private information from falling under the control of unauthorised users or, in the event of a breach, the general public. Hackers will unquestionably be searching for ways to exploit any possible security vulnerabilities associated with digital twins. Yet, from the perspective of someone who is hoping to gather insights or create a digital twin, data siloing is a problem, an obstacle. For their purposes, it represents a lack of transparency and a huge barrier to efficiency. The answer is to enable connections between data sources in a security-minded way. These connections must be orchestrated, secured, and
continuously monitored so that your digital twins have lifetime access to all the data they need. Whether or not an organisation uses digital twins, opening up data silos is considered important. In fact, data transparency is often seen as a critical step in the journey of digital transformation, as important as updating or replacing legacy systems or protecting the infrastructure with advanced threat protection. Another challenge with digital twins (and, not coincidentally, with digital transformation) is data interoperability.
9.1.5 Challenges in Interoperability with Digital Twins
As data travels from devices and equipment in the field (or on the assembly line) to the software used in digital twin systems, there is a risk of acquiring data you cannot trust or use. In large project contexts, multiple digital twins may be deployed, and a given element may even appear in several distinct digital twins at different levels:
1. Element
2. Quality
3. Structure
4. Method
This hierarchy of digital twins produces alternative viewpoints, incredibly valuable insight, but may also create different kinds of data and complex relationships between datasets that result in data interoperability issues. Component-based digital twins could feed asset-based digital twins; you must be able to trust your data, or the whole idea of producing a twin model of the real thing goes completely out the window. Chief information officers (CIOs) should ensure that access to data is available from a wide range of sources along the entire product value chain by setting up the appropriate schema definitions, identity management, and access restrictions. A further data interoperability problem is that, over long periods of time, data formats and data storage can evolve. As the digital twin accumulates growing historical data, the risk of ending up with unreadable data is high. CIOs should prepare for the long term when setting up their digital twin models, keeping data format and storage choices consistent over the lifetime of the twin.
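One common mitigation for format drift over a twin's lifetime is to version every stored record and upgrade old versions on read. The sketch below shows the generic pattern under assumed field names; it is not a prescription from any particular platform.

```python
# Records carry an explicit schema version so that data written years
# apart can still be read consistently.
def upgrade(record):
    # Apply migrations step by step until the record reaches the
    # current schema version (2 in this illustrative example).
    record = dict(record)
    if record.get("version", 1) == 1:
        # v1 stored a bare reading; v2 splits value and unit.
        record["value"] = record.pop("reading")
        record["unit"] = "celsius"  # assumed default for legacy data
        record["version"] = 2
    return record

old = {"version": 1, "sensor": "temp_01", "reading": 71.3}
new = {"version": 2, "sensor": "temp_02", "value": 69.8, "unit": "celsius"}
print(upgrade(old)["unit"], upgrade(new)["value"])  # → celsius 69.8
```

Keeping migrations explicit in code means decade-old historical data remains legible to the current twin without rewriting the archive.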
As with all data systems, so it is with digital twins: your insights are only as good as your data.
9.1.6 Advanced Digital Transformation and Twins
Digital twin technology may seem cutting-edge, but it is merely one of the most recent ways of harnessing the power of the IoT and the data it generates. In truth, nothing here is new: model simulation has been around for a long time. What matters is that we now have the IoT, data analytics, AI, and the cloud to run significantly larger models for better results and more value. Organisations are currently using digital twins to monitor operational performance in manufacturing, and we are now seeing the digital twin concept extended to enterprise-wide benefits across the full value chain. All things considered, the security implications are nothing new either: data interoperability, IoT security, data silos, and data privacy are all challenges we have known before. It is simply that there is now much more reason to address them.
9.1.7 The Different Sides of Digital Twin Security
The security of digital twins must be considered from two sides: the security of the digital twin itself and of the infrastructure on which it runs, and the security of the real-world object, system, or process it represents. Although digital twins can help secure IoT devices and processes, the twins themselves can be exposed to risk. Digital twins are software representations that are typically deployed on standard computers and use cloud services for data collection and processing. Software, computers, and cloud computing all have known threats, but those threats can be mitigated with the appropriate measures.
9.1.8 What Are the Threats?
With digital twins being such close, synchronised representations of the real world, unintended access could give hackers an opportunity not only to access the digital twin, but also to conduct penetration testing against it before attacking the physical system it represents. If hackers gain access, loss of intellectual property and data, loss of control, and potentially huge costs resulting from downtime, defective output, or data being held to ransom are the main dangers.
Thus, to limit digital twin security risks, it is essential to have a security strategy baked in and implemented from the ground up. A systematic cybersecurity framework will help eliminate the gaps that can otherwise appear between physical and digital security, mirroring the system and ensuring continuous hardening of the software. There are four security essentials you need to take care of. If you are building and deploying a digital twin, you need it to be fully trustworthy, as decisions based on faulty digital twin results can be dramatically costly for your organisation. Here are the four essentials you and your digital twin supplier need to consider:
1. Information security and cybersecurity
The digital twin build and deployment must be both secure and dependable. Your supplier should secure the data, the access to the digital twin, and the real product it is connected to, using strong governance practices, authentication, and encryption. Look for two-factor or even multi-factor authentication (2FA or MFA) of the person, in addition to their device, or use hardware keys. Moreover, you need to encrypt data in transit; increasingly, it is wise to encrypt data at rest and in use as well.
2. Resilience and dependability
"Dependability" encompasses availability, reliability, safety, confidentiality, integrity, and maintainability. These are the qualities that digital twins must be able to deliver consistently so that the results can be relied upon for business decisions. However, Laprie [3] suggests that the more pressing need is resilience, which he characterises as "the persistence of service delivery that can justifiably be trusted, when facing changes". This is central to digital twins, which need to deliver a high level of service for correct and dependable results.
The key threats to dependability are faults, failures, and errors [4]. Thus, you want to ensure that your digital twin supplier implements and maintains the means for dependability (fault prevention, tolerance, removal, and forecasting) so you can be assured of consistent, dependable results you can trust.
3. Privacy
Data privacy is central to any data collection and processing. Where data is personally identifiable, such as location or medical information, its use must comply with all applicable privacy regulations such as Europe's GDPR and California's CCPA. Even where the data is not personally identifiable, however, it must be kept private to protect intellectual property. Masking, redaction, differential privacy, encryption, and lifecycle management are only a
couple of the techniques that should be available to you in your digital twin deployment to keep data private.
4. Safety
Finally, you need to ensure the physical safety of the digital twin. Safety may sound basic, but it is a critical part of protecting your digital twin and the real system it simulates. There need to be physical locks on the doors to the data centre and to computer and server rooms, and IT hardware should not be at risk of being damaged by people, by other equipment, or through means such as overheating. Remember the basics. There are also two security certifications to look for. On top of all the assurances mentioned above, your digital twin supplier should hold an information security certification to underline the trustworthiness of their data handling and of the digital twin. ISO 27001 is an international standard for information security management. Certification bodies audit organisations to verify compliance and then grant official accreditation or certification; it requires the organisation to establish, implement, maintain, and continually improve an information security management system [5]. A further certification worth looking for is SOC 2. This is widely regarded as a gold standard for security, as the attestation depends on the organisation achieving a high level of security, availability, processing integrity, confidentiality, and privacy. Organisations with the SOC 2 attestation have gone to additional lengths to safeguard your digital twin and its data. As with all security measures, nothing is invulnerable. Even so, making sure your digital twin provider is certified and supports the necessary verification methods will increase your confidence in the supplier and in the outcome of the digital twin simulation for autonomous business decisions.
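Encryption in transit is normally delegated to TLS, but the integrity side of point 1 can be illustrated with a keyed hash over stored twin telemetry. The snippet uses Python's standard hmac module; the hard-coded key is deliberately simplified for illustration and is not a substitute for a real key management system.

```python
import hashlib
import hmac

SECRET_KEY = b"demo-key-use-a-kms-in-production"  # illustrative only

def sign(payload: bytes) -> str:
    # Attach an HMAC-SHA256 tag so tampering with stored telemetry
    # can be detected when the record is read back.
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign(payload), tag)

record = b'{"sensor": "temp_01", "value": 71.3}'
tag = sign(record)
print(verify(record, tag))              # → True: record is intact
print(verify(b'{"value": 99.9}', tag))  # → False: record was altered
```

A twin that refuses unverifiable telemetry is one concrete defence against the "twin that works on false data" problem raised earlier in the chapter.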
9.1.9 Your Action Points
Now that you know the risks of not having a secure digital twin deployment, here is the checklist you or your supplier need to implement:
1. Implement data security and network protection
2. Use 2FA, MFA, or hardware keys for authentication
3. Use encryption for data in transit, and ideally also at rest and in use
4. Increase resilience and dependability with fault prevention, tolerance, removal, and forecasting
5. Keep sensitive data private
6. Adhere to security regulations
7. Use masking, redaction, differential privacy, or encryption where appropriate
8. Ensure the physical security of computing assets
9. Acquire a globally recognised security certification
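The masking and redaction of item 7 can be as simple as obscuring identifying fields before data leaves its silo. The field names below are invented for illustration; real deployments would drive this from a data classification policy.

```python
# Fields classified as personally identifiable (illustrative).
SENSITIVE = {"owner_name", "location"}

def redact(record, sensitive=SENSITIVE):
    # Replace sensitive fields with a fixed placeholder while keeping
    # the operational fields the twin actually needs.
    return {k: ("REDACTED" if k in sensitive else v)
            for k, v in record.items()}

raw = {"owner_name": "A. Driver", "location": "51.5,-0.1", "mileage": 120.0}
print(redact(raw))
# → {'owner_name': 'REDACTED', 'location': 'REDACTED', 'mileage': 120.0}
```

Differential privacy and tokenisation follow the same shape: transform at the boundary, so downstream analytics never see the raw identifiers.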
Build in security and governance from the start when building and deploying digital twins, so that you have confidence in the end products and outcomes. With a digital twin, you can take your product, process, or system to a higher level of optimisation and increase your efficiency and profit, so it is worth getting the security right at the beginning so that none of your hard work is put at risk.
9.1.10 Security by Design
A digital twin enables you to assess and manage cyber risk even while your system is still at the planning stage. You can predict and manage the risk whenever changes to your system must be implemented, before deploying the change. A digital twin removes doubt about where your technology investment should go.
9.1.11 NIS/GDPR Compliance
A digital twin supports compliance with the latest international regulations by enabling you to inventory every asset in your system and to calculate a quantitative cyber risk, even before the system has been deployed.
9.2 Cyber Security by Solution
9.2.1 Cyber Security Digital Twin
A digital twin makes it possible to penetrate a system 100,000 times without disrupting its normal operation.
– GDPR/NIS compliance
– Security by design
9.2.2 Predictive Cyber Security
Many vendors arrive once the game is over; a predictive approach shows up beforehand.
– Static cyber stress curves
9.2.3 Real-Time Assessment and Remediation
The digital twin knows your system in depth. Exploit its knowledge to stop an ongoing attack in real time.
– Dynamic cyber stress curves
9.2.4 Continuous Cyber Risk Assessment
You can quickly re-evaluate the risk of your system simply by updating your digital twin with newly discovered vulnerabilities.
9.2.5 Cyber Risk Remediation and Countermeasures
Good remediation management is the best way to avoid cyber risk. The twin yields the smallest set of countermeasures that minimises the risk. Guaranteed.
9.2.6 Cyber Security Forensics
Sometimes not everything goes well. The twin works as a time machine, exploiting present data to uncover undetected attack paths and dormant malware.
9.2.7 Zero-Day Simulation and Defence
Shield your system with what-if analyses and zero-day simulations; both are more straightforward with a digital twin.
9.2.8 IT/OT Cyber Security
The digital twin eliminates the weaknesses that arise where the OT world meets the IT world.
9.2.9 Cyber Security Support
Such support teams are composed of specialists holding master's or doctoral degrees in a wide range of computer science disciplines: security, operational research, algorithms, big data, and networking.
9.2.10 Overview
The terms smart manufacturing and Industry 4.0 refer to the vision of a highly skilled, autonomous, and flexible production process. At the core of both concepts are cyber-physical systems (CPSs). A CPS is a system that combines computational (e.g. computing hardware/software) and physical (e.g. actuators) components, allowing the system to interact with the real world [4]. Furthermore, CPSs can integrate networking capabilities, a feature that is a cornerstone of interconnected and autonomously operating manufacturing systems. However, increased digitalisation and connectivity open up new attack vectors that may not only put an organisation's assets at risk but could also endanger human life. This is especially significant for industrial control systems (ICSs), a subset of CPSs, where safety has traditionally been the primary concern. In this context, it should be noted that security issues can have immediate safety implications. Indeed, the absence of adequate measures to secure CPSs in critical infrastructures (e.g. sewage treatment plants [30]) may have serious consequences for public safety [21, 22]. Security auditing, monitoring, and intrusion detection, as defined in industrial standards and guidelines, are important measures for hardening industrial environments. Because of the criticality of running systems, experimenting in the production environment is not recommended. The set-up and maintenance of test environments, on the other hand, is expensive and time-consuming, often leading to incomplete and outdated environments. A similar issue arises for the evaluation of research results in this area.
For instance, intrusion detection in CPSs has attracted attention over recent years; however, the evaluation datasets are frequently not published and the cyber-physical set-up cannot be reproduced most of the time. This hinders adoption and independent comparison [15, 24]. In this chapter, we propose a novel framework, named CPS twinning, to build and maintain fully functional digital twins of CPSs. The term "digital twin" was coined in [29] and describes the use of holistic simulations to virtually mirror a physical system [26]. Adopting such a concept could enable operators to monitor the production process and to test changes in a virtual, isolated environment, thereby strengthening both the security and the safety of CPSs. A recent report [27] suggests that a simulated replica of the physical process may be used for security purposes, but this raises new questions about its creation and management. Meanwhile, the manual creation of a virtual environment for digital twins is time-consuming and error-prone if the environment must be built entirely from the specification by hand. Our approach, in contrast, is efficient and reusable and guarantees an identical set-up. Ideally, the specification of the CPS is already defined and maintained as part of the systems engineering process [19, 20] through standardised data formats such as AutomationML (AML) [10]. We consider two main modes of operation for the virtual
environment: either (1) a simulation mode, operating independently of the physical environment and providing the possibility to monitor and analyse a virtual clone without risk, or (2) a replication mode, replaying the events from the physical environment for visualisation and analysis. On top of this virtual representation, various security features can be established. For example, security and safety rules expressed as part of the specification can be automatically checked against the digital twins. Furthermore, new physical devices can be connected and tested in the virtual environment without affecting production systems. Security testers also have the possibility to freely explore and attack the virtual replica of the production set-up. In this approach, security is most effective when it is seamlessly integrated into the whole production lifecycle, starting from the engineering phase. The contributions of the chapter can be summed up as follows:
– We propose a framework that provides a security-aware environment for digital twins.
– We demonstrate the viability of the proposed framework with a prototype implementation that supports the virtual replication of the network topology, PLCs with their control logic, human-machine interfaces (HMIs), and physical devices (e.g. motors). We integrated two existing open-source tools to accelerate the development of this prototype.
– We show how virtual environments for digital twins can be generated automatically from the specification.
– We provide security-related use cases for digital twins in CPSs and demonstrate how security and safety rules can be validated.
The rest of the chapter is organised as follows: first, in Section 2, we present four use cases that motivate the need for a security-aware virtual environment for digital twins. Section 3 outlines the essential components of CPS twinning.
Section 9.7 describes related work, following Section 4, which presents an implementation of the conceptual design of CPS twinning. Finally, in Section 6, we give concluding remarks and discuss future work. In this section, we present possible use cases for digital twins to improve CPS security and to assist security analysts in defence planning.
Detection of Intrusions: In a literature review conducted by Mitchell and Chen [23], the authors concluded that behaviour-specification-based intrusion detection may be a promising strategy for revealing intruders while keeping the false-positive rate at a minimum. Intrusion detection systems (IDSs) that use this approach attempt to identify hostile activity by detecting deviations from a predefined model of benign behaviour [23]. By employing a detailed specification of the production system as a template for digital twins, operators can check the system's behaviour and flag deviations from its specification. This could, for example, include the detection of unknown devices, unspecified connections, and changes in the control logic. Furthermore, specified security and safety rules could be monitored inside the virtual environment. Another scenario is the
Chapter 9 Security in Digital Twin
use of additional, independent sensors to detect manipulations and defects [12, 18]. While the integration, monitoring, and correlation of new sensors in a production environment is not trivial, this can be handled entirely within the digital twin. After placing the additional sensor in the physical environment, its readings are replicated to its digital twin and correlated with measurements from other sensors.
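A minimal sketch of this correlation step might look as follows. The record format, tolerance, and window size are assumptions chosen for illustration; they are not part of the framework's actual API:

```python
# Hypothetical sketch: correlate an independent reference sensor with the
# replicated process sensor inside the digital twin. A sustained deviation
# beyond a tolerance hints at manipulation or a defect.

def detect_deviation(process_readings, reference_readings, tolerance=2.0, window=3):
    """Return start indices where the sensors disagree for `window` consecutive samples."""
    disagreements = [abs(p - r) > tolerance
                     for p, r in zip(process_readings, reference_readings)]
    alerts = []
    for i in range(len(disagreements) - window + 1):
        if all(disagreements[i:i + window]):
            alerts.append(i)
    return alerts

# Example: the process sensor is stuck at 50 while the reference keeps rising.
process = [50.0, 50.0, 50.0, 50.0, 50.0]
reference = [50.1, 55.0, 60.2, 65.1, 70.3]
print(detect_deviation(process, reference))  # -> [1, 2]
```

A single outlier would not raise an alert here; only a sustained disagreement does, which keeps the false-positive rate low in the spirit of the specification-based approach above.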
9.2.11 System Testing and Simulation
Digital twins can be used to perform system tests and simulations. Security experts can investigate a clone of the production system instead of relying on documentation and theoretical attack vectors. Moreover, real devices can be tested by first connecting them to the virtual environment. For example, if a real device is connected as a replacement for an existing digital twin of a PLC, the operator can observe the behaviour of the new PLC within the virtual environment. Modules to automatically record and compare specific configurations could be a further extension. In addition, experimenting with configurations in a virtual environment provides the possibility to identify problems and inconsistencies early on, without costly setups.
9.2.12 Detecting Misconfigurations
Another use case is to detect mismatches between the real environment and the maintained specification. If, for example, a physical device is added to the real environment without adapting the specification, the mismatch can be detected and reported. The same applies to the situation where a physical device is not consistent with its virtual representation owing to misconfiguration or manipulation by an attacker.
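The mismatch detection described here can be sketched as a comparison of the specified and the observed device inventories. The dictionary layout and device attributes below are illustrative assumptions:

```python
# Illustrative sketch: compare the device inventory observed in the physical
# environment against the engineering specification. Devices present but not
# specified, devices missing, and devices whose configuration deviates are reported.

def find_mismatches(spec, observed):
    """spec/observed: dicts mapping device name -> configuration dict."""
    unspecified = sorted(set(observed) - set(spec))
    missing = sorted(set(spec) - set(observed))
    misconfigured = sorted(
        name for name in set(spec) & set(observed)
        if spec[name] != observed[name]
    )
    return {'unspecified': unspecified, 'missing': missing,
            'misconfigured': misconfigured}

spec = {'PLC1': {'ip': '192.168.0.1'}, 'HMI1': {'ip': '192.168.0.2'}}
observed = {'PLC1': {'ip': '192.168.0.1'}, 'HMI1': {'ip': '192.168.0.9'},
            'Rogue1': {'ip': '192.168.0.66'}}
print(find_mismatches(spec, observed))
```

In this example, the rogue device and the HMI's deviating IP address would both be flagged, matching the two situations described in the paragraph above.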
9.2.13 Penetration Testing
Penetration testing of industrial control systems must be carefully planned, since network scans, for example a ping sweep, may make the system behave in an unexpected way, potentially harming manufacturing equipment and human safety [11]. Accordingly, maintenance windows are ideally chosen for penetration tests in the live environment. However, temporarily stopping or restricting the operation of the plant is expensive and often not feasible, especially for critical infrastructures. Building test environments specifically for conducting penetration tests may likewise not be a practical alternative, considering the cost and time implications. Using a virtual reflection of the production environment, security analysts can identify weaknesses and hence test countermeasures before implementing them in production.
9.2.14 Framework
This chapter describes DAF twinning, a digital-twin framework that supports the use cases presented in the previous subsections. The architecture of the framework is built from two basic modules, as shown in the high-level view in Figure 9.1: the generator and the virtual environment. The generator module uses engineering and domain-specific knowledge to create the virtual environment. Once the digital twins and the network topology have been generated, the virtual environment can operate in two modes. First, the virtual environment provides a simulation mode in which the digital twins operate independently of the physical environment. Second, the replication mode records events from the real world, such as network traffic, and copies them into the virtual environment. On top of the two modes, the framework includes a number of modules that may be activated on demand, such as monitoring, security analysis, and intrusion detection. We suggest a multi-module construction strategy, since the framework should be extensible. In Section 9.6, we show a first basic implementation of DAF twinning. A detailed description of each part is provided in the following.
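The two-part architecture and the two operating modes can be sketched at a very high level as follows. All class, module, and twin names are invented for illustration and do not reflect the framework's real API:

```python
# High-level sketch: a virtual environment holding the generated twins,
# switchable between simulation and replication mode, with optional modules
# (e.g. monitoring, intrusion detection) activated on demand.

class VirtualEnvironment:
    def __init__(self, twins):
        self.twins = twins          # digital twins generated from the spec
        self.mode = None
        self.modules = []           # e.g. monitoring, safety analysis

    def enable(self, module):
        self.modules.append(module)

    def start(self, mode):
        if mode not in ('simulation', 'replication'):
            raise ValueError('unknown mode: ' + mode)
        self.mode = mode
        return f"{len(self.twins)} twins running in {mode} mode"

env = VirtualEnvironment(twins=['PLC1', 'HMI1', 'Motor1'])
env.enable('safety-analysis')
print(env.start('simulation'))  # -> "3 twins running in simulation mode"
```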
9.3 Knowledge Input
An essential part of the presented idea is to use the specifications of cyber-physical systems created throughout the engineering phases. Thus, the virtual environment can be generated on the basis of existing artifacts instead of being built from scratch. The advantage of this approach is that the generation of the virtual environment is automatic, replacing tedious manual work with an efficient and adaptable solution. Moreover, this method produces reproducible results, meaning that the virtual environment can be rebuilt identically at any time, as long as the specification exists. Any change to the physical environment that is made without updating the specification creates discrepancies with its virtual counterpart. Consequently, the specifications of CPSs ought to be kept in sync, fostering a traceable and well-documented environment. Creating and maintaining a detailed specification of a cyber-physical system that enables the generation of a complete virtual replica involves extra effort. Ideally, the organisation already uses standardised languages (such as AML) to design, exchange, and preserve their setups.
9.3.1 Engineering Knowledge
Engineering knowledge describes information that is unique to the environment at hand. It comprises the design of the complete process and system components, network information, and the internal behaviour of CPSs. On the basis of the process definition, the topology of the environment and logical connections between individual components can be derived. Typically, the topology is
Figure 9.1: Architecture of the DAF twinning framework.
modelled by specifying the components (e.g. name, product version, and vendor), their corresponding attributes (e.g. input/output channels), and their configuration (e.g. IP and MAC addresses). Furthermore, defining hierarchical relationships between components is also an important aspect to consider when modelling a system. In particular, devices can be inspected during operation to verify whether they adhere to the specification, for example, by confirming that a system offers only those services that were defined in advance. Additionally, this modelling approach enables DAF twinning to enforce fine-grained policies and constraints on every hierarchical level. Another aspect to consider is the explicit definition of the communication path from one host to another via logical connections and endpoints [3]. Besides using these details to generate the network setup of the virtual environment, this knowledge can also serve as a basis for implicit security rules. More specifically, the framework can monitor the traffic flow and check whether network packets contain permitted addressing and protocol information. For instance, a communication path between a human-machine interface and a programmable logic controller could be defined in the specification, including the application-layer protocol used (e.g. Modbus) and the basic details of the protocol data unit of the request (e.g. the function code). Thus, network-level whitelist checking can be derived from the explicit modelling of the network layout. Furthermore, control logic can also be attached to device specifications. For instance, a program implemented as a sequential function chart (SFC), one of the programming languages defined in IEC 61131-3, can be referenced as a sequencer block of a PLC instance.
Since SFC is a standardised program design language, the referenced code can often be transferred directly to the physical programmable logic controller for execution, or at least converted to vendor-specific dialects. Therefore, no extra effort from an engineering point of view is incurred, and realistic simulations can be ensured, since the control logic deployed in the two environments (physical and virtual) is identical. Including process knowledge is also valuable for security modules. Finally, we emphasise the explicit integration of safety and security rules into the specification. During process design, engineers can explicitly state the safety rules that define normal operation. This knowledge enables the framework to determine whether the process is in a safe state, thereby complementing safety-equipped systems. For instance, value bounds for device variables can be stated. Conditional statements as well as comparisons between variables from different digital twins allow more complex rules. For example, a rule could be specified that restricts the network communication between a human-machine interface and a programmable logic controller, depending on the PLC's internal state.
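A tiny evaluator over such rules might look as follows. The rule encoding (value bounds and a conditional cross-twin communication restriction) is an assumed format chosen to mirror the two rule kinds just described; variable names and states are invented:

```python
# Sketch: evaluate safety rules extracted from the specification against the
# twins' current state. Two rule kinds are shown: a value bound ('max') and a
# conditional communication restriction ('forbid_comm_if').

def check_rules(state, rules):
    """state: flat dict of twin variables; returns violated rule descriptions."""
    violations = []
    for rule in rules:
        if rule['kind'] == 'max':
            if state[rule['var']] > rule['limit']:
                violations.append(f"{rule['var']} exceeds {rule['limit']}")
        elif rule['kind'] == 'forbid_comm_if':
            # e.g. forbid HMI -> PLC traffic while the PLC is in a given state
            if state[rule['var']] == rule['value'] and state[rule['comm']]:
                violations.append(f"communication {rule['comm']} forbidden")
    return violations

rules = [
    {'kind': 'max', 'var': 'PLC1.velocity', 'limit': 60},
    {'kind': 'forbid_comm_if', 'var': 'PLC1.state', 'value': 'maintenance',
     'comm': 'HMI1->PLC1'},
]
state = {'PLC1.velocity': 75, 'PLC1.state': 'running', 'HMI1->PLC1': True}
print(check_rules(state, rules))  # -> ['PLC1.velocity exceeds 60']
```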
9.3.2 Domain Knowledge
Domain knowledge refers to information that is not tied to a specific real environment, meaning that it can be defined once and then shared across several setups. Industrial hardware manufacturers, for example, could specify each of their devices and then provide these artifacts as device templates. These templates could include domain-specific knowledge from mechanical, electrical, and control engineering experts. Engineers can then reference these prepared templates or use them as a starting point, saving unnecessary labour. Additionally, engineers and security professionals can provide safety and security rules and update them regularly. For example, the specification of a tank could include the maximum permissible fill level, whereas the specification of a PLC could predefine criteria for verifying the firmware's legitimacy. Previous research on intrusion detection for CPSs has also focused on attack detection based on physical system properties. Here, physics-based rules can be explicitly defined in the specification and thereby characterise input that is relevant for security and safety analyses. Consider an attack targeting a chemical process with the goal of overflowing a tank. The attacker attempts to manipulate the fluid-level sensor measurements in order to trick the control system into overfilling the tank. By monitoring past fill and drain control activities, the framework can calculate the expected fluid level within the tank using basic fluid dynamics and compare it with the sensor data. Irregularities would then point to malicious behaviour or defects.
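The tank scenario can be made concrete with a small plausibility check. The flow rates, tolerance, and event format are assumed numbers for illustration; a real model would come from the domain expert's specification:

```python
# Sketch: from logged fill/drain valve activity, compute the expected fluid
# level and compare it with the reported sensor value. A large discrepancy
# points to a manipulated sensor or a defect.

FILL_RATE = 2.0   # litres per second while the fill valve is open (assumed)
DRAIN_RATE = 1.5  # litres per second while the drain valve is open (assumed)

def expected_level(initial, events):
    """events: list of (duration_s, fill_open, drain_open) tuples."""
    level = initial
    for duration, fill_open, drain_open in events:
        level += duration * (FILL_RATE * fill_open - DRAIN_RATE * drain_open)
    return level

def is_plausible(sensor_value, initial, events, tolerance=1.0):
    return abs(sensor_value - expected_level(initial, events)) <= tolerance

events = [(10, True, False), (4, False, True)]   # fill for 10 s, then drain for 4 s
print(expected_level(0.0, events))               # 10*2.0 - 4*1.5 = 14.0
print(is_plausible(5.0, 0.0, events))            # a spoofed low reading -> False
```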
9.4 DAF Twinning Framework
The DAF twinning framework represents the main contribution of this chapter. It comprises a generator, the virtual environment, and components that interface with the digital twins. Each part is explained further in the following subsections.
9.4.1 Generator
The generator is responsible for transforming the specification into a virtual environment. As a first step, the specification is parsed to extract the topological architecture of the network, the devices with their corresponding configuration, and the security and safety rules. After that, the virtual environment is generated: the layers of DAF twinning shown in Figure 9.2 are realised by constructing the virtual objects and applying their respective configuration, covering the emulation of virtual components' programs, the simulation of physical components, the industrial protocols, and the network layer with the logical and physical connections of the components. Finally, the parsed rules are stored in an abstract representation for later evaluation by the security and safety analysis module.
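The parsing stage can be sketched with the standard library's XML support. Note that real AML is considerably richer; the miniature XML schema below is invented purely for illustration:

```python
# Simplified sketch of the generator's first stage: parse a specification,
# extracting devices with their configuration and the stated rules.
import xml.etree.ElementTree as ET

SPEC = """
<plant>
  <device name="PLC1" ip="192.168.0.1" role="plc"/>
  <device name="HMI1" ip="192.168.0.2" role="hmi"/>
  <rule var="PLC1.velocity" max="60"/>
</plant>
"""

def parse_spec(xml_text):
    root = ET.fromstring(xml_text)
    devices = [dict(d.attrib) for d in root.findall('device')]
    rules = [dict(r.attrib) for r in root.findall('rule')]
    return devices, rules

devices, rules = parse_spec(SPEC)
print([d['name'] for d in devices])  # -> ['PLC1', 'HMI1']
print(rules[0]['max'])               # -> '60'
```

The extracted device list would then drive the construction of the virtual objects, while the rules are handed to the analysis module, as described above.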
Figure 9.2: Layers of DAF Twinning.
9.4.2 Virtual Environment
This part forms the core of DAF twinning and provides the replicated network infrastructure as well as a runtime for virtual devices. As a faithful reproduction of the physical process is central to the use cases presented in Section 9.2, the replicated CPSs should correspond to their physical counterparts as closely as possible. In particular, this includes the control logic, network protocols, device types, and the physical equipment. Figure 9.2 depicts an overview of the framework design. The bottom layer provides a foundation for the network infrastructure. On top of it are the industrial protocols that can be used by the digital twins. The digital twins reside one layer above the industrial protocols and can either be emulated by implementing the control logic, or simulated if the twin should reproduce only a physical component. While the "cyber" part of a CPS can be reproduced identically in the virtual environment, physical components, such as sensors or actuators, as well as their interaction with the real world, must be simulated. As suggested by the author [2], this can be realised by implementing a file-based capability for sensor and actuator values, or even by incorporating hardware-in-the-loop setups. In Figure 9.2, this is represented as the physical component layer, coexisting with the virtual component layer.
9.4.3 Simulation and Replication
After the automatic generation of the virtual environment according to the specification and the provisioning of the digital twins, two modes of operation become available, i.e., replication and simulation. In simulation mode, the digital twins run independently of their physical counterparts. Similar to virtual commissioning, this mode permits users to examine process changes, test devices, or even optimise manufacturing operations. Moreover, security experts can use this mode to perform security tests within the virtual environment, thereby avoiding the potential consequences of testing on a live
system. The replication mode, on the other hand, mirrors data from the physical environment. Potential data sources to reflect in the virtual environment are log files, network communication, and sensor measurements from the physical environment. Moreover, sensors could also be connected directly to the DAF twinning framework. Through data diodes or unidirectional gateways, the direction of the data flow could be restricted to avoid adverse effects on the physical environment. It should also be noted that a direct connection between the physical and virtual environment is not a mandatory requirement. For instance, data could be collected offline for a certain period of time and afterwards used in replication mode. However, to guarantee that the virtual replica continuously mirrors the physical environment, the use of an automated data replication technique is preferred.
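The one-way character of replication (mirroring the data-diode idea) can be sketched as follows. The event record format and tag names are assumptions for illustration:

```python
# Sketch of replication mode: readings from the physical environment are
# consumed strictly one-way and applied to the corresponding twin's state;
# nothing is ever written back towards the physical side.

def replicate(twin_state, physical_events):
    """Apply physical-side events to the twin; never write back."""
    for event in physical_events:
        twin_state[event['tag']] = event['value']
    return twin_state

events = [  # e.g. collected offline and replayed later
    {'tag': 'PLC1.velocity', 'value': 19},
    {'tag': 'PLC1.velocity', 'value': 21},
    {'tag': 'Motor1.running', 'value': False},
]
twin = replicate({}, events)
print(twin['PLC1.velocity'])  # -> 21 (the most recent value wins)
```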
9.4.4 Monitoring
Monitoring the process at hand is an important part of the framework. Hence, the monitoring module provides an interface to assess the process state. In particular, sensor values and actuator states can be collected and prepared for analysis and visualisation. Thus, the monitoring functionality of DAF twinning can help users gain a deeper understanding of the environment at hand, ensure its proper operation, and facilitate troubleshooting when problems occur. Although this module is especially useful in replication mode, where the primary objective is to monitor the physical process, insights into the simulated environment can also be of value. More concretely, monitoring the physical process in replication mode and then switching to simulation mode allows a particular state to be explored further. This approach may provide insight into the root cause that led to an unexpected behaviour.
9.4.5 Device Testing
Owing to the realistic virtualisation of the physical environment, DAF twinning facilitates virtual commissioning. This opens up the possibility to test physical devices by integrating them into the virtual environment. The process states as well as the behaviour of the system under test can be monitored to verify that the system is working as expected. While setting up an identical physical test environment is costly and time-consuming, this approach may be considered an attractive option for testing the replacement of devices and for performing integration tests.
9.4.6 Security and Safety Analysis
As mentioned in Section 9.3, security and safety rules can be expressed either implicitly or explicitly in the specification. After these rules have been extracted from the specification, this module performs an analysis during operation to detect abnormal states of the process in the virtual environment. Note that in replication mode, the physical environment is mirrored to its virtual counterpart; thus, it can be assumed that anomalies arise identically. Besides detecting abnormal process states during operation, this approach also provides the possibility to run simulations in the virtual environment to test the setup against violations of the specified rules. In contrast to a safety instrumented system (SIS), the framework can detect abnormal conditions by correlating the states of the digital twins. It has access to all states and events within the virtual environment and can therefore also monitor state changes over time. With this information, relationships between variables can be analysed to detect violations of the defined safety rules. Moreover, sensors that are independent of the process at hand can be used as an additional data source to report their view of the system's state. The benefit of this approach is that the framework provides a holistic perspective on the physical process. With a focus on security, the digital twins and their specification can serve as a foundation for a behaviour-specification-based IDS. The IDS could take the twins' state as a primary input for analysis (host-based), but also inspect the network traffic (network-based). In contrast to a behaviour-based IDS, this approach does not require a training phase and may yield a low false-positive rate [23], provided that the specification is correct and the virtual environment is consistent with the physical one.
The specified process knowledge could also help to detect semantic attacks on control systems, as, for example, discussed in [6, 25].
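The network-based side of such an IDS could use the whitelist derived from the specified communication paths (cf. Section 9.3.1). A minimal sketch, with an illustrative packet representation and assumed field names, might look like this:

```python
# Sketch of specification-derived network whitelisting: only communication
# paths (and, here, Modbus function codes) stated in the specification are
# allowed; anything else is flagged.

ALLOWED = {
    # (src, dst, protocol): permitted function codes
    ('192.168.0.2', '192.168.0.1', 'modbus'): {3, 16},
}

def packet_allowed(pkt):
    key = (pkt['src'], pkt['dst'], pkt['proto'])
    return key in ALLOWED and pkt['func'] in ALLOWED[key]

print(packet_allowed({'src': '192.168.0.2', 'dst': '192.168.0.1',
                      'proto': 'modbus', 'func': 16}))   # -> True
print(packet_allowed({'src': '192.168.0.66', 'dst': '192.168.0.1',
                      'proto': 'modbus', 'func': 16}))   # -> False
```

Because the whitelist falls directly out of the specification, no training phase is needed, in line with the specification-based argument above.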
9.4.7 Behaviour Learning and Analysis
Learning the behaviour of the virtual environment serves as valuable input for process optimisation and anomaly detection. It may be feasible to discover process bottlenecks or other factors that negatively affect the process or the quality of manufactured products. Moreover, typical Industry 4.0 use cases, such as predicting the manufacturing throughput or the health of systems [17], could also be implemented on top of this module. With DAF twinning, analysts can focus on learning algorithms and data understanding, while the setup and monitoring are handled by the framework.
9.5 Management Client
Since users need the ability to manage and control the virtual environment, a unified interface that provides access to all modules of the framework is required. However, protecting the framework from unauthorised users is essential. If adversaries gain access to DAF twinning, they could acquire a detailed representation of the physical environment and identify weak spots of its systems. This information can then be used to find attack paths to desired targets. Similarly, the security and safety rules may inform an attacker about the manipulations that are covered by the framework. In addition, the management client may also provide visualisation features to represent the digital twins graphically.
9.6 Proof of Concept
In this section, we demonstrate how a physical environment can be modelled, and we present a first implementation of the introduced framework. While we can only show excerpts here, the complete model and scenario files can be found on GitHub. A minimal setup serves as our physical environment, comprising a Siemens S7-1200 PLC, an HMI, a network switch to connect the two components, and a conveyor belt driven by a motor. While the motor is operated by the PLC, the HMI is used to monitor and control the PLC via the Modbus TCP/IP protocol. The HMI (i.e., the Modbus master) allows users to write holding registers on the PLC (i.e., the Modbus slave) to start/stop the conveyor and to set the speed, respectively. The engineering and domain knowledge (cf. Section 9.3.1) is the main input to create a virtual environment. To formulate the specification, we chose the data format AML [10]. AML aims to support the complete engineering chain of production systems by offering a standardised data exchange format for most of the artifacts (e.g. technical, topological, and control-related information) within the engineering process. This XML-based data format considers both a syntactic and a semantic level to describe the data objects and, moreover, is flexible with regard to extensions and changes, which makes it a fitting candidate to model the required knowledge for our framework. The core of the DAF twinning framework has been developed in Python and relies on existing components such as Mininet [16]. Mininet allows users to virtualise network environments and is extensible, so we were able to build the DAF twinning layers on top of it. Apart from Mininet, the framework also integrates the iec2c transcompiler that is included in the MatIEC project. This compiler translates code written in a programming language of the IEC 61131-3 standard into C code.
Further, we implemented a custom runtime to execute the code generated by iec2c in the context of a digital twin, enabling DAF twinning to emulate the internal behaviour of PLCs. As a minimum requirement, the prototype depends on the presence of an artifact that describes the characteristics of the digital twins. The prototype currently supports the generation of PLCs, HMIs, components without a network interface (e.g. a motor) that only
monitor the state of other digital twins, and default Mininet network nodes (e.g. switches). Additionally, the specification parser has been designed to extract knowledge from an AML artifact describing the setup that will be introduced in the following section. Consequently, we expect users to either adapt the provided AML parser to their needs or write their own parser implementation for their preferred data format.
9.6.1 Scenario Specification
In the following, we explain the relevant parts of the scenario specification. Figure 9.3 illustrates the exemplary physical process. For a better understanding, we describe the fundamental elements of the specification and show excerpts of the artifact alongside. When modelling the communication of hosts in AML, at least two views can be defined, viz. the physical and the logical networks [3]. For instance, in the physical network, Wire1 connects the physical endpoint of HMI1 with the physical endpoint of Switch1. The definition of the physical endpoint of HMI1, including a part of the network configuration of the HMI (i.e., its IP address), can be seen in Listing 1. Additionally, the excerpt shows a logical device named HMI that contains a logical endpoint and the HMI variable velocity.
Figure 9.3: The exemplary physical process.
Listing 1: Excerpt of the HMI specification (the XML content is not reproduced here; among others, it defines the HMI's IP address, 192.168.0.2, and the variable velocity of type int).
As shown in Listing 2, Wire1 is used to connect HMI1 to Switch1. In AML, a link can be modelled using the internal link element [3]. Listing 2 shows how the endpoints of the two devices can be referenced via the RefPartnerSideA and RefPartnerSideB attributes. Note that the attribute value of RefPartnerSideB points to the physical endpoint of HMI1 (cf. Listing 1).
Listing 2: Excerpt of a wire specification (XML content not reproduced).
In contrast to the physical network, the logical network models the exchange of data from an abstract point of view [3]. As Listing 3 shows, a logical connection exists between the HMI and the PLC, since the two hosts exchange data via the network. Moreover, the element named Logical Connection includes protocol data units (PDUs) that specify which PLC and HMI tags are exchanged. For instance, the PDU named Velocity Modbus TCP Data Packet establishes a link between the velocity tags of PLC1 and HMI1.
Listing 3: Excerpt of the logical network specification (XML content not reproduced).
Like HMI1, PLC1 is specified as a physical device that includes a logical device. The PLC code has been implemented as an SFC and, as shown in Listing 4, is referenced directly within the AML artifact. A part of the control logic is additionally depicted in Figure 9.3, represented as a ladder diagram (LD). As can be seen in the LD, the start/stop tags control the output Q0.0, which in turn drives the motor control block that triggers the pulse train output (PTO).
Listing 4: Referencing the PLC code (SFC) in AML.
file:///Source/ConveyorSystem_SFC.xml
At this point, we have presented the engineering knowledge about the devices and their configuration, the network setup (physical and logical), and the control logic. Comparing this with Figure 9.1, it is evident that the specification of the security and safety rules is still missing. For this prototypical implementation of DAF twinning, we show how a safety rule and a security rule can be defined.
Listing 5: Specification of the safety rule, defining the maximum velocity of the motor (the rule references file:///Source/ConveyorSystem_SFC.xml#Velocity and states a maximum value of 60).
In this example, we want to guarantee that the conveyor speed does not exceed a certain limit. This safety rule could be stated either by the vendor of the motor or by the person designing the process. To define this rule in the specification, we used an interface class to reference the velocity variable of the PLC program and then added a maximum-value constraint (cf. Listing 5). Since variables of the PLC code can be referenced in the specification, the person defining the rule is not required to access the code or understand the underlying logic of the PLC program.
Listing 6: Specification of the security rule, defining that the values of the two variables must be equal (the constraint's comparison operator is "equals").
9.6.2 Security Rule
Stuxnet demonstrated how vulnerable ICSs are to data manipulation. Manipulated sensors or controllers reporting wrong values can disrupt the entire process and may lead to dangerous states. Approaches to mitigate this risk include the use of independent sensors, PLC integrity checks, and consistency checks throughout the control network. In this example, we want to show how a consistency check can be modelled in AML, so that the rule can be evaluated during operation by DAF twinning. Recall that the HMI displays the motor status (started/stopped), according to the operator's action. An attacker who gains access to the network could launch a man-in-the-middle (MITM) attack to manipulate system states. For instance, a successful spoofing attack could make the HMI display a wrong status, or trick the PLC into starting the motor without the HMI actually sending the command. To detect inconsistent states of the HMI and the PLC, we introduce a variable link constraint that expects equality between the variables of the two devices (cf. Listing 6). Furthermore, we could also connect an independent physical sensor (outside the process loop) to the DAF twinning framework and take its measurements for analysis.
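The effect of the variable link constraint can be sketched as a runtime equality check over the twins' states. Variable names and the state layout are illustrative assumptions:

```python
# Sketch of the consistency check modelled by the variable link constraint:
# the HMI's view of the motor state must equal the PLC's actual state. A
# mismatch (e.g. caused by an MITM spoofing attack) raises an alert.

def check_equality(twin_states, links):
    """links: list of (var_a, var_b) pairs expected to be equal."""
    alerts = []
    for a, b in links:
        if twin_states[a] != twin_states[b]:
            alerts.append(f"inconsistency: {a}={twin_states[a]!r} "
                          f"but {b}={twin_states[b]!r}")
    return alerts

links = [('HMI1.motor_started', 'PLC1.motor_started')]
states = {'HMI1.motor_started': False, 'PLC1.motor_started': True}
print(check_equality(states, links))  # the PLC was started behind the HMI's back
```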
9.6.3 Virtual Environment Generation
This part describes step by step how the prototype generates the virtual environment on the basis of the knowledge inputs. The generation process is initiated by executing the custom command twinning via Mininet's command-line interface (CLI). This command expects the path to the AML artifact as an argument for the parser. After invoking the AML parser, the topology is generated and the extracted rules are passed to the security and safety analysis module. As already mentioned in Section 9.4, we used the API of Mininet to implement the underlying network layer of
the framework. Each implemented digital twin class that requires network capabilities inherits from Mininet's Host class, permitting a seamless integration into Mininet. Thus, the predefined Mininet commands (e.g. nodes, to list all hosts) can also be used for the generated digital twins. Owing to Mininet's virtualisation approach, each digital twin of the network topology is a process that runs in its own network namespace [16]. When instantiating a PLC node, the framework spawns another process in the twin's namespace to run the program that emulates the internal behaviour of the PLC. This Python program starts the build process of the PLC code and subsequently manages its execution. Depending on the specification, it is also able to emulate a Modbus slave.
Listing 7: Output of commands that can be used for a PLC digital twin.
mininet> twinning /home/user/ConveyorSystem.aml
mininet> nodes
available nodes are: HMI1 PLC1 Switch1 c0
mininet> links
Switch1-eth1<->HMI1-eth0 (OK OK)
Switch1-eth2<->PLC1-eth0 (OK OK)
mininet> show_tags PLC1
Name     |Class |Type
---------------------
ENABLE   |var   |bool
PTO      |var   |bool
Q10      |out   |bool
Q00      |out   |bool
START    |mem   |bool
STOP     |mem   |bool
VELOCITY |mem   |int
...
mininet> get_tag PLC1 START
False
mininet> set_tag PLC1 START True
mininet> get_tag PLC1 START
True
Moreover, the PLC emulator starts a listener to exchange data between the Mininet CLI and the process that emulates the PLC. Thus, users can view metadata about the PLC program (e.g. variable types) and get/set a tag's value. Listing 7 provides an overview of the supported commands. Compared to plain Mininet hosts, the HMI digital twins additionally support the show_tags, get_tag, and set_tag commands. If one of the commands to get/set a tag is executed, the framework spawns a Modbus master to communicate with the PLC. In the case of virtually cloning physical devices that require no network capability (e.g. motors), instances of the corresponding class (e.g. motor) are created, but not added to the virtualised network. To reproduce the
internal behaviour of these devices, the instances monitor specific tags of associated digital twins, such as a PLC. Finally, the security and safety analysis module is initialised with the parsed rules. This part monitors specific variables of the digital twins and issues alerts in the event of a detected rule violation. In this first version of DAF twinning, alerts are logged to a file.
9.6.4 Simulation and Results
At this point, we want to test the virtual environment generated from the specification in the previous section. First, we execute basic operations in the physical as well as in the virtual environment. Thereafter, we compare the output of the two runs to see how similar the results are. Second, we execute an MITM attack within the virtual environment to test the detection of violated security and safety rules, based on the examples given in Section 9.3.
9.6.4.1 Comparison of the Environments
For the comparison, we trigger all user commands via the HMI in the physical environment and via the digital twins in the virtual environment, respectively. The basic process steps are as follows: (1) initially, the motor is turned off, and HMI1 sends the start command; (2) next, we adjust the speed of the conveyor belt to the values 19 and 21, one after the other; (3) finally, we stop the conveyor belt after a short time. To compare behaviour and timing, we captured the network traffic in the physical and in the virtual environment. The capture from the physical environment contains 46 packets, whereas the capture from the virtual environment contains 50. We filtered out unrelated traffic, such as DHCP and DNS [31], from the physical-environment capture. The IP addresses are identical in the two scenarios, as specified in our AML artifact. In addition, the process steps correspond in the two environments, including the manually triggered operations and the responses issued by the Modbus slave.
Listing 8: Excerpt of the Modbus TCP/IP network traffic in the physical environment.
|Time      |    192.168.0.2        192.168.0.1
|12.767734 | → | TCP: 49796 – 502 [SYN] Seq=0 MSS=1460
|12.768956 | ← | TCP: 502 – 49796 [SYN, ACK] Seq=0 Ack=1 MSS=1460
|12.768987 | → | TCP: 49796 – 502 [ACK] Seq=1 Ack=1
|12.769628 | → | Modbus/TCP: Query: Func: 16 (Register 2: 19)
|12.787200 | ← | Modbus/TCP: Response: Func: 16
|12.787291 | → | TCP: 49796 – 502 [ACK] Seq=16 Ack=13
|12.787620 | → | TCP: 49796 – 502 [FIN, ACK] Seq=16 Ack=13
|12.791942 | ← | TCP: 502 – 49796 [ACK] Seq=13 Ack=17
|12.802970 | ← | TCP: 502 – 49796 [RST, ACK] Seq=13 Ack=17
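Line 5 of the listing shows a "Write Multiple Registers" query (function code 16) setting register 2 to the value 19; its 15-byte frame is what advances the client's TCP sequence number from 1 to 16. As a minimal, self-contained sketch (standard library only, function name hypothetical, not taken from the framework), the frame can be reconstructed byte for byte:

```python
import struct

def modbus_write_registers(transaction_id, unit_id, start_addr, values):
    """Build a raw Modbus/TCP 'Write Multiple Registers' (func 16) frame.

    Illustrative sketch of the query shown in Listing 8, where
    register 2 is set to the value 19.
    """
    # PDU: function code, start address, register count, byte count, data
    pdu = struct.pack(">BHHB", 16, start_addr, len(values), 2 * len(values))
    pdu += struct.pack(">" + "H" * len(values), *values)
    # MBAP header: transaction id, protocol id (0), length (unit id + PDU)
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_write_registers(transaction_id=1, unit_id=1,
                               start_addr=2, values=[19])
print(frame.hex())
print(len(frame))  # 15 bytes, matching the Seq=1 -> Seq=16 step above
```

The MBAP length field counts the unit identifier plus the PDU, which is why a single-register write yields a 15-byte frame.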
Examining the situation in more detail, we compare the TCP stream of the speed change in the physical network (cf. Listing 8) with that in the virtual environment (cf. Listing 9). Line 5 of Listing 8 and of Listing 9 shows the actual Modbus request to write multiple registers, setting register number 2 to the value 19.

Listing 9: Excerpt of the Modbus TCP/IP network traffic in the virtual environment.
|Time     |    192.168.0.2        192.168.0.1
|6.048040 | → | TCP: 52606 – 502 [SYN] Seq=0 MSS=1460
|6.050265 | ← | TCP: 502 – 52606 [SYN, ACK] Seq=0 Ack=1 MSS=1460
|6.050279 | → | TCP: 52606 – 502 [ACK] Seq=1 Ack=1
|6.053310 | → | Modbus/TCP: Query: Func: 16 (Register 2: 19)
|6.053515 | ← | TCP: 502 – 52606 [ACK] Seq=1 Ack=16
|6.058982 | ← | Modbus/TCP: Response: Func: 16
|6.058995 | → | TCP: 52606 – 502 [ACK] Seq=16 Ack=13
|6.061786 | → | TCP: 52606 – 502 [FIN, ACK] Seq=16 Ack=13
|6.068576 | ← | TCP: 502 – 52606 [FIN, ACK] Seq=13 Ack=17
|6.068586 | → | TCP: 52606 – 502 [ACK] Seq=17 Ack=14
The Modbus payload is identical in the two environments, but there is a difference in the timestamps, owing to the fact that we triggered the commands manually. We can also see that the response times within the virtual environment are lower than in our physical setup. The network stack of our virtual environment and that of the physical devices are not identical; consequently, there are slight differences in the TCP traffic. The digital twin of the PLC responds with an ACK packet (Listing 9, line 6) to a Modbus query before sending the Modbus response, whereas the PLC of the physical environment just sends the Modbus response (Listing 8, line 6). Similarly, the Siemens PLC sends an RST packet after FIN to close the connection (Listing 8, line 10). This short example demonstrated that the automatically generated digital twins behave according to the specification. The digital twins, including their network configuration, match the physical environment, viz. the physical environment matches the specification. Focusing on the control logic, we did not detect deviations in the control flow, as the PLC's digital twin runs the same code [14] as its physical counterpart. However, there are minor differences in the network traffic due to varying implementations of the network stack.

Listing 10: Excerpt of the captured network traffic between the PLC (P) and the HMI (H) from the attacker's (A) perspective.
|Time     | Flow  | Info
|0.000000 | A → P | ARP: 192.168.0.2 is at A
|0.000387 | A → H | ARP: 192.168.0.1 is at A
...
|29.44686 | H → A | TCP: 34976 – 502 [SYN]
|29.44689 | A → P | TCP Out-Of-Order 34976 – 502 [SYN]
...
|29.45431 | H → A | Modbus/TCP: Query: Func: 16 (Register 2: 20)
|29.45432 | A → P | TCP Retr.: Query: Func: 16 (Register 2: 100)
...
|29.47340 | P → A | Modbus/TCP: Response: Func: 16
|29.47341 | A → H | TCP Retr.: Response: Func: 16
Recall that, according to the specification, the speed must not exceed the threshold value of 60 (cf. Listing 5) and the HMI speed tag value must be equal to the corresponding tag of the PLC (cf. Listing 6). Consequently, if the attacker succeeds in manipulating the speed value, the security and safety analysis module of CPS Twinning must raise two alerts, since both constraints are violated. As can be seen in Listing 11, the framework tracks state changes and emits a warning if a violation of a specified rule occurs. The logged statements could signal to plant operators or security analysts that a system is in an abnormal condition and requires investigation. As already stated, the MITM attack has been carried out in the virtual environment, meaning that we tested the detection of rule violations while running CPS Twinning in simulation mode.

Listing 11: Logging output of CPS Twinning.
INFO:root: “Velocity” value changed 0 → 20 in device ‘HMI1’.
INFO:root: “VELOCITY” value changed 0 → 100 in device ‘PLC1’.
WARNING:root:ALERT! “PLC1” tag [Velocity=100] exceeds max value of 60.
WARNING:root:ALERT! “HMI1” tag [Velocity=20] does not equal ‘PLC1’ tag [Velocity=100].
However, the outcome in replication mode would be identical, provided that all states of the physical production system are replicated. Moreover, the absence of the attacker's MAC address in the specification would result in a mismatch between the physical and virtual environment, thereby revealing the attack. It should also be mentioned that the presented attack scenario can be mitigated by specifying network rules (e.g. an IP to MAC address mapping already exists in the specification). However, we chose the aforementioned rules intentionally to demonstrate how the state of digital twins can be analysed.
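The mismatch-based detection described above can be illustrated with a small sketch (hypothetical function name and MAC addresses, not the framework's implementation): the specification fixes an IP-to-MAC mapping, and any observed binding that deviates from it, such as the attacker's MAC after ARP spoofing, raises an alert.

```python
# IP-to-MAC mapping as defined in the specification (assumed example values)
SPEC_ARP = {
    "192.168.0.1": "aa:bb:cc:00:00:01",  # PLC1
    "192.168.0.2": "aa:bb:cc:00:00:02",  # HMI1
}

def check_arp(observed):
    """Return alerts for observed bindings that contradict the spec."""
    alerts = []
    for ip, mac in observed.items():
        expected = SPEC_ARP.get(ip)
        if expected is not None and mac != expected:
            alerts.append(f"ALERT! {ip} observed at {mac}, specified as {expected}")
    return alerts

# During the MITM attack, the attacker claims both IPs with its own MAC:
observed = {"192.168.0.1": "de:ad:be:ef:00:0a",
            "192.168.0.2": "de:ad:be:ef:00:0a"}
for alert in check_arp(observed):
    print(alert)  # two alerts, one per spoofed binding
```

With an unspoofed ARP table the check returns no alerts, so the comparison only fires when the observed environment drifts from the specified one.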
9.7 Related Work

Previous research can be divided into (i) the simulation of CPSs, (ii) process-aware intrusion detection systems, and (iii) the modelling of digital twins. Several studies have attempted to simulate ICSs in order to assess their level of security [2, 7, 9]. Similar to our approach, MiniCPS [2] is also based on Mininet to emulate the network layer for simulated CPSs. However, there are several fundamental distinctions between MiniCPS and CPS Twinning. First of all, MiniCPS
aims to provide researchers with a virtual environment to test the network configurations of CPSs, explore various attack scenarios, and evaluate countermeasures. While CPS Twinning also supports all of the aforementioned use cases, the focus of our work lies on generating digital twins that replicate the physical process as closely as possible. Furthermore, we propose a framework with security modules that go beyond network analysis [13], and we present a first prototype including security and safety rules in simulation mode. Second, CPS Twinning has been designed to generate the virtual environment from the specification. Thus, the framework permits a seamless integration into the systems engineering process and can further be maintained effortlessly by updating the specification. Third, in [2], the authors demonstrate how an MITM attack can be prevented by implementing an ARP spoofing detection algorithm in a custom software-defined networking (SDN) controller. In contrast, we show in Section 9.3.2 how such an attack can be detected by monitoring the states of digital twins. Finally, it is also worth highlighting that there are significant differences in the implementation of the two frameworks. While in MiniCPS the emulation of a PLC requires users to port the PLC code to Python, our presented prototype supports direct PLC code execution. Moreover, MiniCPS supports Modbus TCP/IP and EtherNet/IP, whereas CPS Twinning is currently limited to Modbus TCP/IP [31]. Dong et al. [11] investigate how SDN can be used to increase the resilience of smart grids and which security risks are involved in this approach. In their work, they describe several use cases where SDN can be applied as a countermeasure against attacks such as a distributed denial-of-service (DDoS) or packet delay attack. Further, the authors developed a smart grid testbed to validate their proposed approach.
This particular testbed is based on Mininet and PowerWorld (https://www.powerworld.com), a power system simulator. In contrast, the present chapter neither focuses specifically on the resilience of CPSs, nor does the presented prototype support the generation of digital twins that would provide a complete virtual replica of a smart grid. Nevertheless, as the work by Dong et al. [11] illustrates, Mininet is also suitable to be used as part of an SDN-based smart grid simulator; thus, enhancing the prototype to extend the concept of digital twins to smart grids may constitute a valuable extension. Works such as [25] and [8] propose process-aware intrusion detection techniques that draw on the knowledge of the engineers who developed the system. In [25], the authors present a network-based IDS framework for SCADA systems that can take process variable values (e.g. temperature) into account. In their work, they propose a language that allows specialists to express normal values of process variables. While defining these constraints with their proposed language appears to be trivial, this requirement can be considered a measure that does not naturally fit into the engineering workflow and may involve additional effort. In contrast, we propose the implicit or explicit definition of these constraints in engineering data formats, such as AML. While efforts have been made to automate the task of creating the rules that specify a system's correct behaviour [5], manual work clearly remains if the documentation is
missing or other sources provide insufficient information. Consequently, we argue that security knowledge about a CPS should already be defined in the early stages of the systems engineering process and then maintained consistently throughout the system's lifecycle. Other studies related to our work focus on modelling [1, 28] and implementing digital twins. It is worth noting that Schroeder et al. [27] also use AML to model digital twins. However, their work centres on the data exchange between the digital twin and other systems.
9.8 Conclusions

In this chapter, we have introduced CPS Twinning, a framework to generate and execute digital twins, and we have demonstrated the process in an industrial scenario. It covers a novel approach to automatically generate virtual environments entirely from specification. As a result, the approach is reusable and repeatable and guarantees a complete reflection of the specification. With this approach, organisations that already use specification languages in their engineering process can build a digital environment with hardly any additional effort. Tooling support and device templates could help in reaching a consistent specification in general. It further opens the possibility to reproduce and experiment with an identical CPS environment, simply by exchanging its specification. From a security perspective, an identical (in terms of the system's specification) simulated environment can be freely explored and tested by security experts without endangering the production environment. However, the security possibilities go beyond that: on top of the virtualisation engine, security modules support specialists in safeguarding the environment. In a proof of concept, we have demonstrated how the framework detects an MITM attack targeting the manipulation of a motor's speed. Current limitations of the prototype include limited support of data types in PLC code (data types other than Boolean and integer are not available) and of Modbus function codes. There are also several steps which have been triggered manually but could be incorporated into a complete automation pipeline, for example, the translation of vendor-specific function blocks. While the present chapter has introduced the idea of the framework and a prototype has demonstrated the complete cycle, from specification to detection, various features and modules have not been implemented yet.
As far as the prototype of the CPS Twinning framework is concerned, its implementation raised several issues that are worth pointing out. First, the basic idea of this work rests on the assumption that a specification of the CPS exists at a level of detail that permits the generation of the virtual environment. Ideally, artifacts are maintained throughout the lifecycle of the CPS and are already suitable to be used as an input for CPS Twinning, which in practice may not be the case. If artifacts are missing or incomplete, users are required to manually create the specification before using the framework. Since this process is error-prone and involves manual
work, integrating a specification mining approach into CPS Twinning would offer valuable support. Second, achieving an implementation of digital twins that provides an identical replication of their physical counterparts is challenging. For example, differences in the network stack implementation may manifest in the network traffic itself (cf. Section 9.3.1) and in the timing of operations, causing digital twins to be out of sync with their physical counterparts. Third, implementing a framework that is capable of closely mirroring CPSs is a labour-intensive task, even though several components that facilitate the development (e.g. Mininet) are already freely available. As future work, we want to focus on the replication mode of the framework, that is, mirroring the state of physical systems to their corresponding digital twins. To validate our approach, we plan to launch an MITM attack in the real environment in order to identify deviations in the behaviour of digital twins and thereby detect attacks as early as possible. Furthermore, we plan to address the issue of non-existent or incomplete artifacts by mining specifications. Passive sniffing, device fingerprinting, or extracting knowledge from system logs and documentation may be possible approaches that would help users get started with CPS Twinning if no such specification exists initially. Further topics include, for example, a modelling language for security and safety rules, enriching device templates with security rules, behaviour learning and analysis in combination with anomaly detection, a client to visualise results and manage the framework, as well as support for additional industrial protocols.
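As a rough illustration of the passive-sniffing idea, a minimal specification can be mined from observed traffic. This is a hypothetical sketch with assumed packet tuples rather than a real capture format; it merely shows how a device inventory (IP, MAC, server ports) could be bootstrapped when no specification exists.

```python
# Hypothetical specification mining via passive sniffing: derive a minimal
# device inventory from observed packets as a starting point for a CPS spec.
def mine_specification(packets):
    """packets: iterable of (src_ip, src_mac, dst_ip, dst_port) tuples."""
    spec = {}
    for src_ip, src_mac, dst_ip, dst_port in packets:
        spec.setdefault(src_ip, {"mac": src_mac, "server_ports": set()})
        # the destination's MAC stays unknown until it is seen as a sender
        spec.setdefault(dst_ip, {"mac": None, "server_ports": set()})
        spec[dst_ip]["server_ports"].add(dst_port)
    return spec

observed = [
    ("192.168.0.2", "aa:bb:cc:00:00:02", "192.168.0.1", 502),  # HMI -> PLC
    ("192.168.0.2", "aa:bb:cc:00:00:02", "192.168.0.1", 502),
]
spec = mine_specification(observed)
print(spec["192.168.0.1"]["server_ports"])  # {502}: likely a Modbus TCP server
```

In practice, the mined inventory would be reviewed by an engineer and enriched (e.g. with device types and tags) before serving as input to the framework.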
References
[1] C. Wang, L. Fang, Y. Dai, A Simulation Environment for SCADA Security Analysis and Assessment, in: 2010 International Conference on Measuring Technology and Mechatronics Automation (Vol. 1), 2010, pp. 342–347. https://doi.org/10.1109/ICMTMA.2010.603
[2] M. Caselli, E. Zambon, F. Kargl, Sequence-aware Intrusion Detection in Industrial Control Systems, in: Proceedings of the 1st ACM Workshop on Cyber-Physical System Security (CPSS ’15), ACM: New York, 2015, pp. 13–24. https://doi.org/10.1145/2732198.2732200
[3] J. J. Chromik, A. Remke, B. R. Haverkort, What’s under the Hood? Improving SCADA Security with Process Awareness, IEEE, 2016. https://doi.org/10.1109/CPSRSG.2016.7684100
[4] R. Drath, A. Lüder, J. Peschke, L. Hundt, AutomationML – The Glue for Seamless Automation Engineering, in: 2008 IEEE International Conference on Emerging Technologies and Factory Automation, 2008, pp. 616–623. https://doi.org/10.1109/ETFA.2008.4638461
[5] T. Morris, W. Gao, Industrial Control System Traffic Data Sets for Intrusion Detection Research, Springer: Berlin, 2014, pp. 65–78. https://doi.org/10.1007/978-3-662-45355-1_5
[6] D. Antonioli, N. O. Tippenhauer, MiniCPS: A Toolkit for Security Research on CPS Networks, in: Proceedings of the First ACM Workshop on Cyber-Physical Systems-Security and PrivaCy (CPS-SPC ’15), ACM: New York, 2015, pp. 91–100. https://doi.org/10.1145/2808705.2808715
[7] A. Lüder, N. Schmidt, K. Hell, H. Röpke, J. Zawisza, Identification of Artifacts in Life Cycle Phases of CPPS, Springer: Cham, 2017, pp. 139–167. https://doi.org/10.1007/978-3-319-56345-9_6
[8] AutomationML consortium, Whitepaper: Communication, Technical Report V_1.0.0, 2014.
[9] R. Chabukswar, B. Sinopoli, G. Karsai, A. Giani, H. Neema, A. Davis, Simulation of Network Attacks on SCADA Systems, in: First Workshop on Secure Control Systems, Cyber-Physical Systems Week, 2010. http://www.truststc.org/pubs/693.html
[10] R. Baheti, H. Gill, Cyber-physical Systems, The Impact of Control Technology 12, 2011, 161–166.
[11] X. Dong, H. Lin, R. Tan, R. K. Iyer, Z. Kalbarczyk, Software-Defined Networking for Smart Grid Resilience: Opportunities and Challenges, in: Proceedings of the 1st ACM Workshop on Cyber-Physical System Security (CPSS ’15), ACM: New York, 2015, pp. 61–68. https://doi.org/10.1145/2732198.2732203
[12] M. Caselli, E. Zambon, J. Amann, R. Sommer, F. Kargl, Specification Mining for Intrusion Detection in Networked Control Systems, in: Proceedings of the 25th USENIX Security Symposium, Austin, TX, USENIX Association, 2016, pp. 791–806. ISBN 978-1-931971-32-4
[13] B. Genge, C. Siaterlis, G. Karopoulos, Data Fusion-based Anomaly Detection in Networked Critical Infrastructures, in: 2013 43rd Annual IEEE/IFIP Conference on Dependable Systems and Networks Workshop (DSN-W), 2013, pp. 1–8. https://doi.org/10.1109/DSNW.2013.6615505
[14] A. Lüder, N. Schmidt, K. Hell, H. Röpke, J. Zawisza, Fundamentals of Artifact Reuse in CPPS, Springer: Cham, 2017, pp. 113–138. https://doi.org/10.1007/978-3-319-56345-9_5
[15] R. Mitchell, I.-R. Chen, A Survey of Intrusion Detection Techniques for Cyber-physical Systems, ACM Computing Surveys 46(4), Article 55, 2014, 29 pages. https://doi.org/10.1145/2542049
[16] M. Iturbe, I. Garitano, U. Zurutuza, R. Uribeetxeberria, Towards Large-Scale, Heterogeneous Anomaly Detection Systems in Industrial Networks: A Survey of Current Trends, Security and Communication Networks, Hindawi, 2017.
[17] B. Lantz, B. Heller, N. McKeown, A Network in a Laptop: Rapid Prototyping for Software-defined Networks, in: Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks (Hotnets-IX), ACM: New York, Article 19, 2010, 6 pages. https://doi.org/10.1145/1868447.1868466
[18] M. Luchs, C. Doerr, Last Line of Defense: A Novel IDS Approach against Advanced Threats in Industrial Control Systems, Springer: Cham, 2017, pp. 141–160. https://doi.org/10.1007/978-3-319-60876-1_7
[19] D. I. Urbina, J. Giraldo, A. A. Cardenas, J. Valente, M. Faisal, N. O. Tippenhauer, J. Ruths, R. Candell, H. Sandberg, Survey and New Directions for Physics-based Attack Detection in Control Systems, Technical Report, NIST, 2016. https://doi.org/10.6028/nist.gcr.16-010
[20] J. Slay, M. Miller, Lessons Learned from the Maroochy Water Breach, in: E. Goetz, S. Shenoi (Eds.), Critical Infrastructure Protection, Springer: Boston, 2008, pp. 73–82.
[21] J. Lee, E. Lapira, B. Bagheri, H.-A. Kao, Recent Advances and Trends in Predictive Manufacturing Systems in Big Data Environment, Manufacturing Letters 1(1), 2013, 38–41. https://doi.org/10.1016/j.mfglet.2013.09.005
[22] M. Shafto, M. Conroy, R. Doyle, E. Glaessgen, C. Kemp, J. LeMoigne, L. Wang, DRAFT Modeling, Simulation, Information Technology & Processing Roadmap, Technology Area 11, National Aeronautics and Space Administration, 2010.
[23] J. Vachálek, L. Bartalský, O. Rovný, D. Šišmišová, M. Morháč, M. Lokšík, The Digital Twin of an Industrial Production Line within the Industry 4.0 Concept, in: 2017 21st International Conference on Process Control (PC), 2017, pp. 258–262. https://doi.org/10.1109/PC.2017.7976223
[24] S. McLaughlin, C. Konstantinou, X. Wang, L. Davi, A.-R. Sadeghi, M. Maniatakos, R. Karri, The Cybersecurity Landscape in Industrial Control Systems, Proceedings of the IEEE 104(5), May 2016, 1039–1057. https://doi.org/10.1109/JPROC.2015.2512235
[25] B. Miller, D. Rowe, A Survey of SCADA and Critical Infrastructure Incidents, in: Proceedings of the 1st Annual Conference on Research in Information Technology (RIIT ’12), ACM: New York, 2012, pp. 51–56. https://doi.org/10.1145/2380790.2380805
[26] K. Stouffer, V. Pillitteri, S. Lightman, M. Abrams, A. Hahn, Guide to Industrial Control Systems (ICS) Security, NIST Special Publication 800-82r2, 2015. https://doi.org/10.6028/nist.sp.800-82r2
[27] G. N. Schroeder, C. Steinmetz, C. E. Pereira, D. B. Espindola, Digital Twin Data Modeling with AutomationML and a Communication Methodology for Data Exchange, IFAC-PapersOnLine 49(30), 2016, 12–17. https://doi.org/10.1016/j.ifacol.2016.11.115
[28] P. Uppuluri, R. Sekar, Experiences with Specification-Based Intrusion Detection, Springer: Berlin, Heidelberg, 2001, pp. 172–189. https://doi.org/10.1007/3-540-45474-8_11
[29] T. H.-J. Uhlemann, C. Lehmann, R. Steinhilper, The Digital Twin: Realizing the Cyber-Physical Production System for Industry 4.0, Procedia CIRP 61, 2017, 335–340. https://doi.org/10.1016/j.procir.2016.11.152
[30] J. E. Rubio, C. Alcaraz, R. Roman, J. Lopez, Analysis of Intrusion Detection Systems in Industrial Ecosystems, in: 14th International Conference on Security and Cryptography (SECRYPT 2017), 2017.
[31] K. M. Alam, A. El Saddik, C2PS: A Digital Twin Architecture Reference Model for the Cloud-Based Cyber-Physical Systems, IEEE Access 5, 2017, 2050–2062. https://doi.org/10.1109/ACCESS.2017.2657006
Chapter 10 Implementation of Digital Twin

Abstract: When producing customised goods in small batches, cyber-physical production systems provide flexibility and adaptability. Material flows in cyber-physical production systems can become very complex due to changing pathways and a wide variety of work pieces, which can cause physically induced disturbances that may result in accidents, lower output, and excessive costs. Applying a physics engine to this problem helps solve it: the engine simulates the real physical interaction that occurs during operation between the work pieces and the material handling equipment. Connecting a real material handling system to such a digital model in order to derive simulation-based decisions supports the idea of digital twins. There have not been many known examples of digital twins being used in manufacturing beyond the machine tool industry. Since a real material handling system and its digital counterpart rely on physics simulation, this study examines the modelling and subsequent development of an integrated system. With respect to the three digital twin roles of prediction, monitoring, and diagnosis, a real use case illustrates the arrangement's many benefits for a production system.

Keywords: digital twin, virtual twin, simulation, process industry, literature review, barrier enabler
10.1 Introduction

The foundation of Industry 4.0, cyber-physical production systems (CPPS), enables flexible and adaptable manufacturing of custom products through the integration and networking of cyber-physical systems (CPS) [1, 2]. CPS, which can take the form of machines or material handling systems, are connected within and across all production levels and incorporate embedded systems that record and adjust real processes using sensors and actuators. Based on this structure, CPS can interact with both the physical and the digital worlds by monitoring and managing physical processes [3]. Individual production sequences can be realised in CPPS by dynamically assigning manufacturing tasks to production resources. To self-organise the manufacturing sequences of different products, redundant machines frequently negotiate with products and with one another [3]. Consequently, every product can travel a unique path through the CPPS. Based on a discrete-event simulation of a CPPS that makes customised items, Figure 10.1 depicts this phenomenon. The material flows of four illustrative items demonstrate how different material streams can arise across workstations and routes. Due to various
Figure 10.1: Shifting routes in a cyber-physical production system.
characteristics of the factory layout, the physical behaviour of the work pieces during material flow processes may change along these diverse paths (e.g. ramps or curves). The variety of products further adds to this complexity in the material flow, which denotes the movement of discrete items on conveyors or transport routes in continuous and irregular time intervals: products may differ substantially in their physical properties, such as mass, inertia tensors, or surface roughness, due to customisation [5, 6]. This is made possible by the availability of information in CPPS on the manufactured and transported goods, which is used as simulation input. Additionally, due to the CPS nature of these systems, motion data is generated by sensing elements, and movements can be independently controlled via drives and communication interfaces. Third, the real-time capabilities of physics engines allow for quick runtimes and consequently on-demand decision assistance. The probability of the described disturbances in material flows in CPPS may therefore be decreased by coupling such a simulation model to a real material handling system that executes material flows. To forecast, track, and examine actual material flows, a material handling system's digital twin based on a physics simulation is essential. The primary objective of this chapter, which is organised as follows, is to model and implement this digital twin approach using a real material handling system. The state of the art with respect to physics simulation, digital twins, and material flows in contemporary manufacturing is explained in Section 10.2. Digital twins are seldom modelled in terms of structure and interaction before being used in research nowadays.
To simulate the entire system before it is implemented, a systems engineering method is used to handle the complexity that arises from integrating a simulation model, a digital twin, and a real system: Phase 1 defines the requirements for the implementation of the digital twin. This is carried out based on a description of all expected functions of the digital twin (Section 10.1). Phase 2 comprises modelling the architecture of the entire system that will be built to fulfil the specified requirements (Section 10.2). Phase 3 models the relevant system interactions (Section 10.3). Phase 4 is the implementation phase. Based on the preceding modelling phases, this phase connects a real
material handling system to a digital twin that is based on a physics simulation (Section 10.5). In Phase 5, the implemented concept serves as the basis for a real use case that highlights the advantages of the implementation. The resulting improvements over current techniques are also described (Section 10.6). A summary and outlook conclude the chapter in Section 10.7. A material flow process in this context denotes a movement between two locations, such as machines. Physically induced disturbances, which are a frequent concern during CPPS operation, are compounded by the complexity described, especially when the physical characteristics of the transported objects change. Work pieces can tip over or tumble off a conveyor as an illustration of this type of disturbance [8]. Jams and downtimes can be caused by work pieces that have dropped or otherwise altered their orientation on a conveyor [9]. To avoid the tipping of transported products, material handling systems in industry must be halted gently. As a result, additional load securing is no longer required [10]. Commonly, conveyor speeds are set to very low levels, such as 0.2 m/s [11], which slows down transport while reducing the dynamic influence on the work pieces. The disturbances mentioned above can also occur in non-stationary material handling systems. AGVs, or automated guided vehicles, are widely used in production because they allow variable routing. For AGVs, maintaining load stability is essential, since loads could tip in sharp curves due to centrifugal forces or in emergency stops due to the inertia of the load [12]. AGVs must be able to stop within a safe distance, which typically sets the maximum AGV speed, especially in areas that are shared with workers [13].
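The stability limits mentioned above can be estimated with elementary rigid-body mechanics. As a rough, hypothetical sketch (illustrative function names and values, not taken from this chapter's implementation): a load on an AGV starts to slide in a curve when the centripetal acceleration v²/r exceeds the friction limit μg, giving a maximum cornering speed of v_max = sqrt(μ·g·r), and it slides during a stop when the deceleration exceeds μg.

```python
import math

G = 9.81  # gravitational acceleration in m/s^2

def max_cornering_speed(mu, radius):
    """Maximum AGV speed in a curve before an unsecured load slides.

    Sliding occurs when the required centripetal acceleration v^2/r
    exceeds the friction limit mu * g.
    """
    return math.sqrt(mu * G * radius)

def max_braking_deceleration(mu):
    """Deceleration above which an unsecured load slides during a stop."""
    return mu * G

# Example: pallet with friction coefficient 0.3 in a 2 m radius curve
v = max_cornering_speed(0.3, 2.0)
print(f"max cornering speed: {v:.2f} m/s")  # about 2.43 m/s
print(f"max braking deceleration: {max_braking_deceleration(0.3):.2f} m/s^2")
```

Estimates of this kind motivate the conservative industrial speed limits cited above; a physics simulation refines them per load instead of fixing one worst-case value.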
To generally reduce the chance of loads sliding off due to inertia or centrifugal forces, AGV speed is often limited to values around 1 m/s [14]. AGVs are not the only significant contributor to fatal injuries in industrial settings; loads falling off forklifts are another. These accidents typically happen because operators misjudge the varying physical characteristics of the transported load in combination with the dynamic behaviour of the vehicle [15]. These examples show how the physical properties of loads can result in disturbances when they interact with various material handling systems, typically due to horizontal accelerations. This may result in casualties, damaged goods or manufacturing infrastructure, longer material handling system downtime, longer delivery times, and greater costs. Preventing the aforementioned disturbances can improve the safety and economic efficiency of a manufacturing system, given that material handling accounts for roughly half of all industrial injuries [16] and represents 15–70% of a product's manufacturing cost [17]. Certain operating parameters govern the physical behaviour of operations in material handling systems that can result in the described problems (e.g. the acceleration or the torque of the drive). However, these factors frequently affect not only the physical behaviour but also the material handling system's performance. If a material handling system, for example, is operated
at higher speeds and accelerations, material flows become faster, which decreases throughput times [18]. As a trade-off, this approach also places greater inertial stresses on the carried work pieces, which raises the likelihood of disturbances of the aforementioned kind. Fixtures for load securing could help accomplish the goal of fewer disturbances. But when it comes to bespoke items, these devices need to be very adaptable, which raises their price and complexity. Additionally, load securing extends the time required for material handling and, consequently, reduces output, particularly when numerous transport operations are necessary. Another common choice is to pick modest speeds and accelerations to completely rule out the possibility of disturbances by limiting acceleration exposure. In this way, most transports in industrial material handling systems take longer than necessary, which results in long travel times. An effective technique to address these drawbacks is to have the operational parameters of material handling systems chosen automatically based on the nature and characteristics of the transported load [19]. Special simulation approaches are required for this. Finding appropriate parameters is not possible in discrete-event simulations, which are frequently used to model material flows on an organisational level in order to optimise objectives such as throughput [20]. Instead, using physics simulation to model the physical interaction between work pieces and material handling systems throughout each specific material handling operation could be a way to maximise the performance of the system while avoiding disturbances.
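As an illustration of such load-dependent parameter selection (a hypothetical sketch with assumed names and values, not the chapter's implementation): a box-shaped work piece standing on a conveyor tips during braking roughly when the deceleration a satisfies a·h > g·(w/2) about its leading edge, where h is the height of the centre of mass and w the base width in the direction of travel. Choosing the conveyor deceleration per work piece then amounts to:

```python
G = 9.81  # gravitational acceleration in m/s^2

def tipping_limit(base_width, com_height):
    """Deceleration at which a rigid box on a conveyor starts to tip.

    Quasi-static estimate: the box tips about its leading edge when
    a * com_height > G * base_width / 2. Illustrative model only.
    """
    return G * base_width / (2.0 * com_height)

def choose_deceleration(base_width, com_height, machine_limit,
                        safety_factor=0.8):
    """Pick the fastest conveyor deceleration that stays safely
    below the tipping limit of the transported work piece."""
    return min(machine_limit, safety_factor * tipping_limit(base_width,
                                                            com_height))

# A squat box (0.4 m base, centre of mass at 0.1 m) tolerates hard braking;
# a tall box (0.1 m base, centre of mass at 0.3 m) forces a gentle stop.
print(choose_deceleration(0.4, 0.1, machine_limit=5.0))
print(choose_deceleration(0.1, 0.3, machine_limit=5.0))
```

A physics engine generalises this closed-form estimate to arbitrary geometries, friction conditions, and combined motions, which is exactly the role it plays in the digital twin described here.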
10.2 Simulation of Physics
The interactions described above as capable of causing disturbances depend on the dynamics and kinematics of rigid bodies, such as the workpieces and the corresponding parts of the material handling system. These phenomena can be simulated using physics engines [21], hereafter referred to as physics simulation. "Physics engines" are computer programs that allow users to quantitatively predict how a set of rigid bodies will behave over time under external influences. Physics simulations, also known as physically based simulations, have been used more and more for engineering purposes. They have their roots in computer graphics and were originally developed with the goal of efficiently computing physical events for computer-generated animations (e.g. video games) [9]. The advantage of this approach is that it allows diverse scenarios to be simulated without additional modelling effort once a set of established constraints and physical properties has been supplied [7]. For example, further material flow processes can be automatically
simulated after the initial modelling of a material handling system by simply adding relevant data about the transported objects and adjusting the appropriate parameters. The principal steps of a physics simulation are as follows: the considered objects (rigid bodies) are loaded into the simulation scene during initialisation. Typically, each of these objects is represented by a unique polygon mesh comprising its vertices, edges, and faces. These meshes can be generated from workpiece data in computer-aided design (CAD) software. Each object is assigned a set of physical properties, such as mass, inertia, and friction coefficients, before being simulated. Once these properties are defined, discrete time steps are used to simulate how the objects interact [22]. Each of these time steps is represented by a typical simulation loop, shown in Figure 10.2a: the simulation first searches for any object collisions (collision detection). Each object's declared mesh serves as its collision geometry for this purpose. The relevant contact regions are registered once collisions are found. Physics equations governing the motion of the individual objects are formulated and solved with respect to these contact regions to determine contact forces. Contact handling refers to the process of determining the appropriate friction response and keeping objects from penetrating each other [25, 26]. Collision resolution is the next stage, which deals with different collision types: contacts that have not occurred in earlier time steps imply collisions with impulsive forces. These forces produce a rapid change in object velocities and are often handled independently of persistent interactions (e.g. an object resting on
Figure 10.2: (a) A time-step simulation loop in physics simulations; (b) physics simulation showing a disturbance on an AGV.
another). The new positions and velocities are then determined by time integration once all contact forces have been computed, and the loop described above is repeated [23] at a frequency of at least 60 Hz. The physical behaviour of material flow processes, which involves the dynamics and kinematics of rigid bodies, can thus be reproduced by the working principle of physics simulation. Hence, as long as the disturbances are caused by specific physical interactions, physics simulation can replicate and predict many distinct disturbances. Workpieces or material handling systems that collide with factory infrastructure are examples of this kind of disturbance. Other examples include workpieces that slip or tip over because of rapid accelerations or abrupt changes of direction. Previous studies used physics simulation of material flows for the virtual commissioning of planned manufacturing systems prior to actual operation. Lead times [27] and downtimes of material handling systems [8] may be shortened if physics simulation is used to simulate material flows during the operation of a production system [39]. However, none of the current approaches adequately addresses this application case. Consequently, the authors' earlier work attempted to fill this knowledge gap with the aim of applying physics simulation to support material handling system operation. The relevant physical phenomena were identified in a preliminary step [7]. The Python module pyBullet was subsequently chosen as a suitable environment for implementing the envisioned simulation model [28]. pyBullet offers a comprehensive and flexible framework for simulating the mechanical interaction of rigid bodies.
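The loop structure described above — contact handling followed by time integration at a fixed rate — can be illustrated in reduced form. The sketch below models a single workpiece resting on a horizontally accelerating carrier with Coulomb friction, using a semi-implicit Euler update at 60 Hz. It is a deliberately minimal one-dimensional illustration of the loop, not the authors' pyBullet implementation; the parameter values are hypothetical.

```python
# One-dimensional sketch of the simulation loop: a workpiece on a carrier
# that accelerates horizontally. Contact handling decides whether the piece
# sticks or slips; time integration then advances both bodies.

G, DT = 9.81, 1.0 / 60.0  # gravity (m/s^2) and a 60 Hz time step

def simulate(carrier_acc: float, mu: float, steps: int) -> float:
    """Return how far the workpiece slips relative to the carrier (m)."""
    v_carrier = v_piece = x_carrier = x_piece = 0.0
    for _ in range(steps):
        # "Contact handling": friction can transmit at most mu*g of
        # acceleration to the workpiece before it starts to slip.
        rel = v_carrier - v_piece
        if abs(carrier_acc) <= mu * G and abs(rel) < 1e-12:
            a_piece = carrier_acc                      # sticking contact
        else:
            direction = rel if abs(rel) > 1e-12 else carrier_acc
            a_piece = mu * G * (1.0 if direction > 0 else -1.0)  # sliding
        # "Time integration": semi-implicit Euler update of both bodies.
        v_carrier += carrier_acc * DT
        v_piece += a_piece * DT
        x_carrier += v_carrier * DT
        x_piece += v_piece * DT
    return x_carrier - x_piece

slip_gentle = simulate(carrier_acc=1.0, mu=0.3, steps=60)  # below mu*g: no slip
slip_harsh = simulate(carrier_acc=5.0, mu=0.3, steps=60)   # above mu*g: slips
```

Below μ·g the workpiece follows the carrier exactly; above it, a relative displacement accumulates — the slipping disturbance that a full 3D engine such as pyBullet detects with general collision geometry instead of this hard-coded contact.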
A basic model was used to conceptually identify all of the inputs required for the simulation model. That work's key contribution was to analyse the practical implementation workflow required for a physics simulation model with a bare minimum of capabilities. The degree of accuracy that can be achieved within the simulation
model was analytically examined in subsequent work [4], along with another illustrative use case of a simpler AGV (see Figure 10.2b). The findings of this research showed that pyBullet can accurately forecast the essential classes of events. Consequently, the authors' previous work produced a fundamental physics model that makes it possible to reproduce physical events during material handling reliably. However, several further development steps are needed before it becomes applicable in practice. The models used so far were built on hypothetical scenarios. The validity of the approach will be substantially increased by modelling a real-world counterpart, since actual material handling system characteristics can then be considered. Finding connection mechanisms between the physics simulation and real systems, ideally based on industrial communication standards, is a further requirement for determining optimal control parameters for these material handling systems. Furthermore, earlier implementations still operate on a fairly generic level, and the information flows to and from the physics simulation are not specified. Modules that can pre-process simulation inputs and derive decisions and usable control values from simulation results are needed to provide simulation-based decision support. In summary, the opportunities that the simulation of material flows before their execution offers for a CPPS are as follows: material flow processes can be completed more quickly and without load securing. Loads can be transported at a faster yet safer speed by predicting appropriate values such as velocity or acceleration. This can lead to better utilisation of the material handling system and shorter lead times [58].
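The idea of predicting appropriate transport parameters can be sketched as a simple search: try candidate accelerations from fastest to slowest and keep the first one the simulation predicts to be disturbance-free. In the sketch below the "simulation run" is a stand-in (the μ·g sliding criterion) purely to keep the example self-contained; in the system described here each check would be a full physics simulation run, and all values are hypothetical.

```python
# Illustrative parameter selection: largest candidate acceleration whose
# (stand-in) simulation predicts a disturbance-free transport.

G = 9.81  # gravitational acceleration, m/s^2

def disturbance_free(acceleration: float, mu: float) -> bool:
    """Stand-in for a simulation run: no slip while |a| <= mu*g."""
    return abs(acceleration) <= mu * G

def choose_acceleration(candidates, mu: float) -> float:
    """Return the largest candidate acceleration predicted to be safe."""
    for a in sorted(candidates, reverse=True):
        if disturbance_free(a, mu):
            return a
    raise ValueError("no disturbance-free parameter found")

best = choose_acceleration([0.5, 1.0, 2.0, 4.0, 8.0], mu=0.3)  # -> 2.0
```

The point of the design is that the selected value is load-specific: a load with a higher friction coefficient would automatically be transported faster, instead of every load moving at the most conservative setting.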
Moreover, by lowering the probability of disturbances, downtimes and potentially costly damage to factory infrastructure or to the products themselves can be avoided. In addition, risks from falling workpieces can be avoided by adjusting the parameters according to the results of the simulation. Safe operating speeds for conveyors and AGVs, or simulation-based training for forklift operators, for example, can achieve this. To take advantage of these opportunities, a basic simulation model must be extended so that it can communicate with a corresponding real-world material handling system and exchange data and control inputs to provide simulation-based decision support. The concept of "digital twins", which is explained in the following section, is based on this approach of replicating a real system with a digital model in order to support its operation.
10.3 Digital Twin in Production
The following provides a foundation for connecting simulation models to real systems and processes: a digital model of all relevant systems and processes can be created by combining the capabilities of cloud-based simulation, the
vast amount of data already available (such as sensor data), and the interconnectedness of CPPS components. "Digital twin" is the common term used to describe this. The expression "digital twin", which has its roots in aerospace engineering, was first used to refer to an integrated multi-physics, multiscale, probabilistic simulation of a system that uses the best available physical models, sensor data, and historical data to mirror the behaviour of a real system [29]. An active unique item (a real thing, object, machine, service, or intangible asset) can be represented digitally as a "digital twin" in manufacturing by using models, information, and data [30]. This digital twin includes the item's selected characteristics, properties, conditions, and behaviours. Since a digital twin can augment a CPPS's intelligence through analytical evaluation, predictive diagnosis, and performance optimisation [33], its adoption can be regarded as an important prerequisite for the operation of a CPPS [31, 32]. As stated previously, a CPPS is formed by connected CPS in a manufacturing setting. Consequently, linking the digital twins of a CPPS's constituent CPS is necessary to create a comprehensive digital twin of the CPPS [34]. The temporal relation between a digital twin simulation and the occurrence of real events illustrates the full potential of a digital twin. Three essential tasks of digital twins were listed in the original definition of the term in this context [29]:
– Prediction: investigating the system's behaviour before runtime
– Monitoring: predicting the status of the actual system for monitoring and regulation purposes
– Diagnosis: examining unexpected problems that occur after the real system has become operational.
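The three tasks above can be arranged around a single simulation model. The skeleton below is a hypothetical illustration (not the authors' code): `simulate` is a placeholder callable that maps a parameter to a list of predicted states, and the tolerance value is an arbitrary example.

```python
# Toy skeleton grouping prediction, monitoring, and diagnosis around one
# simulation model. Names and structure are illustrative assumptions.

class DigitalTwin:
    def __init__(self, simulate, tolerance: float):
        self.simulate = simulate    # callable: parameter -> predicted states
        self.tolerance = tolerance  # maximum accepted deviation
        self.history = []           # states recorded from the real system

    def predict(self, parameter):
        """Prediction: simulate planned behaviour before runtime."""
        return self.simulate(parameter)

    def monitor(self, parameter, measured_state) -> bool:
        """Monitoring: record the live state and compare it to the simulated one."""
        self.history.append(measured_state)
        expected = self.simulate(parameter)[len(self.history) - 1]
        return abs(expected - measured_state) <= self.tolerance

    def diagnose(self, parameter):
        """Diagnosis: replay recorded states against the simulation after runtime."""
        return [abs(e, ) if False else abs(e - m)
                for e, m in zip(self.simulate(parameter), self.history)]

# Example: positions x = 0.5 * a * t^2 sampled at t = 0, 0.1, 0.2, 0.3 s.
twin = DigitalTwin(lambda a: [0.5 * a * (k * 0.1) ** 2 for k in range(4)],
                   tolerance=0.05)
ok = [twin.monitor(2.0, m) for m in (0.0, 0.02, 0.20)]  # third reading deviates
deviations = twin.diagnose(2.0)
```

The essential point, elaborated below, is that all three tasks reuse the same model; only the moment of use (before, during, after runtime) differs.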
The objective of a digital twin in a production system is to simplify decision-making procedures and to enable decision automation by simulating specific components and system functions. The models employed are capable of conducting simulation experiments using real-time data while considering the current state of the production system [34]. Operational data and the corresponding digital models are connected via CPPS elements through sensors, actuators, and communication systems. In more depth, a digital twin uses models together with various data sources to interpret and forecast the behaviour of a real system [35]. Consequently, to fully exploit the outlined potential of digital twins and apply CPPS in an Industry 4.0 environment, it is necessary to have access to suitable digital models of manufacturing system components [36]. The creation of appropriate connections between models, related digital modules, and the real system is another key requirement.
A digital twin application in manufacturing exhibits a set of specifications and characteristics. The automated bidirectional data flow between a digital twin and its real counterpart is regarded as the defining characteristic of a digital twin. Consequently, decisions made using simulation within the digital twin may immediately result in control changes to the real system [37]. An established feature of a digital twin is the following: a digital twin can take control of the system, in contrast to the digital shadow, which only mirrors the behaviour of the real system [37]. Standardised architectures [31] such as OPC UA are required for the data transfer between the corresponding physical system and its digital twin. This unified communication standard for information modelling and real-time data transmission in manufacturing is regarded as an enabler of Industry 4.0 [38, 40, 41]. Digital twins in production are typically viewed only as a component of the digital twins of the relevant machines throughout their life cycle. A digital twin in manufacturing must take into account more than that, though. Since the processing of products is what distinguishes production, it is important to consider not just the components of manufacturing systems, such as machines or material handling systems, but also how these components interact with the items being manufactured. Therefore, as a constitutive feature of a digital twin in manufacturing, we also include the consideration of interactions between the product and the manufacturing system. An overview of the state of digital twin research in manufacturing engineering is provided by a recent survey [37]. The majority of contributions, according to the survey's authors, present theoretical perspectives on the digital twin. A paper by C. Li [36] that describes a digital twin reference model for design and manufacturing is one example.
Despite the importance and potential mentioned, few initiatives go beyond the conceptual level to demonstrate the advantages of the digital twin in manufacturing with use cases executed on actual CPS. The majority of these implementations concentrate on individual industrial systems or shop floors. Bottani et al. [42] present notable work with a digital twin of a production system based on discrete-event simulation. A digital twin is employed by Zhang et al. to improve a glass production process [43]. To improve scheduling, monitoring, and maintenance, Mourtzis et al. develop a digital twin concept based on OPC UA for real machines [44]. While Urbina Coronado et al. set up a digital twin of a shop floor for production control and optimisation purposes [46], Stark et al. describe the digital twin of a smart factory cell based on OPC UA [45]. The latter approach aims to monitor shop floor flows with respect to temporal parameters such as the processing or arrival times of specific manufacturing operations. A platform that facilitates the deployment of digital twins of machine tools is described by Liu et al. There is a relatively small number of approaches concerning digital twin implementations of specific CPS within manufacturing. For monitoring
and control, Lynn et al. create a digital twin of a milling machine [47]. The approach is tested using a real milling machine [48]. Without validation on an actual system, Luo et al. model a digital twin of a machine tool using the Unified Modeling Language (UML) [49]. Both of the latter approaches communicate via OPC UA. Liu et al. create a digital twin that converts a real machine tool's motion into a simulation-enhanced view in an AR setting, which enables monitoring and management [50]. The approach by Haag et al. provides a digital twin of a test bench for bending beams, based on simulation using the finite-element method (FEM). This digital twin includes a monitoring feature: after specified parameters are entered, the real system conducts the corresponding interactions, which are subsequently sent to an FEM simulation [51]. While there has been considerable success in creating digital twins for machine tools, research on real implementations of digital twins for material handling systems is lacking. Current material handling systems qualify as CPS, and to create comprehensive digital twins of CPPS, digital twins of every CPS are required. Therefore, not only machines but also material handling systems must be considered when creating a digital twin, particularly as material handling can make up a significant share of a production system's operational costs (see Section 10.1). These expenditures can be reduced by up to 30% by improving material handling; reducing material handling damage is one way to do this [52]. Furthermore, as shown by [37], only a few of the current use cases describe the implementation of the three originally defined digital twin functions (prediction, monitoring, and diagnosis) with bidirectional automatic data flow between the digital twin and the real system.
Moreover, no approach goes into great depth on how simulation-based control inputs are automatically sent to the real CPS to realise decision support. In addition to academic approaches, various commercial products promise to provide digital twin solutions for automated systems or material handling. With the use of programmable logic controllers, Rockwell Automation's digital twin software package makes it possible to model material handling systems digitally and link them to real systems. The application focuses mainly on design and virtual commissioning before the real system's operation. The simulation is primarily concerned with kinematics and does not address the specific physical properties of particular workpieces [53]. The automation company ABB provides a system called PickMaster that digitally simulates a real picking system in an effort to accelerate the commissioning process for picking robots. Manually selected, validated settings can be transferred to the real system [54]. The product was designed specifically for ABB picking robots. B&R's Industrial Physics module in Automation Studio creates digital twins from CAD data. In a hardware-in-the-loop approach, this can support virtual commissioning to evaluate
a system's controller software [55]. Furthermore, MapleSim, which builds a dynamic model of a system based on CAD data, is integrated into B&R Automation Studio. The program models all forces and torques, enabling the model to be used as a digital twin for component sizing (e.g. drive selection). Afterwards, the derived models can be applied to real control hardware [56]. Physics simulations of mechatronic systems are possible with SIEMENS' Mechatronics Concept Designer. For hardware-in-the-loop applications, the technology makes it possible to model mechatronic systems with physical properties such as inertia or friction. This solution likewise emphasises setting up a system's control parameters before it starts to operate. However, native capability for the optimisation of specific material flows during operation is missing [57]. ISG Virtuos enables the creation of physics simulations of material handling systems, much like the applications mentioned. In addition, there are various software packages with broad applicability for modelling and simulating engineering problems. Like simulation tools such as MATLAB's Simscape Multibody [60], OpenModelica supports multibody simulations [59] and is therefore in principle suitable for the intended use. These programs all have many potential uses and could be applied to create a digital twin implementation that fulfils the specifications of Section 10.2. For example, MATLAB even offers an OPC UA framework that would allow universal and bidirectional communication between the simulation and its real counterpart. The implementation effort would be comparable to the specified solution within the Python ecosystem, however, as these simulation frameworks do not come with native digital twin modules for the described material handling purpose.
In conclusion, no commercially available solution provides the capabilities required for the use of digital twins discussed in this study. Rockwell, ABB, and B&R are examples of systems with a strong emphasis on automation. These systems allow parameter selection for the real system but are typically vendor-specific and lack the adaptivity to integrate every required component. Other digital twin products (SIEMENS, ISG) offer physics simulation and a greater degree of adaptivity, but these tools strongly emphasise initial virtual commissioning and do not consider the repeated simulation of material flows in CPPS with variable routes and variable physical workpiece properties. These products are not designed to support actual operation through bidirectional data transfer or automated parameter selection through the modelling of specific processes as detailed in this study. Finally, a great deal of flexibility is provided by universal engineering simulation toolkits such as OpenModelica and MATLAB Simscape Multibody. Comparable solutions could be implemented in such environments with considerable effort; however, the proposed approach's methodology is not a native part of such systems.
10.4 Research Gap
It can be inferred from a survey of the state of the art that CPPS characteristics may cause physically induced disturbances in material flow processes. Accidents may occur as a result, and a CPPS's functionality may be hampered. Individual material flow processes can be physically simulated to address and prevent these problems. Despite this capability, there is as yet no approach from academia or industry that demonstrates the use of this simulation method while a manufacturing system is in operation. In their earlier work, the authors created a conceptual framework and a simple simulation model. This work must be considerably extended by interfacing the model to a real material handling system, creating suitable connection modes to monitor and control the real system on the basis of simulation, and encapsulating the simulation model (e.g. formalising information flows to and from the simulation model). Implementing a digital twin that connects a real material handling system with a physics simulation has the potential to address this shortcoming. This is made possible by the abundance of data that can be used as simulation inputs and by the decentralised simulation resources that any CPPS unit can use on demand. Moreover, most material handling systems in CPPS are CPS; consequently, they are equipped with sensors, actuators, and communication interfaces, which enable them to process on-demand control inputs and to provide movement data. Most current research on digital twins in manufacturing covers concepts, which serves as an essential starting point for real digital twin implementations. In contrast, the literature reveals few CPS implementations with corresponding digital twins. Nevertheless, it is essential to focus digital twin research on use cases, since this helps to identify obstacles to implementing digital twins as well as best practices [67, 69].
Otherwise, the practical value of digital twins cannot be assessed. The few implemented use cases mentioned are typically located in the machine tool industry, and they do not always describe automated bidirectional data flows covering the monitoring, diagnosis, and prediction functions. Despite the significant share of expenditures associated with material handling in a production system, no approach considers use cases for digital twins of material handling systems. Industrial solutions likewise fall short of what is required to support operational material flow processes. Most of these systems lack adaptability or are primarily focused on the commissioning phase without considering operation. The purpose of this work is to advance the state of the art by applying and encapsulating physics simulation to predict, monitor, and diagnose specific material flow processes while a production system is in operation. Hence, the research question of this chapter is the following: how can a digital twin of a material handling system be created and put into operation to support material flows on the basis of physics simulation?
The real system and the digital twin must be modelled to address this question.
10.4.1 Modelling
A digital twin is not regarded as an add-on but rather as an essential component of a CPS in a CPPS; this is the core premise of the modelling stages. Consequently, the real system (CPS) and the digital twin are seen as two systems that combine to form a new, superordinate system. From here on, this system of systems will be referred to as the "digital family" (Figure 10.3). As shown in Figure 10.3, the "digital twin" subsystem already comprises a sizeable number of components, including decision support modules and simulation models, as well as connections between those components (e.g. a simulation result that is passed to the decision support module). Likewise, the real system is composed of numerous components and the connections between them. Assuming that a system's complexity rises with the quantity and variety of its relationships and constituent parts, both systems contain a considerable amount of internal complexity [61]. To provide the intended functionality, linking the distinct "real system" and "digital twin" into a digital family increases the complexity of both original systems by introducing new relationships and interdependencies. Moreover, the components of a digital family may belong to many fields of study (e.g. mechanics, software, communication, and electronics). The digital family therefore exhibits a high degree of complexity. To manage this complexity, it seems appropriate to describe the digital family in a model-based way before it is actually implemented. The development of complex technical systems regularly employs this approach, known as model-based systems engineering (MBSE) (e.g. the integrated planning of products and production systems [62]).
By using a thorough system model to define a systems view of all aspects and interdependencies of the overall system, MBSE strives to ensure the required functionality and the envisioned benefits, as well as to prevent delays of the actual implementation owing to planning errors. Moreover, the resulting overall complexity must be controlled by dedicated models of the corresponding digital twins, particularly when multiple digital twins of CPS within a CPPS are to be coupled (as described in [31]). The digital family, along with its real-system and digital-twin subsystems, is modelled for this purpose. Following the definition of the requirements, UML is used to represent the structure and interactions within the digital family. UML is a graphical modelling language that offers a standardised way of visualising a system's
Figure 10.3: A system of systems — the digital family. The diagram shows the actual system (workpieces, sensing elements, mover, processor/controller, communication devices) and the digital twin (interface design, data processing, networks with databases, simulation models, decision support); the linkage between the two native systems creates new relationships.
design with regard to its structure, behaviour, and interactions. For this purpose, UML offers various diagram types that represent different system aspects [61].
10.4.2 Conditions
The description of the three digital twin functions – prediction, monitoring, and diagnosis – forms the basis of the requirement definition. In the prediction function, planned material flow processes are simulated prior to their actual execution on the material handling system. In this way, it is possible to test different material handling characteristics in order to maximise transport efficiency while reducing dynamic disturbances. The real system, where the material flow process is carried out, should then automatically receive the resulting material handling parameters. For the purpose of monitoring, the digital twin should enable the tracking and supervision of the workpiece's current state on the real material handling system. Consequently, the material handling system should provide live positional data to the digital twin, where it is continuously processed in the physics simulation. Through a simulation that reproduces the state of the real workpiece, this enables the detection of problems. The diagnosis function is used after a specific material flow process. The details of the transported workpiece as well as the movement data recorded by the real material handling system during the relevant procedure should be included. This should make it possible to reproduce the process in question in the physics simulation environment, along with any problems that occurred. The realisation of these scenarios shows the broad scope of the digital twin that must be modelled and implemented. Additional conditions must be met in accordance with Section 3:
– Real system: an experimental material handling system must be designed and implemented. This system must be capable of handling horizontal transfers of objects with different parameters. For the digital twin concept to be applicable, the system must be realised as a CPS.
– Support for simulation-based decision-making: the central component of the digital twin must be a physics simulation. This simulation has to support disturbance prediction (see Sections 3.1 and 3.2). To provide decision support and the functions mentioned, the simulation must be suitably integrated into the overall system. The creation of interfaces and the delivery of data fall under this category. To reduce the need for human intervention, the decision-making processes must also be automated.
– Network: the systems must be able to communicate in a way that satisfies the demands of the envisioned applications (for instance, the simulation frame rate). The
implementation should also fit into a CPPS context. Hence, the use of standardised communication protocols is necessary to ensure industrial applicability. In addition, as much of the system-to-system communication as possible should be automated.
– Control of the real system: where appropriate, simulation decisions must be handled in a manner that enables the generation of control inputs for the real system.
The following section models the digital family's system structure with these criteria in mind.
10.5 System Engineering
The systems within the overall system were modelled using UML class diagrams in accordance with the specifications. A class in this context represents the structure and properties of objects with similar characteristics and semantics [61]. The class diagram of the real material handling system is shown in Figure 10.4. The system consists of various mechanical components, including a timing belt, a workpiece platform, linear guide rails, and a frame, which physically convey the workpiece. Furthermore, a stepper motor drives the timing belt, acting as an actuator. An Arduino Mega 2560 microcontroller is used to drive the stepper motor. A more compact microcontroller, the Arduino Nano, reads the stepper motor's rotational position, acting as a movement sensor. A workstation is connected to both microcontrollers. OPCUA Server, a Python program, runs on PC2. OPC UA was chosen as a suitable means of communication since it offers a uniform interface for industrial communication. For this purpose, OPC UA maps the application-specific information to an object-oriented information model [63]. The OPCUA Server application communicates with OPCUA Client, its digital twin counterpart. Moreover, it receives position data from the Arduino Nano and sends control inputs to the Arduino Mega 2560. On the basis of this structure, the real system can be regarded as a CPS. Figure 10.11 shows the class diagram for the digital twin. The main digital components of the digital twin are those that are active on workstation PC1: the software Simulation, which incorporates a simulation model of the material handling system, serves as the core of the digital twin. The user input/output module provides data, which the input generator uses to configure the simulation environment. This includes converting raw data into a format that the simulation can process.
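The server/client pairing just described — the real system reporting its position while the twin returns control inputs — can be sketched with a plain TCP exchange. This is a stdlib-only stand-in purely to illustrate the bidirectional data flow; a real implementation would use an OPC UA stack with the information model cited above [63], and all names, ports, and values here are illustrative assumptions.

```python
# Stand-in for the OPC UA link: the "real system" side reports a position,
# the "digital twin" side answers with a control input, over loopback TCP.
import json
import socket
import threading

def real_system(srv: socket.socket, position: float) -> None:
    """'Real system' side: report the position, then receive a control input."""
    conn, _ = srv.accept()
    with conn:
        conn.sendall(json.dumps({"position": position}).encode())
        command = json.loads(conn.recv(1024))  # e.g. {"target_speed": ...}
        assert "target_speed" in command       # control input from the twin

def digital_twin(port: int, target_speed: float) -> float:
    """'Digital twin' side: read the position, send back a control input."""
    with socket.create_connection(("127.0.0.1", port)) as cli:
        state = json.loads(cli.recv(1024))
        cli.sendall(json.dumps({"target_speed": target_speed}).encode())
        return state["position"]

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # bind before starting the worker: no race
srv.listen(1)
port = srv.getsockname()[1]
worker = threading.Thread(target=real_system, args=(srv, 0.42))
worker.start()
position = digital_twin(port, target_speed=0.8)
worker.join()
srv.close()
```

Listening before the worker thread starts avoids a connect-before-listen race; in the described architecture this exchange would instead run continuously at the simulation frame rate.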
The decision support module processes the simulation results to produce relevant user information and derived control inputs, which are then sent to the OPC UA client for
Figure 10.4: Class diagram of the real system. The machine comprises a timing belt, workpiece pallet, linear guide rails, and frame as mechanical components; a stepper motor (mover) and a rotation sensor; Arduino Mega and Arduino Nano controller boards connected via serial links; and an OPC UA server running on PC2 that communicates with the OPC UA client over the protocol.
Figure 10.5: Prediction function sequence diagram. The input generator supplies values to the simulator; the simulation outputs are visualised and processed into process control specifications by the decision support module, which sends drive-system inputs through the OPC UA service and network to the controllers of the machine.
Chapter 10 Implementation of Digital Twin
transmission to the real system via Transmission Control Protocol (TCP) communication. In addition, motion data from the real system is received by the OPC UA client and can be sent either to the simulation or to the database that stores the historical process as a comma-separated values (CSV) file. This data can later be used as simulation input.
10.5.1 Interactions

UML sequence diagrams were used to model the interactions within the digital twin system, based on the described classes, their instances in the class diagrams (Section 4.2), and the three application scenarios. The interactions that occur during the prediction function are depicted in Figure 10.5. The physical effects of different material flow parameters are repeatedly simulated before a real, stepper-motor-powered material flow operation is executed. The results of each simulation run are visualised, recorded, and transmitted to the real system through the OPC UA interface as control inputs. After the monitoring function is started (see Figure 10.6), the current position of the real system's workpiece pallet is iteratively processed and sent to the simulation. The simulation status is visualised, and if any disturbances are found, a notification is displayed. A real material flow process serves as the basis for the interactions in the diagnosis function (see Figure 10.7). Its position is recorded and archived over time. When the diagnosis function is used later, this process is retrieved from the database (bottom left of Figure 10.7), and the process is then replicated using the historical motion data. The following implementation was built on top of the modelled structure and interactions of the digital twin system.
10.6 Application

10.6.1 Physical Set-Up

The implemented illustrative material handling system is shown in Figure 10.8. As modelled, the system comprises a stepper-motor-driven workpiece pallet that can execute varied dynamic motion. Because the pallet moves along linear guide rails, parts placed on it may experience horizontal acceleration or deceleration. A stepper motor (actuator) drives a timing belt that moves the workpiece pallet. An angular encoder (sensor) is used to determine the stepper motor's angle.
Figure 10.6: Monitoring function sequence diagram. In a monitoring loop, the machine's present condition is read by the controllers, verified, and sent through the OPC UA network and service to the simulator; the simulated output is visualised, and disruption notifications are raised by the decision support module.
Figure 10.7: Flowchart of the diagnosis functionality. The present state of a material flow process is captured via the controllers and the OPC UA network and archived; for diagnosis, the input generator retrieves the historical process data, the simulator replays the material flow in a simulation loop, and the result is visualised.
Figure 10.8: Material handling system design.
Microcontroller boards (Arduino Mega 2560 and Arduino Nano) connect the sensor and actuator to PC2. The two microcontroller boards perform different dedicated tasks. The Arduino Mega 2560 manages the movement of the workpiece pallet: the implemented program waits for a communicated input that specifies the pallet's acceleration. The movement is then executed with the set acceleration and a fixed target speed of 10,000 steps per second using the AccelStepper library. The workpiece pallet is moved incrementally from the starting location until the stepper motor decelerates to end the transport at the target position of x = 0.36 m. An example graph of this motion profile is shown in Figure 10.9. This arrangement enables the material handling system to demonstrate the effect of different accelerations, and hence of dynamic loads, on a particular workpiece. The output values of the angular encoder, the sensor that measures the stepper motor's rotation, are processed in real time by the second microcontroller (Arduino Nano). Because of the fixed relationship between the rotation of the motor and the translation of the workpiece pallet, this also makes it possible to track the pallet's current position.
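The described move (ramp up at the configured acceleration, cruise at the fixed target speed, decelerate to stop at the target position) can be approximated numerically. The sketch below assumes units of steps and steps/s and is not the AccelStepper implementation itself; the 20,000-step distance in the usage example is a hypothetical figure.

```python
def motion_profile(accel, v_max, x_target, dt=0.001):
    """Sample (t, x, v) for a ramp-up / cruise / ramp-down move.

    accel in steps/s^2, v_max in steps/s, x_target in steps. The symmetric
    deceleration starts once the remaining distance equals the braking
    distance v^2 / (2 * accel).
    """
    t, x, v = 0.0, 0.0, 0.0
    samples = [(t, x, v)]
    while x < x_target:
        if x_target - x <= v * v / (2 * accel):   # within braking distance
            v = max(v - accel * dt, 0.0)
            if v == 0.0:
                x = x_target                      # snap the final discrete step
        else:
            v = min(v + accel * dt, v_max)        # accelerate, capped at v_max
        x = min(x + v * dt, x_target)             # never overshoot the target
        t += dt
        samples.append((t, x, v))
    return samples

# Hypothetical 20,000-step transport at 4,000 steps/s^2 and 10,000 steps/s.
samples = motion_profile(4_000, 10_000, 20_000)
```

For this short distance the profile is triangular (the cruise speed is never reached), which matches the qualitative shape of the position and velocity curves in Figure 10.9.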
10.6.2 Infrastructure for Communications

A thorough communication architecture that allows the realisation of the modelled interactions is required for the implementation of the digital twin system (see Section 10.2). This applies both to internal communication within the
Figure 10.9: Example graphs for workpiece pallet location and velocity.
real material handling system and to communication between the real material handling system and its digital twin. Figure 10.10 shows the overall communication architecture. In the real system, both microcontrollers communicate with PC2 through a serial connection. To ensure industrial usability, the connection between the digital twin and the real system is realised through OPC UA (see Section 10.2).
Figure 10.10: The communication architecture.
The client-server design was chosen, as it is one of the most prevalent configurations in industry [63]. A Python library for OPC UA [64] was selected to implement OPC UA. The client side, represented by the digital twin as a program running on PC1, makes requests to the server side, represented by the material handling system, which
answers those requests. On the server side, an address space was created along with a server name. A node containing the object "Parameter" is created inside that address space. This object was populated with several variables that change during the operation of the communication between the server and client sides of the digital twin system. This implementation enables TCP communication between the material handling system and the digital twin. This implies that both the internet and an industrial network (e.g. within a cyber-physical production system) can be used for communication.
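The chapter realises this exchange with an OPC UA client-server pair. As a library-free illustration of the same request/response pattern over TCP, the following stdlib-only sketch lets a "server" (standing in for the material handling system on PC2) answer a "client" (standing in for the digital twin on PC1). The JSON payload and variable names are invented stand-ins for the OPC UA "Parameter" node, not the actual address space.

```python
import json
import socket
import threading

# Stand-in for the OPC UA "Parameter" object: variables that change while
# the digital twin system is running (names are hypothetical).
PARAMETERS = {"acceleration": 4_000, "pallet_position_m": 0.0}

def serve_once(sock):
    """Answer a single client request with the current parameter values."""
    conn, _ = sock.accept()
    with conn:
        conn.recv(64)  # request, e.g. b"read Parameter"
        conn.sendall(json.dumps(PARAMETERS).encode())

# Server side (material handling system, PC2).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # ephemeral port, fine for a local sketch
srv.listen(1)
threading.Thread(target=serve_once, args=(srv,), daemon=True).start()

# Client side (digital twin, PC1): request the Parameter object.
cli = socket.create_connection(srv.getsockname())
cli.sendall(b"read Parameter")
received = json.loads(cli.recv(1024).decode())
cli.close()
srv.close()
```

In the real implementation, OPC UA adds what this sketch lacks: a typed, object-oriented address space, discovery, and a standardised industrial interface on top of the TCP transport.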
10.7 Digital Twin

The material handling system's digital twin runs on PC1. Its structure and interactions follow the models from Section 10.4. The physics simulation was used as the foundation of the digital twin. The previously created basic pyBullet model served as the starting point for this step (see Section 10.2). This basic model was first extended with a physical representation of the actual material handling system, complete with all constraints and moving elements. The result is a physics model of the material handling system that is adaptable to different workpiece geometries. Nevertheless, using the simulation model in an industrial setting required pre- and post-processing of the simulation data. To achieve this, modules for decision support and input generation were added to the simulation model. Both modules were customised to the characteristics of this simulation model and the real system, and both have functional interfaces to it. A user input/output module was implemented to offer straightforward text-based user interaction alongside pyBullet's visualisation function. Figure 10.11 shows how the modules are connected to one another.
10.7.1 Prediction

The goal of the prediction function is to simulate the effects of different material handling parameters before choosing a configuration that produces the best results for the real material handling operation (see Section 10.2). In this case, the variable parameter between configurations is the acceleration of the workpiece pallet during material handling. While higher accelerations enable faster material handling, they also increase the mechanical load on the transported workpiece. The effect of different acceleration values on the stability of the workpiece is examined using the physics simulation. The chosen acceleration value
Figure 10.11: Digital twin sequence diagram. User input/output feeds decision support and the input generator, which builds simulation input from the part design and its physical properties; the simulator exchanges live data and simulation results with the OPC UA client and servers over the protocol, and historical processes are stored in and restored from the data collection.
Figure 10.12: Iterative process loop for determining material flow process parameters as part of the prediction function. The input generator varies the process parameters, the physics simulation evaluates each variant, decision support assesses the outcome, and the selected speed is applied to the actual system.
cannot be used for the real process if the simulation shows that this stability cannot be maintained for the given configuration (e.g. the workpiece slips or tips over because of the acceleration). The reasoning performed by the prediction function is depicted in Figure 10.12. A list of candidate accelerations is generated by the input generator module. A simulation run is conducted for each acceleration setting to evaluate its effect on the stability of the transported workpiece. The simulation results are processed by the decision support module: all accelerations that caused disturbances are discarded, and the best value among the remaining accelerations is selected and sent as a control input to the real system, where the material handling operation is carried out. This cycle identifies the acceleration parameter that allows the fastest safe material handling.
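The selection logic of the prediction function can be sketched as follows, with the pyBullet run abstracted behind a caller-supplied stability check. The candidate range and the 12,000 steps/s^2 tipping threshold in the usage example are hypothetical.

```python
def select_acceleration(candidates, simulate_stable):
    """Pick the highest acceleration whose simulated transport stays stable.

    `simulate_stable` stands in for a full physics simulation run; here it
    is any callable returning True when the workpiece neither tips nor slips.
    """
    stable = [a for a in sorted(candidates) if simulate_stable(a)]
    if not stable:
        raise ValueError("no candidate acceleration keeps the workpiece stable")
    return stable[-1]  # fastest of the disturbance-free settings

# Hypothetical stability model: the part is disturbed above 12,000 steps/s^2.
chosen = select_acceleration(range(4_000, 20_001, 1_000),
                             lambda a: a <= 12_000)
```

In the implemented system the chosen value is then sent to the real system over OPC UA as a control input.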
10.7.2 Monitoring

All monitoring features were implemented. The program at the core of this function iteratively reads the position of the pallet holding the workpiece over the serial connection to the real system. This software, which runs on PC2, uses the OPC UA interface to continuously send this position data with the corresponding timestamps. The data is then passed to the dynamic physics simulation in the digital twin on PC1. As a result, the simulation accurately reproduces the position of the workpiece pallet over time, as well as any physical contact that may have occurred with the transported workpiece. The physics simulation is visualised, allowing a human operator to use the visualisation to track
down irregularities. However, since the simulation continuously determines the horizontal position of the workpiece relative to the workpiece pallet, disturbances can also be detected automatically by the simulation. This led to the implementation of a collision detection algorithm based on the following set of rules: the position of the workpiece pallet and the position of the workpiece in three coordinate directions are recorded for each simulation frame. If the absolute difference between the workpiece's position relative to the pallet at a given time, x_t, and its initial relative position, x_0, exceeds a specific tolerance, an automated alert warning is issued via the digital twin interface.
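The stated rule set can be expressed as a small per-frame check. The 5 mm tolerance below is an assumed value, since the chapter does not state the magnitude of the tolerance estimator.

```python
def detect_disturbance(pallet_xyz, part_xyz, rel0, tol=0.005):
    """Flag a disturbance when the part has shifted on the pallet.

    Compares the part's current position relative to the pallet (x_t) against
    its initial relative position `rel0` (x_0); any coordinate direction
    deviating by more than `tol` (metres, an assumed tolerance) triggers
    an alert. Positions come from the physics simulation, one call per frame.
    """
    rel = [p - q for p, q in zip(part_xyz, pallet_xyz)]
    return any(abs(r - r0) > tol for r, r0 in zip(rel, rel0))
```

For example, a part resting 2 cm above the pallet origin raises no alert while it rides along, but does as soon as it slides along the transport axis.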
10.7.3 Diagnosis

For diagnosis, the digital twin can be loaded with data from past material handling activities [5-6, 25-26]. To archive and restore material flow processes, a function was built that can convert real system movements into CSV files and CSV file contents into simulation inputs for diagnostic purposes. This data also contains information on the specific transported goods and a table showing the position of the workpiece pallet over time during the transport. The connection between the real system and the digital twin, through which every material handling operation is documented and saved, is the basis for this function. The physics simulation can access this data to replicate the process in the event of unexpected occurrences. To achieve this, a program module was created that analyses historical processes and creates simulation input by continuously supplying the simulation's position according to the temporal characteristics of the real process. The corresponding workpiece data is thus linked to the described graphs and tables. Replaying earlier material flows and the associated physical events is possible using the diagnosis function. On this basis, the effects of different parameters can be evaluated.
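The archive/restore round trip for a material flow process might be sketched with the standard csv module as follows; the column names and sample values are illustrative, not the chapter's actual file format.

```python
import csv
import io

def archive_process(rows, fh):
    """Write (time_s, position_m) samples of a material flow process as CSV."""
    writer = csv.writer(fh)
    writer.writerow(["time_s", "pallet_position_m"])
    writer.writerows(rows)

def restore_process(fh):
    """Read an archived process back into (float, float) samples
    that can be replayed as simulation input."""
    reader = csv.reader(fh)
    next(reader)  # skip the header row
    return [(float(t), float(x)) for t, x in reader]

# Round trip through an in-memory buffer (a file would be used in practice).
buf = io.StringIO()
archive_process([(0.0, 0.0), (0.5, 0.12), (1.0, 0.36)], buf)
buf.seek(0)
samples = restore_process(buf)
```

Replaying then amounts to feeding the restored samples to the simulation at the recorded timestamps, so that the simulated pallet follows the historical trajectory.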
10.8 Use Cases

The goal of the use case is to show how the examined subject can be applied through an example application scenario. Connectivity over OPC UA was achieved by connecting the real system and its digital counterpart to a single local area network (LAN). The use of the digital twin is illustrated using the example of a cylindrical rod and the three digital twin functions of prediction, monitoring, and diagnosis. A physics model of the rod was created. In this context, the three operational modes of the digital twin's practical use in a hypothetical CPPS are outlined.
10.8.1 Prediction

In the cyber-physical production system mentioned, the default setting for material handling acceleration is 4,000 steps/s², while the speed is set to 9,000 steps/s. This results in extended material flow times but also allows safe transport. Therefore, the prediction function is started before the actual transport. The acceleration of the workpiece pallet is varied automatically during several simulation sessions. Based on the findings, 15,000 steps/s² is chosen as a suitable parameter. The material handling system, which carries out the transport operation, then receives this value. Applied to all transports in the CPPS, this could result in improved performance and better utilisation of material handling equipment. In this way, a production control system that attempts to perform material flows at their respective maximum speeds to improve throughput can be built on the prediction function. Although earlier approaches (e.g. [65] or [18]) recognised this beneficial effect of faster material handling speeds on throughput times, the physical feasibility of faster speeds and the resulting physical phenomena were not considered. The proposed method fills this gap by making it possible to identify individual maximum material flow rates depending on the properties of the transported workpieces. As stated, the goal is to improve a production system's performance by speeding up throughput. Thus, the suggested approach, which focuses on operational material handling methods, has a similar objective to scheduling systems that consider the logical and organisational aspects of material flows (e.g. task assignment). The prediction approach therefore enhances scheduling methods rather than replacing them.
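To illustrate why the higher acceleration shortens material flow times, a closed-form estimate of a trapezoidal move's duration can compare the default and predicted settings. The 40,000-step transport distance below is a hypothetical figure; only the two accelerations and the 9,000 steps/s cruise speed come from the text.

```python
def move_time(distance, accel, v_max):
    """Duration of a symmetric trapezoidal (or triangular) move.

    distance in steps, accel in steps/s^2, v_max in steps/s. The two ramps
    together cover v_max^2 / accel steps; any remainder is cruised at v_max.
    """
    if distance >= v_max ** 2 / accel:            # v_max is reached: trapezoid
        return 2 * v_max / accel + (distance - v_max ** 2 / accel) / v_max
    return 2 * (distance / accel) ** 0.5          # triangle: v_max never reached

# Hypothetical 40,000-step transport at the stated cruise speed.
slow = move_time(40_000, 4_000, 9_000)    # default acceleration
fast = move_time(40_000, 15_000, 9_000)   # acceleration chosen by prediction
```

Under these assumptions the predicted setting saves roughly a quarter of each transport's duration, which compounds over all transports in the CPPS.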
Following the optimal scheduling of manufacturing tasks and hence of material handling procedures, productivity can be further increased by optimising the transports based on their physical behaviour. Furthermore, the prediction approach goes beyond earlier implementations of material flow physics simulation. Prediction generally seeks to anticipate how a system will behave in the future. Virtual commissioning methods currently in use, such as [8] or [24] (see Section 10.2), allow long-term projections of a material handling system's configurations. This approach mainly targets the planning phase of a production system and executes a few virtual commissioning runs in manually designed simulation environments. Accordingly, the prediction is made using the workpieces that are expected to be transported when the manufacturing system's operational phase begins. This can be used to produce known items in large quantities with similar physical characteristics, but it does not satisfy the CPPS's [39, 58] requirements for material flows (see Section 10.1). In contrast, the digital twin described in this study makes it possible to over
and over predict and simulate a material flow while it is being created in a plant with varying physical part properties. The simulation environment is automatically constructed for each individual material flow and adapts to specific workpieces because of its flexibility (see Figure 10.11). This support for online prediction of specific material flows is provided neither by commercial digital twin solutions nor by virtual commissioning methods currently in use (see Section 10.3).
10.8.2 Monitoring

Another material flow process using a similar part is carried out for monitoring. In this instance, a disturbance is caused by the high acceleration of 10,000 steps/s². The disturbance detection algorithm not only detects the disturbance but also visualises it, resulting in an alert signal. When a specific surface quality is required, an event like the one in the illustration may damage the workpiece. The digital twin helps to reduce system downtime by enabling fast discovery of problems. In addition, all movement data is saved during monitoring. Consequently, parameters can be adjusted to prevent disturbances from reoccurring. Another example, of a transported flange, can be used to illustrate these issues. The flange falls off the workpiece pallet and comes to rest between the linear guide rails because of excessive acceleration. Because of the shape of the flange, the workpiece pallet is likely to jam once it returns. A workpiece may be damaged, the material handling system may be harmed, and delays may occur throughout the system. Using the monitoring approach, the digital twin registers the movement of the pallet and the physical properties of the flange. Without the digital twin, only positional tracking sensor systems based on RFID, as proposed in [66], could otherwise provide automated detection and avoidance of such subsequent failures. These systems provide continuous tracking of the location of workpieces. However, to achieve this, not only would sensors be required; movable workpieces would also have to be equipped with the appropriate tags. This results in a significant increase in effort, particularly when moving unfinished pieces between stages of the production process. Visual system observation by an operator, which would result in high staff costs, is another detection option.
In addition to sensor-based tracking, there are currently available digital twin approaches that attempt to track material flows at the shop floor level based on acquired discrete timestamps, such as the arrival time of a part on the shop floor. Theoretically, these methods would enable the generation of a disruption notification after a part has not reached its destination within a predetermined amount of time. However, the existing methods do not address this issue, and disruption
detection can only be achieved after a delay of several moments. In the case referred to in this use case, the pallet would by then have repositioned itself and caused a jam. Other methods allow the detection of disturbances by monitoring mechanical components such as bearings (e.g. [68]). These techniques can be used with material handling systems that contain such bearings. However, none of the described disturbances can be found if the physics of the moved objects is not taken into account. In contrast to these methods, the proposed digital twin solution provides near real-time monitoring and does not require significant additional infrastructure, since the monitoring function merely analyses routine data from the transport motor. The actual monitoring is then performed within the digital twin together with the physics simulation, requiring only digital knowledge of the workpiece. This demonstrates the potential for disturbance detection without the requirement for costly sensors or human monitoring. In this way, the physics simulation can act as a virtual or soft sensor that processes data that is easy to collect (the movement of the motor) to measure a value that is hard to detect (a disturbance). In addition, the OPC UA connection enables automated feedback to the real system and makes the monitoring function remotely accessible [67, 69]. Operators could rely on notifications based on simulations rather than constantly watching the real systems.
10.8.3 Diagnosis

In the application under consideration, the workpiece is transported without the use of the monitoring or prediction mode. However, the digital twin keeps track of the workpiece pallet's position over time. In this particular instance, operators chose material handling settings from a previous procedure that involved a workpiece with different physical characteristics. A disturbance during the real transfer is detected, and the diagnosis function in the digital twin is then started. The movement data is loaded into the physics simulation by selecting the specific ID of the material flow process. The disturbance can thus be observed in the simulation environment. Improved parameters can be derived from this basis. Other issues can also be found by replicating the physics of material handling in diagnosis mode. For instance, the diagnostic simulation can show that a disturbance that did occur was not expected to. Such details may point to further sources of the issue, such as lubricant spills on workpiece pallets or asymmetrical part shapes that alter the physical behaviour of the material handling process.
10.9 Overview and Prospects

Because of the various pathways and specific workpieces, material flows in cyber-physical production systems can become very complex, which can cause physically induced disturbances. These can result in delays or accidents, but they can be controlled by emulating the real physical interaction that occurs between the workpieces and the material handling systems while the latter are in use. A digital twin is created when such a physical simulation model is coupled with a real material handling system, and it is envisioned that this is essential for a CPPS to work. The primary function of digital twins is to provide decision support for real systems by coupling simulation models with operational data. There have not been many known instances of digital twins being used in manufacturing beyond the machine tool industry. To address this current lack of tangible digital twin implementations in manufacturing research, an integrated system comprising a real material handling system and its physics-simulation-based digital twin was modelled and built in this work. A real use case illustrated the many benefits of the solution with respect to the three digital twin functions of prediction, monitoring, and diagnosis. Additional emphasis was placed on automated communication, decision support, and simulation-based control of the real system. The approach can be used in industrial settings thanks to communication over OPC UA. Using UML diagrams to model the whole system prior to implementation has proven to be a successful technique for controlling system complexity and ensuring the desired functionality.
References
[1] S. Makridakis, The Forthcoming Artificial Intelligence (AI) Revolution: Its Impact on Society and Firms, Futures 90, 2017, 46–60.
[2] F. Tao, J. Cheng, Q. Qi, M. Zhang, H. Zhang, F. Sui, Digital Twin-driven Product Design, Manufacturing and Service with Big Data, Int. J. Adv. Manuf. Technol. 94(9), 2018, 3563–3576.
[3] Anon, What is a Digital Twin? https://www.ge.com/digital/applications/digital-twin (Accessed May 6, 2020).
[4] Anon, Digital Twin. https://www.plm.automation.siemens.com/global/en/our-story/glossary/digital-twin/24465 (Accessed May 6, 2020).
[5] Anon, Cheat Sheet: What is Digital Twin? https://www.ibm.com/blogs/internet-of-things/iot-cheatsheet-digital-twin/ (Accessed May 6, 2020).
[6] D. Jones, C. Snider, A. Nassehi, J. Yon, B. Hicks, Characterising the Digital Twin: A Systematic Literature Review, CIRP J. Manuf. Sci. Technol. 29, 2020, 36–52.
[7] B. Marr, What Is Digital Twin Technology – And Why Is It So Important?, 2017. https://www.forbes.com/sites/bernardmarr/2017/03/06/what-is-digital-twin-technology-and-why-is-it-so-important/#43fa67be2e2a (Accessed May 6, 2020).
[8] B. Marr, 7 Amazing Examples of Digital Twin Technology In Practice, 2019. https://www.forbes.com/sites/bernardmarr/2019/04/23/7-amazing-examples-of-digital-twin-technology-in-practice/#428398a56443 (Accessed May 6, 2020).
[9] T. Mukherjee, T. DebRoy, A Digital Twin for Rapid Qualification of 3D Printed Metallic Components, Appl. Mater. Today 14, 2019, 59–65.
[10] Anon, Digital Twin – Towards a Meaningful Framework, London, 2019.
[11] M. Bevilacqua, E. Bottani, F. E. Ciarapica, F. Costantino, L. Di Donato, A. Ferraro, G. Mazzuto, A. Monteriù, G. Nardini, M. Ortenzi, M. Paroncini, M. Pirozzi, M. Prist, E. Quatrini, M. Tronci, G. Vignali, Digital Twin Reference Model Development to Prevent Operators’ Risk in Process Plants, Sustainability 12(3), 2020, 1088.
[12] F. Tao, M. Zhang, A. Y. C. Nee, Digital Twin Driven Smart Manufacturing, Academic Press/Elsevier, 2019.
[13] Anon, Virtual Singapore. https://www.nrf.gov.sg/programmes/virtual-singapore (Accessed May 6, 2020).
[14] Anon, Forging the Digital Twin in Discrete Manufacturing. https://discover.3ds.com/forging-digital-twin-discrete-manufacturing (Accessed May 6, 2020).
[15] F. Tao, Q. Qi, L. Wang, A. Y. C. Nee, Digital Twins and Cyber-physical Systems toward Smart Manufacturing and Industry 4.0: Correlation and Comparison, Engineering 5(4), 2019, 653–661.
[16] L. Zhang, X. Chen, W. Zhou, T. Cheng, L. Chen, Z. Guo, B. Han, L. Lu, Digital Twins for Additive Manufacturing: A State-of-the-art Review, Appl. Sci. 10(23), 2020, 8350.
[17] Anon, Aconity3D Equipment. https://aconity3d.com/equipment/ (Accessed May 10, 2020).
[18] Y. Hagedorn, F. Pastors, Process Monitoring of Laser Beam Melting, Laser Tech. J. 15(2), 2018, 54–57.
[19] D. Editors, Markforged Debuts Blacksmith Artificial Intelligence (AI) Software for Metal 3D Printing, 2019. https://www.digitalengineering247.com/article/markforged-debuts-blacksmith-artificial-intelligence-ai-software-for-metal-3d-printing/ (Accessed May 10, 2020).
[20] 039 Development & Demonstration of Open-Source Protocols for Powder Bed Fusion AM, 2020. https://www.americamakes.us/portfolio/4039-development-demonstration-open-source-protocols-powder-bed-fusion-additive-manufacturing-pbfam/ (Accessed February 2021).
[21] P. Stavropoulos, P. Foteinopoulos, A. Papacharalampopoulos, H. Bikas, Addressing the Challenges for the Industrial Application of Additive Manufacturing: Towards a Hybrid Solution, Int. J. Lightweight Mater. Manuf. 1(3), 2018, 157–168.
[22] G. Tapia, A. Elwany, A Review on Process Monitoring and Control in Metal-based Additive Manufacturing, J. Manuf. Sci. Eng. 136(6), 2014.
[23] S. K. Everton, M. Hirsch, P. Stravroulakis, R. K. Leach, A. T. Clare, Review of In-situ Process Monitoring and In-situ Metrology for Metal Additive Manufacturing, Mater. Des. 95, 2016, 431–445.
[24] D. Mishra, A. Gupta, P. Raj, A. Kumar, S. Anwer, S. K. Pal, D. Chakravarty, S. Pal, T. Chakravarty, A. Pal, P. Misra, S. Misra, Real Time Monitoring and Control of Friction Stir Welding Process Using Multiple Sensors, CIRP J. Manuf. Sci. Technol. 30, 2020, 1–11. https://doi.org/10.1016/j.cirpj.2020.03.004
[25] A. Vandone, S. Baraldo, A. Valente, Multisensor Data Fusion for Additive Manufacturing Process Control, IEEE Robot. Autom. Lett. 3(4), 2018, 3279–3284.
[26] Anon, AM Machine and Process Control Methods for Additive Manufacturing. https://www.nist.gov/programs-projects/am-machine-and-process-control-methods-additive-manufacturing (Accessed May 20, 2020).
[27] Anon, Bridge Digital and Physical Worlds with Digital Twin Technology, 2020. https://www.sap.com/australia/products/digital-supply-chain/digital-twin.html (Accessed May 10, 2020).
[28] Anon, Solutions – Digital Twins, 2020. https://www.lanner.com/en-us/solutions/digital-twin.html (Accessed May 10, 2020).
[29] C. Chen, K. Li, M. Duan, K. Li, Chapter 6 – Extreme Learning Machine and Its Applications in Big Data Processing, in: H.-H. Hsu, C.-Y. Chang, C.-H. Hsu (Eds.), Big Data Analytics for Sensor-Network Collected Intelligence, Academic Press, 2017, pp. 117–150.
[30] R. J. Martis, V. P. Gurupur, H. Lin, A. Islam, S. L. Fernandes, Recent Advances in Big Data Analytics, Internet of Things and Machine Learning, Future Gener. Comput. Syst. 88, 2018, 696–698.
[31] L. Vendra, A. Malkawi, A. Avagliano, Standardization of Additive Manufacturing for Oil and Gas Applications, in: Offshore Technology Conference, Houston, Texas, USA, 2020, p. 9.
[32] L.-X. Lu, N. Sridhar, Y.-W. Zhang, Phase Field Simulation of Powder Bed-based Additive Manufacturing, Acta Mater. 144, 2018, 801–809.
[33] G. Vastola, G. Zhang, Q. X. Pei, Y. W. Zhang, Controlling of Residual Stress in Additive Manufacturing of Ti6Al4V by Finite Element Modeling, Addit. Manuf. 12, 2016, 231–239.
[34] D. Gu, C. Ma, M. Xia, D. Dai, Q. Shi, A Multiscale Understanding of the Thermodynamic and Kinetic Mechanisms of Laser Additive Manufacturing, Engineering 3(5), 2017, 675–684.
[35] W. King, A. T. Anderson, R. M. Ferencz, N. E. Hodge, C. Kamath, S. A. Khairallah, Overview of Modelling and Simulation of Metal Powder Bed Fusion Process at Lawrence Livermore National Laboratory, Mater. Sci. Technol. 31(8), 2015, 957–968.
[36] C. Li, C. H. Fu, Y. B. Guo, F. Z. Fang, A Multiscale Modeling Approach for Fast Prediction of Part Distortion in Selective Laser Melting, J. Mater. Process. Technol. 229, 2016, 703–712.
[37] M. Markl, C. Korner, Multiscale Modeling of Powder Bed-based Additive Manufacturing, Annu. Rev. Mater. Res. 46(1), 2016, 93–123.
[38] D. R. Gunasegaram, A. B. Murphy, S. J. Cummins, V. Lemiale, G. W. Delaney, V. Nguyen, Y. Feng, Aiming for Modeling-assisted Tailored Designs for Additive Manufacturing, TMS2017, The Minerals, Metals & Materials Society, San Diego, CA, 2017, 91–102.
[39] F. Ahsan, L. Ladani, Temperature Profile, Bead Geometry, and Elemental Evaporation in Laser Powder Bed Fusion Additive Manufacturing Process, JOM 72(1), 2020, 429–439.
[40] J. Romano, L. Ladani, J. Razmi, M. Sadowski, Temperature Distribution and Melt Geometry in Laser and Electron-beam Melting Processes – A Comparison among Common Materials, Addit. Manuf. 8, 2015, 1–11.
[41] S. A. Khairallah, A. A. Martin, J. R. I. Lee, G. Guss, N. P. Calta, J. A. Hammons, M. H. Nielsen, K. Chaput, E. Schwalbach, M. N. Shah, M. G. Chapman, T. M. Willey, A. M. Rubenchik, A. T. Anderson, Y. M. Wang, M. J. Matthews, W. E. King, Controlling Interdependent Meso-nanosecond Dynamics and Defect Generation in Metal 3D Printing, Science 368(6491), 2020, 660–665.
[42] Airbus 320 – Autopilot. https://www.aviatorsbuzz.com/airbus-320-autopilot/ (Accessed February 2021).
[43] Future of Driving, 2021. https://www.tesla.com/en_AU/autopilot?redirect=no (Accessed February 2021).
[44] S. S. H. Razvi, S. C. Feng, A. Narayanan, Y. T. Lee, P. Witherell, A Review of Machine Learning Applications in Additive Manufacturing, in: International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, ASME, Anaheim, CA, USA, 2019.
[45] H. Bikas, P. Stavropoulos, G. Chryssolouris, Additive Manufacturing Methods and Modelling Approaches: A Critical Review, Int. J. Adv. Manuf. Technol. 83(1), 2016, 389–405.
[46] Y. Huang, M. Leu, J. Mazumder, M. A. Donmez, Additive Manufacturing: Current State, Future Potential, Gaps & Needs, and Recommendations, ASME J. Manuf. Sci. Eng. 137, 2015, 014001-1–014001-10.
[47] D. Editors, Registration Now Open for America Makes Virtual Mini TRX, 2020. https://www.digitalengineering247.com/article/registration-now-open-for-america-makes-virtual-mini-trx (Accessed December 2020).
182
Chapter 10 Implementation of Digital Twin
[48] D. Gunasegaram, B. Smith, MAGMAsoft Helps Assure Quality in a Progressive Australian Iron Foundry, in: 32nd Annual Convention of the Australian Foundry Institute, Australian Foundry Institute, Fremantle, Australia, 2001, pp. 99–104. [49] H. Baumgartl, J. Tomas, R. Buettner, M. Merkel, A Deep Learning-based Model for Defect Detection in Laser-powder Bed Fusion Using In-situ Thermographic Monitoring, Prog. Addit. Manuf. 5(3), 2020, 277–285. [50] A. Gaikwad, B. Giera, G. M. Guss, J.-B. Forien, M. J. Matthews, P. Rao, Heterogeneous Sensing and Scientific Machine Learning for Quality Assurance in Laser Powder Bed Fusion – A Single-track Study, Addit. Manuf. 36, 2020, 101659. [51] S. Shevchik, T. Le-Quang, B. Meylan, F. V. Farahani, M. P. Olbinado, A. Rack, G. Masinelli, C. Leinenbach, K. Wasmer, Supervised Deep Learning for Real-time Quality Monitoring of Laser Welding with X-ray Radiographic Guidance, Sci. Rep. 10(1), 2020, 3389. [52] B. Yuan, G. M. Guss, A. C. Wilson, S. P. Hau-Riege, P. J. DePond, S. McMains, M. J. Matthews, B. Giera, Machine-learning-based Monitoring of Laser Powder Bed Fusion, Adv. Mater. Technol. 3(12), 2018, 1800136. [53] A. Hoekstra, B. Chopard, P. Coveney, Multiscale Modelling and Simulation: A Position Paper, Philos. Trans. R. Soc. A: Math. Phys. Eng. Sci. 372, 2021, 2014) 20130377. [54] Anon, Exascale Computing Project, 2020〈https://www.exascaleproject.org/〉. (Accessed December 2020). [55] nanoHUB.〈https://nanohub.org/〉. (Accessed February 2021). [56] D. R. Gunasegaram, D. J. Farnsworth, T. T. Nguyen, Identification of Critical Factors Affecting Shrinkage Porosity in Permanent Mold Casting Using Numerical Simulations Based on Design of Experiments, J. Mater. Process. Technol. 209(3), 2009, 1209–1219. [57] M. Alber, A. Buganza Tepole, W. R. Cannon, S. De, S. Dura-Bernal, K. Garikipati, G. Karniadakis, W. W. Lytton, P. Perdikaris, L. Petzold, E. 
Kuhl, Integrating Machine Learning and Multiscale Modeling – Perspectives, Challenges, and Opportunities in the Biological, Biomedical, and Behavioral Sciences, Npj Digit. Med. 2(1), 2019, 115. [58] T. DebRoy, H. L. Wei, J. S. Zuback, T. Mukherjee, J. W. Elmer, J. O. Milewski, A. M. Beese, A. WilsonHeid, A. De, W. Zhang, Additive Manufacturing of Metallic D.R. Gunasegaram Et Al. Additive Manufacturing 46 (2021) 102089 16 Components – Process, Structure and Properties, Prog. Mater. Sci. 92, 2018, 112–224. [59] S. Haeri, Optimisation of Blade Type Spreaders for Powder Bed Preparation in Additive Manufacturing Using DEM Simulations, Powder Technol. 321, 2017, 94–104. [60] D. Powell, A. E. W. Rennie, L. Geekie, N. Burns, Understanding Powder Degradation in Metal Additive Manufacturing to Allow the Upcycling of Recycled Powders, J. Clean. Prod. 268, 2020, 122077. [61] L. Ladani, Additive Manufacturing of Metals Materials, Processes, Tests, and Standards. DEStech publications Inc, 2020. 978-1-60595-600-8, © 2021, 265 pages, 6×9, HC book. [62] R. Shi, S. Khairallah, T. W. Heo, M. Rolchigo, J. T. McKeown, M. J. Matthews, Integrated Simulation Framework for Additively Manufactured Ti-6Al-4V: Melt Pool Dynamics, Microstructure, Solid-state Phase Transformation, and Microelastic Response, JOM. 71(10), 2019, 3640–3655. [63] R. Shi, S. A. Khairallah, T. T. Roehling, T. W. Heo, J. T. McKeown, M. J. Matthews, Microstructural Control in Metal Laser Powder Bed Fusion Additive Manufacturing Using Laser Beam Shaping Strategy, Acta Mater. 184, 2020, 284–305. [64] J. Knap, C. Spear, K. Leiter, R. Becker, D. Powell, A Computational Framework for Scale-bridging in Multi-scale Simulations, Int. J. Numer. Methods Eng. 108(13), 2016, 1649–1666. [65] S. Alowayyed, D. Groen, P. V. Coveney, A. G. Hoekstra, Multiscale Computing in the Exascale Era, J. Comput. Sci. 22, 2017, 15–25.
References
183
[66] J. Borgdorff, M. B. Belgacem, C. Bona-Casas, L. Fazendeiro, D. Groen, O. Hoenen, A. Mizeranschi, J. L. Suter, D. Coster, P. V. Coveney, W. Dubitzky, A. G. Hoekstra, P. Strand, B. Chopard, Performance of Distributed Multiscale Simulations, Philos. Trans. R. Soc. A: Math. Phys. Eng. Sci. 372, 2021, 2014 20130407. [67] J. Borgdorff, M. Mamonski, B. Bosak, K. Kurowski, M. Ben Belgacem, B. Chopard, D. Groen, P. V. Coveney, A. G. Hoekstra, Distributed Multiscale Computing with MUSCLE 2, the Multiscale Coupling Library and Environment, J. Comput. Sci. 5(5), 2014, 719–731. [68] D. Groen, J. Knap, P. Neumann, D. Suleimenova, L. Veen, K. Leiter, Mastering the Scales: A Survey on the Benefits of Multiscale Computing Software, Philos. Trans. R. Soc. A: Math. Phys. Eng. Sci. 377 (2142), 2019, 20180147. [69] S. Alowayyed, T. Piontek, J. L. Suter, O. Hoenen, D. Groen, O. Luk, B. Bosak, P. Kopta, K. Kurowski, O. Perks, K. Brabazon, V. Jancauskas, D. Coster, P. V. Coveney, A. G. Hoekstra, Patterns for High Performance Multiscale Computing, Futures. Elsevier, 91, 335–346. doi: https://doi.org/10.1016/j. future.2018.08.045.
Chapter 11 Digital Twin Simulator
Abstract: Virtual reality and digital twins are important technologies for designing, simulating, and optimising cyber-physical production systems, as well as for interacting with them remotely or collaboratively, in the context of the modern Internet of Things and Industry 4.0. These technologies also open up new opportunities for the joint design of workstations and for related studies based on Industry 4.0 elements such as robotic systems. In addition, an open and flexible simulation framework with special design considerations is required. To achieve these goals and create a dynamic and immersive virtual environment, it is necessary to integrate the interaction capabilities of the immersive virtual environment with the digital twin's ability to simulate the production system. This chapter proposes an architecture for information exchange between virtual reality software and digital twins, and then presents a use case on the design and evaluation of a collaborative workplace involving humans and robots. It also describes a hybrid discrete–continuous simulation architecture designed especially for digital twin applications. SimPy, a well-known Python discrete-event simulation toolkit, provides the foundation of the framework, and a methodical procedure is given for incorporating continuous process simulations into SimPy's event-stepped engine.
Keywords: digital twin, simulation, model synchronisation, CARLA simulator, Node-RED, smart city
11.1 Introduction
By facilitating monitoring and enabling real-time analytics and industrial IoT technologies for in-the-moment decision-making, the rise of digital twins is poised to transform many industries, including manufacturing, healthcare, urban planning, and transportation. A digital twin is a digital version (computer model) of a real system. It combines analytics for prediction, control, and optimisation of the real system and is kept continuously in sync with it through periodic sensing of its health metrics. A "digital model" or "digital shadow" typically implies a one-way data flow from the real object to its digital representation, whereas the term "digital twin" implies a two-way data flow between the real object and its digital representation (i.e. sensing and control). Simulation is a crucial component of digital twins. The following is a list of challenges and desirable capabilities related to simulating digital twins. The properties of the system to be simulated guide the design of a simulation framework for digital twins: while a continuous simulation framework may be required for some system models, a discrete-event simulation may be adequate for other types of digital twins. https://doi.org/10.1515/9783110778861-011
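The shadow/twin distinction can be made concrete in a few lines. This is a deliberately minimal sketch with invented names; `ingest` and `decide` stand in for whatever telemetry and control channels a real deployment would use:

```python
# Illustrative sketch: a digital shadow only mirrors the asset (one-way),
# while a digital twin also feeds decisions back to it (two-way).

class DigitalShadow:
    """One-way: sensor data flows from the asset to the model."""
    def __init__(self):
        self.state = {}

    def ingest(self, sensor_reading):
        self.state.update(sensor_reading)  # mirror the physical asset


class DigitalTwin(DigitalShadow):
    """Two-way: the model also derives control actions to send back."""
    def decide(self):
        # Hypothetical rule: if temperature drifts high, command a fan on.
        if self.state.get("temperature_c", 0) > 80:
            return {"fan": "on"}
        return {"fan": "off"}


twin = DigitalTwin()
twin.ingest({"temperature_c": 85, "rpm": 1200})
action = twin.decide()
print(action)  # the control message that would be sent back to the asset
```

The shadow only accumulates state; the twin closes the loop by returning an action, which is the "sensing and control" two-way flow described above.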
The creation of digital twins in application domains such as manufacturing and process control is the main topic of this chapter, which focuses on open simulation frameworks. The phenomena to be represented in these applications frequently consist of both discrete and continuous processes. The framework's capacity to run mixed discrete–continuous simulations is thus a crucial requirement. The additional prerequisites for such a simulation framework are as follows:
1. The capacity to simulate heterogeneous systems with several continuous processes, each of which may call for a unique numerical solution approach and/or a characteristic time-step size of its own.
2. Support for capturing the effect of regular sensor updates from the real system on the state of the model.
3. The capability to run simulations in real time.
4. The framework has to be adaptable and open source. It should be simple to include pre-existing libraries (such as optimisation, machine learning (ML), data processing, scientific computing, and plotting libraries) in the framework to enable analytics and visualisation.
5. Modular representations and the use of object-oriented features for large systems with many interrelated parts should be supported by the language the framework uses.
While there are a variety of frameworks designed for either discrete-event or continuous simulation, simulating both discrete and continuous processes simultaneously – and perhaps interacting with one another – presents significant difficulties. A brief overview of methods and current frameworks for continuous, discrete, and hybrid simulation is presented in Section 11.2. Most current mixed discrete–continuous (MDC) simulation frameworks are either commercial or domain-specific. The notable exception is OpenModelica, which provides a multi-domain simulation environment based on a declarative modelling language (Modelica) and a library of component models.
11.2 An Examination of Simulation Methods and Frameworks
Simulation methodologies can be broadly grouped into the following categories according to the type of system being modelled.
11.2.1 Simulating a Continuous System
Continuous evolution of the state variables is a characteristic of continuous systems. Examples of these systems include fluid flows, acoustic wave propagation, and transient heat conduction in materials.
For essentially discrete systems such as highway vehicle traffic, some quantities may nevertheless be modelled as continuous processes under specific circumstances – for example, the evolution of vehicle density over time, as described in work dating from 1955 and 2019. Ordinary differential equations (ODEs) or differential algebraic equations (DAEs) are frequently used as the mathematical models describing the behaviour of such systems. To simulate continuous systems, the differential equations must be solved using appropriate spatial discretisation techniques (such as the finite element method) and temporal integrators. Time is advanced in regular steps whose size can be constant, variable, or adapted dynamically during the simulation; the references therein provide a thorough overview of continuous processes and their simulation-related aspects. For continuous multi-physics simulations of complex systems, frameworks such as OpenFOAM are commonly used. To lower the total computational cost, numerical approaches such as ML-based metamodels and reduced-order models may be employed instead of high-fidelity models.
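As a minimal illustration of continuous simulation by time marching, the sketch below integrates Newton's law of cooling, dT/dt = -k(T - T_env), with a fixed-step forward-Euler scheme. The model and parameter values are invented for the example:

```python
# Minimal continuous-simulation sketch: forward-Euler time marching for
# Newton's law of cooling, dT/dt = -k * (T - T_env).  The step size `dt`
# is fixed here, but could be varied adaptively during the run.

def simulate_cooling(T0, T_env, k, dt, t_end):
    T, t = T0, 0.0
    history = [(t, T)]
    while t < t_end:
        T += dt * (-k * (T - T_env))   # explicit Euler update
        t += dt
        history.append((t, T))
    return history

hist = simulate_cooling(T0=100.0, T_env=20.0, k=0.5, dt=0.01, t_end=10.0)
print(f"final temperature: {hist[-1][1]:.3f}")
```

With a small fixed step, the numerical trajectory decays monotonically towards the ambient temperature, closely tracking the analytic solution T(t) = T_env + (T0 - T_env)·exp(-kt).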
11.2.2 Simulation of Discrete Events
Discrete processes are defined by changes in the system's state that occur only at discrete (countable) moments in time, known as events. The two main categories of discrete-event simulation are cycle-stepped and event-stepped methods. Techniques for discrete-event simulation are described fully in Hill (2007). For the design and simulation of discrete-event systems, formalisms such as the discrete-event system specification and its generalisations have been developed. A summary of problems and difficulties with discrete-event modelling in the context of digital twins is provided by Agalianos et al. (2020). For discrete-event simulations, there are several proprietary and open-source libraries and programs. In this chapter, open-source discrete simulation software is reviewed.
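The event-stepped approach can be sketched with a stdlib-only future-event list. This is a toy illustration, not tied to any particular package; the breakdown/repair events are invented for the example:

```python
import heapq

# Minimal event-stepped engine: pop the earliest event, advance the clock
# to its timestamp, and run its callback (which may schedule more events).

class EventStepped:
    def __init__(self):
        self.now = 0.0
        self._queue = []   # future-event list: (time, seq, callback)
        self._seq = 0      # tie-breaker for simultaneous events

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._queue:
            t, _, callback = heapq.heappop(self._queue)
            self.now = t          # clock jumps straight to the event time
            callback(self)        # handle the event

log = []

def breakdown(env):
    log.append(("down", env.now))
    env.schedule(2.0, repair)     # repair completes 2 time units later

def repair(env):
    log.append(("up", env.now))

env = EventStepped()
env.schedule(5.0, breakdown)      # machine breaks down at t = 5
env.run()
print(log, env.now)
```

Note that simulation time jumps directly between event timestamps (5.0, then 7.0); nothing is computed in between, which is what distinguishes the event-stepped method from time marching.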
11.2.3 MDC Simulation or Mixed-Resolution Simulation
A hybrid simulation framework is necessary for systems with both discrete-event and continuous processes. Kofman (2004) suggests an integration technique based on quantisation for simulating hybrid systems. To achieve efficient simulation, other authors offer a system-splitting technique in which a priori understanding of the model's discrete–continuous structural divide may be used. Constant-rate fluid flows are one example of a linear continuous model that has been simulated inside a discrete-event framework and implemented in a software package. A thorough analysis of the various approaches employed for hybrid discrete–continuous simulation frameworks is given in a 2019 review. In the
section that follows, we discuss our suggested framework for MDC simulation, as well as various design considerations and implementation methods. The suggested framework is built on SimPy, a Python package for process-based discrete-event simulation. SimPy models active components as processes, which are implemented using Python's generator functions. The processes are controlled by an environment class, which uses a global event queue to advance time in an event-stepped fashion. The user does not need to learn a new modelling language because the system to be modelled may be specified in Python using a few SimPy constructs. SimPy also offers real-time simulation. SimPy, however, lacks facilities for modelling continuous systems because it is built for discrete-event simulation. Here, we present a systematic way of including continuous models in SimPy's event-stepped simulation engine for the case of mixed discrete–continuous simulation. The framework is most appropriate for systems with a few loosely coupled continuous entities whose interactions can be captured by events. We introduce the framework, highlight its major concepts with a straightforward example, and outline the steps for putting it into practice. The remaining portions of the chapter are organised as follows: we give a full examination of the methods currently used for continuous, discrete-event, and mixed simulations, and we then outline the suggested framework and discuss several design factors for simulating MDC systems, along with an implementation roadmap.
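The generator-based process style that SimPy uses can be imitated in a few lines of plain Python. The sketch below is a toy re-implementation for illustration only and does not reproduce SimPy's actual API; each process is a generator that yields the delay until it next wants control:

```python
import heapq

# Toy generator-driven environment in the style of SimPy: a process is a
# Python generator; yielding a delay suspends it until that much
# simulated time has passed.

class ToyEnv:
    def __init__(self):
        self.now = 0.0
        self._queue = []          # (time, seq, generator)
        self._seq = 0

    def process(self, gen):
        self._push(0.0, gen)      # start the process at the current time

    def _push(self, delay, gen):
        heapq.heappush(self._queue, (self.now + delay, self._seq, gen))
        self._seq += 1

    def run(self):
        while self._queue:
            t, _, gen = heapq.heappop(self._queue)
            self.now = t
            try:
                delay = next(gen)        # resume the process
                self._push(delay, gen)   # reschedule it at now + delay
            except StopIteration:
                pass                     # process finished

log = []

def machine(env, name, cycle):
    for _ in range(3):
        log.append((env.now, name))      # record each activation
        yield cycle                      # wait `cycle` time units

env = ToyEnv()
env.process(machine(env, "A", 2.0))
env.process(machine(env, "B", 3.0))
env.run()
print(log)
```

The two "machines" interleave purely by timestamp order, exactly as SimPy processes do on its global event queue; in real SimPy one would write `yield env.timeout(cycle)` instead of yielding a bare number.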
11.3 Framework
A system to be modelled can be conceptualised as a mixture of continuous and discrete entities that interact with one another. Here, an entity is a grouping of system parameters, behaviours, and processes that collectively make up a particular thing the system will model. An entity whose state changes only at discrete instants in time (events) is referred to as a discrete entity. A continuous entity is one whose state may be thought of as changing continuously over time and which therefore requires continuous simulation or monitoring. The following questions serve as the main motivation for building a framework for simulating interactions between such discrete and continuous entities:
– How should time be advanced?
– How can interactions between discrete and continuous entities be simulated?
Formal methods for hybrid simulations, such as those described by Nutaro et al. (2012), have addressed these issues. From the standpoint of implementation, the simulation methods may be roughly divided into two groups, as outlined below and shown in Figure 11.1.
Figure 11.1: Two approaches to the implementation of an MDC simulation framework.
(A) In the first method, a single continuous simulation framework regulates the passage of time. The events to be represented during the simulation are incorporated into the continuous modelling framework as parameter-value or boundary-condition modifications. These updates may occur at specified intervals, when a condition on the state variables is met, or both (e.g. whenever the value of a parameter exceeds a certain limit). The step size for time advancement is determined by the stability analysis of the numerical scheme used for the continuous simulation. If the required step size varies between different continuous entities in the system, the shortest step size should be chosen. Such a strategy works effectively for systems that mostly comprise closely coupled continuous elements.
(B) In the second method, an event-stepped discrete-event simulation framework handles time progression. This framework incorporates continuous solvers for the simulation of individual continuous entities. Events are used to model how a continuous entity interacts with other entities in the system. When the system's continuous entities are few and weakly coupled, this strategy works effectively. Because the systems to be modelled here are heterogeneous, with a larger number of discrete entities and a smaller number of continuous entities that are loosely coupled and interact in clear-cut ways, approach (B) is especially appropriate for the simulation of digital twins in the manufacturing and process engineering domains. Using this methodology as a foundation, we propose an MDC simulation framework and outline how it addresses the issues of time advancement and the capture of interactions between entities.
11.4 Proposed Structure
The event-stepped SimPy engine is used to advance time in the suggested architecture for MDC simulation. As Figure 11.2 shows, a continuous entity is made up of system parameters, a state-update method, and a specification of events that serve as the interface between the continuous entity and the outside world. The continuous entity's state-update function can act as a wrapper for updating the state using an external continuous solver.
Figure 11.2: Components of a continuous entity model.
Interactions between the continuous entity and the rest of the system can be constructed using events that act as the continuous entity's interface. These events fall into four categories:
1. Perturbation event: an external event that may affect the continuous entity's state or trajectory.
2. Probe event: an external event that involves querying the continuous entity's status and calls for updating its state up to a specific time.
3. Output event: an event caused by a change in the continuous entity's state that may affect other entities in the system.
4. Wake-up event: a scheduled event by which the continuous entity updates its state after a predetermined period of time or produces output events whose timing can be predicted.
The continuous entity's behaviour is represented as a SimPy process. This process is triggered every time the entity experiences a perturbation, probe, or wake-up event. When activated:
1. the entity's state changes up to the present time are computed;
2. if any trigger condition is satisfied, the corresponding output events are activated; and
3. the continuous entity schedules a wake-up event for itself at a specific future time. This time is computed as follows:
(a) If the current trajectory of the entity's state values is known and all output events can be predicted in advance from this trajectory, the wake-up may be scheduled at the time the earliest output event is expected to occur.
(b) If the output events cannot all be predicted in advance, the state must be refreshed after a set number of time steps (possibly on every time step). This is achieved by regularly scheduling a wake-up event.
At every iteration of SimPy's event-stepped algorithm, the simulation time is advanced to the timestamp of the earliest scheduled event in the global event list. All events scheduled for this time are executed, and callbacks are used to start processes that are waiting for this event.
11.4.1 An Improved Time Advancement Scheme
If the time step size for the continuous simulation is rather small compared with the typical interval between events in the rest of the system, the cost of adding a wake-up event to the global event list after each time step may be high under the proposed approach. To remedy this, we advise the following modification. The tentative time step ΔK at each iteration K of the event-stepped algorithm is taken to be the difference between the scheduled time of the next event in the global event list (t_next_event) and the current simulation time (t_K). Each continuous entity is then asked to advance in time by a total period of ΔK, completing its state update either in a single step of size ΔK or by dividing this period into smaller time steps as its time-marching scheme requires. In principle, the state updates of the continuous entities may all be computed simultaneously. If no output events of interest are expected to be generated by any of the continuous entities during this period, time may be advanced by ΔK, and the computed state updates in all the continuous entities can be applied before proceeding to the next iteration. However, if an event of interest is found to be generated by a continuous entity i at a time t_i before t_next_event, this event may significantly affect the states or trajectories of other entities in the system. The actual time step chosen must therefore advance time only to the earliest expected event across all continuous entities; in other words, the simulation time has to be advanced to t_{K+1} = min_i(t_i) in the subsequent iteration. The earliest event, projected to occur at time t_{K+1}, can then be added to the global event list so that its impact on other entities may be propagated as usual in a discrete-event framework. The state updates in all the continuous entities computed up to that point can likewise be applied before advancing the simulation to time t_{K+1}. The tentative time step ΔK could be adaptively adjusted to further improve performance. A proof-of-concept implementation of this framework has been completed and exercised with examples. Here, we provide a straightforward example to demonstrate the framework's main concepts, together with the simulation's output.
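The time-advancement scheme just described can be sketched as a small negotiation between the event list and the continuous entities. The `earliest_output_event` interface and the `FillingTank` entity below are hypothetical names invented for the illustration:

```python
# Sketch of the improved time-advancement scheme: take the gap to the next
# discrete event as a tentative step, ask each continuous entity for the
# earliest output event it would generate within that window, and advance
# only as far as the earliest such event.

def choose_time_step(t_now, t_next_event, entities):
    tentative = t_next_event - t_now
    earliest = t_next_event
    for entity in entities:
        # Hypothetical interface: returns the time of the entity's earliest
        # predicted output event within (t_now, t_now + tentative], or None.
        t_i = entity.earliest_output_event(t_now, tentative)
        if t_i is not None and t_i < earliest:
            earliest = t_i
    return earliest

class FillingTank:
    """Tank filling at a constant rate; predicts when it becomes full."""
    def __init__(self, level, rate, capacity):
        self.level, self.rate, self.capacity = level, rate, capacity

    def earliest_output_event(self, t_now, window):
        t_full = t_now + (self.capacity - self.level) / self.rate
        return t_full if t_full <= t_now + window else None

tanks = [FillingTank(0.0, 1.0, 10.0), FillingTank(5.0, 2.0, 10.0)]
# Next discrete event at t = 20; the second tank becomes full first.
print(choose_time_step(0.0, 20.0, tanks))
```

If no entity reports an output event inside the window, the full tentative step is taken; otherwise the step is shortened to the earliest reported event, matching t_{K+1} = min_i(t_i) above.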
11.4.2 A Case Study
The following example illustrates the key concepts of the suggested framework. Consider a system containing several fluid tanks whose levels must be continuously simulated and tracked over time. The inlet and outlet valves on each tank can be actuated by external triggers. The in/out flow rates, which can be expressed in units of length/time, define the maximum change in tank level per unit time while the associated valves are open. The flow rates and the top level at which the tank reaches capacity can be assumed to be fixed parameters of the tank. There are two possible types of interaction between the tank and other system components: (a) external actions may cause the valves to open or close, which may alter the time trajectory of the tank level; and (b) the system's external processes are affected whenever the tank's level reaches a specific limit (e.g. whenever the tank becomes empty or full), so these external processes must be alerted whenever the empty/full conditions arise. To model this example in the suggested framework, the tank may be viewed as a continuous entity and represented as a Python class. Multiple tank instances can be created in the system by instantiating
objects of this class. The tank entity is defined by the state variables, parameters, and state-update equations summarised in Figure 11.3. The opening/closing of valves is caused by an external mechanism and corresponds to perturbation events for the tank. The tank object emits a matching full/empty output event whenever the tank becomes full or empty. Using SimPy's `yield <event>` construct, external processes that should be affected by the full or empty events can be alerted immediately. Additionally, the object defines a probe event that may be triggered by the external processes.
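A stripped-down version of such a tank class might look like the sketch below. It is plain Python rather than SimPy, with invented method names; the SimPy version would wrap the same update logic in a process and signal the full/empty events:

```python
# Sketch of the tank as a continuous entity: linear level dynamics, valve
# switches as perturbations, and full/empty as predictable output events.

class Tank:
    def __init__(self, capacity, in_rate, out_rate):
        self.capacity = capacity
        self.in_rate, self.out_rate = in_rate, out_rate
        self.level = 0.0
        self.inlet_open = self.outlet_open = False
        self.t = 0.0

    def net_rate(self):
        return (self.in_rate if self.inlet_open else 0.0) - \
               (self.out_rate if self.outlet_open else 0.0)

    def update(self, t):
        """Probe/wake-up: linear state update up to time t."""
        self.level = min(self.capacity,
                         max(0.0, self.level + self.net_rate() * (t - self.t)))
        self.t = t

    def set_valves(self, t, inlet=None, outlet=None):
        """Perturbation event: update state to time t, then switch valves."""
        self.update(t)
        if inlet is not None:
            self.inlet_open = inlet
        if outlet is not None:
            self.outlet_open = outlet

    def next_output_event(self):
        """Predicted time of the next full/empty event, or None."""
        r = self.net_rate()
        if r > 0:
            return self.t + (self.capacity - self.level) / r, "full"
        if r < 0 and self.level > 0:
            return self.t + self.level / (-r), "empty"
        return None

tank = Tank(capacity=10.0, in_rate=2.0, out_rate=1.0)
tank.set_valves(0.0, inlet=True)    # start filling at t = 0
print(tank.next_output_event())     # predicted "full" event
```

Because the dynamics are linear, `next_output_event` can predict the full/empty instant in closed form, so a wake-up only needs to be scheduled at that time rather than on every time step.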
Figure 11.3: Model of a liquid reservoir (showing high and low level marks, heaters, a temperature sensor, and the outlet valve).
194
Chapter 11 Digital Twin Simulator
The tank state is updated up to the present moment whenever it is activated. Figure 11.2 depicts the SimPy procedure used to characterise the tank's behaviour. Figure 11.4 shows the temporal development of the tank's states as observed during a simulation run of this example using our framework.
Figure 11.4: Plots of the time evolution of the tank's state (inlet and outflow valve on/off signals, net inflow, and tank level with the empty/full events marked) generated using the MDC simulation framework.
Since the state-update equation in this example is a straightforward linear algebraic equation, continuous state updating does not require an iterative time-marching strategy. It is sufficient to implement the updates using standard Python code inside the state-update function of the tank class; an external solver is not required. If the trajectory is unaffected by outside events, it is also possible to predict the precise time instants at which the empty/full events take place. In this scenario, state adjustments need only be calculated for wake-up events scheduled at the times when the output events are expected to occur. A straightforward event-stepped strategy therefore results in a fast and effective simulation. If, however, the state-update equations must be solved using iterative time integrators, the time instants at which output events occur might not be predictable. Periodic state updates and scheduling of wake-up events (a certain number of time steps later) are then required. This illustration highlights the crucial elements of the suggested framework. The implementation would include modules for integrating with existing continuous solvers as well as abstract classes describing continuous entities through the interface given by the perturbation, wake-up, and probe events.
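When the dynamics are nonlinear, the periodic wake-up strategy amounts to a loop of small state updates with a threshold check at each step. The sketch below illustrates this with a Torricelli-style outflow law, dh/dt = -c·sqrt(h), chosen purely for illustration (it is not the linear tank model of the case study):

```python
import math

# With nonlinear dynamics the empty/full instants cannot be predicted in
# closed form, so the entity is woken up every `dt` to step its state and
# check for a threshold crossing.

def drain_until_empty(h0, c, dt, eps=1e-3, t_max=1000.0):
    h, t = h0, 0.0
    while h > eps and t < t_max:
        h = max(0.0, h - c * math.sqrt(h) * dt)  # one explicit Euler step
        t += dt                                  # next periodic wake-up
    return t, h                                  # time of the "empty" event

t_empty, h_final = drain_until_empty(h0=1.0, c=0.2, dt=0.01)
print(f"empty detected at t ~ {t_empty:.2f}")
```

The price of not being able to predict the event time is one wake-up per step; the improved time-advancement scheme of Section 11.4.1 exists precisely to reduce this cost when such entities sit inside a larger event-driven system.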
11.4.2.1 Advantages of the Suggested Framework
The following factors make this strategy particularly appropriate for use with digital twins:
– The loose coupling between the continuous entities allows their concurrent execution within a single time step of the continuous simulation, and different continuous entities in the system may use different continuous solvers and internal time-step sizes.
– In circumstances where only a few types of events have an impact on the trajectory of the continuous entities in the system, the event-stepped method can result in a more efficient simulation.
– Coarse surrogate models can be used to anticipate the trajectory and timing of output events and to schedule wake-ups when modelling entities for which a high degree of accuracy is not required.
– It is simple to include real-world sensor value updates in the model as perturbation events that change the states.
We examine the planned elements of the proposed framework as follows:
(1) Integration with existing continuous simulation frameworks. Fast simulation of continuous processes requires integration with already existing continuous solvers. In our framework, DOLFIN, a finite-element toolbox in the Python ecosystem for simulating and modelling different physics, is employed to carry out continuous process simulations.
(2) Analytics integration. Analytics modules must be added to the architecture for digital twin applications: extraction of parameters from sensor records, forecasting, optimisation, and surrogate model construction at runtime.
(3) Real-time simulation acceleration. Real-time simulation is essential; hence, there is a requirement for usable simulation acceleration.
(4) Hardware platforms, such as GPGPUs, FPGAs, or multi-core computers with parallel processing.
We intend to investigate simulation-friendly designs that can benefit from these technologies.
(5) Sensing and control support. A digital twin must support both sensing and control. The features that support these elements need to be investigated in depth and included in the framework.
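Point (5), sensing support, can be sketched as a perturbation event that re-anchors the model state with a measured value. The names below are illustrative, and a real deployment would typically filter the measurement rather than overwrite the state directly:

```python
# Sketch: a real-world sensor reading applied as a perturbation event that
# corrects the model state (illustrative names only).

class LevelModel:
    def __init__(self, level, rate):
        self.level, self.rate, self.t = level, rate, 0.0

    def update(self, t):
        """Advance the modelled level up to time t."""
        self.level += self.rate * (t - self.t)
        self.t = t

    def apply_sensor(self, t, measured_level):
        """Perturbation: advance the model to the reading's timestamp,
        then correct the state with the measurement."""
        self.update(t)
        drift = measured_level - self.level   # model error at this instant
        self.level = measured_level
        return drift                          # useful for drift analytics

model = LevelModel(level=0.0, rate=1.0)
drift = model.apply_sensor(t=5.0, measured_level=4.6)
print(f"model corrected; drift was {drift:.2f}")
```

The returned drift value is exactly the kind of signal the analytics modules of point (2) could consume, for example to retune the model's rate parameter at runtime.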
11.5 A Framework for the Formal Simulation Model Configuration
The framework's foundation is the integration of simulation and modelling operations across different technical disciplines [8]. The division into three fundamental areas – domain engineering, simulation engineering, and (model-based) systems engineering – is shown in Figure 11.5. Each of these fields offers a unique perspective on the system of interest. Mechanical engineering, electrical and control engineering, and software engineering are the conventional fields that deal with technological systems. They translate design criteria into actual products and are encapsulated under the discipline of domain engineering. Systems engineering, in contrast, emphasises the creation and administration of system solutions in response to stakeholder requirements. Model-based systems engineering emerged as a method to address the complexity of document-centred information interchange, with the goal of developing a database of digital information (the system model) that contains and integrates all pertinent system design materials [9]. Meanwhile, modelling and simulation are having an ever greater impact on all engineering processes as a result of the remarkable advancement of computer technology. However, using simulation within a single domain with a proprietary set of tools is no longer viable. As a result, simulation engineering takes on equal importance, seeking to create comprehensive simulation models that enable system validation and verification across the whole lifecycle. A formal simulation model configuration framework based on appropriate information interchange across these three disciplines is crucial for achieving this aim. Figure 11.6 illustrates the fundamental structure of the formal simulation model configuration framework. It distinguishes between the digital twin of the targeted system of interest and an appropriate virtual testbed.
The digital twin provides a model library of model components and a scenario library of simulation scenarios. The virtual testbed, in turn, offers an algorithm library that includes all pertinent simulation algorithms that may be applied to the digital twin. The fundamental syntactical components defined by the framework are outlined below and covered in greater detail in the following subsections.
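As a minimal illustration of this split, the sketch below represents the digital twin (model library plus scenario library) and the virtual testbed (algorithm library) as plain Python objects. All class, element, and solver names here are invented for illustration and do not come from any particular tool.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    # name -> model element description
    model_library: dict = field(default_factory=dict)
    # name -> scenario specification
    scenario_library: dict = field(default_factory=dict)

@dataclass
class VirtualTestbed:
    # simulation domain -> algorithm identifier
    algorithm_library: dict = field(default_factory=dict)

    def algorithms_for(self, twin: DigitalTwin, scenario_name: str) -> list:
        """Resolve one of the twin's scenarios to the matching simulation algorithms."""
        scenario = twin.scenario_library[scenario_name]
        return [self.algorithm_library[domain] for domain in scenario["domains"]]

twin = DigitalTwin()
twin.model_library["rotary_table"] = {"viewpoints": ["rigid_body", "cad"]}
twin.scenario_library["dynamics_check"] = {"domains": ["rigid_body"]}

testbed = VirtualTestbed(algorithm_library={"rigid_body": "rbd_solver_v1"})
print(testbed.algorithms_for(twin, "dynamics_check"))  # -> ['rbd_solver_v1']
```

The point of the split is that the scenario names only *domains*; which concrete algorithm serves a domain is the testbed's decision.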
Figure 11.5: Ordering of the various engineering disciplines. (Model-based systems engineering passes system requirements to domain engineering and simulation engineering, which return products and simulation results for the system of interest; system attributes, the system model, the simulation model, and product specifications are exchanged along the way.)
11.5.1 Model Library

The model library is a hierarchically organised replica of the physical structure of the system. As demonstrated in [7], the decomposition of the system's structure provided by mechanical design may be used to determine this physical structure. On each hierarchical level (e.g. system, assembly, and component), the model library supplies fundamental model elements. Each model element may exist in several variants, each covering a different degree of detail. For every model element, the model library also offers the ability to handle several views. The viewpoints correspond to physical disciplines such as mechanics, electrical engineering, and thermodynamics. This division of the model elements enables a clean decoupling between the several viewpoints, levels of integration, and variants.
Chapter 11 Digital Twin Simulator
This enables a procedure of independent model refinement and the evolution of each model component on different time scales [8].
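To make the level/variant/viewpoint organisation concrete, here is a hedged sketch of a model library keyed on those three dimensions. Element names, the "coarse" variant label, and the stored model data are all illustrative assumptions, not the structure of any real tool.

```python
class ModelLibrary:
    """Model elements indexed by (element name, variant, viewpoint)."""

    def __init__(self):
        self._elements = {}  # (name, variant, viewpoint) -> model data

    def add(self, name, variant, viewpoint, model):
        self._elements[(name, variant, viewpoint)] = model

    def get(self, name, variant, viewpoint):
        return self._elements[(name, variant, viewpoint)]

    def views(self, name, variant):
        """All viewpoints stored for one element variant."""
        return sorted(v for (n, va, v) in self._elements
                      if n == name and va == variant)

lib = ModelLibrary()
lib.add("rotary_table", "coarse", "rigid_body", {"mass": 12.0})
lib.add("rotary_table", "coarse", "cad", {"mesh": "table.stl"})
print(lib.views("rotary_table", "coarse"))  # -> ['cad', 'rigid_body']
```

Because the key is a triple, refining one viewpoint of one variant never touches the others, which is exactly the decoupling the paragraph above describes.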
Figure 11.6: Essential structure of the formal simulation model configuration framework. (The digital twin comprises the model library and the scenario library; the virtual testbed contributes the algorithm library.)
11.5.2 Library of Scenarios

The scenario library contains specific simulation experiments that are used to carry out particular analyses, as shown in Figure 11.7. A scenario is a single simulation experiment that can be executed. It consists of a suitable selection of model elements from the model library and the linkages between them. A systematic approach to deriving a scenario was described in [7]. First, according to the present stage of the project and the goal of the simulation, a set-up table (see Figure 11.8) is used to choose all model elements and viewpoints that are pertinent to the situation. The viewpoints line up with the simulation domains that the algorithm library offers. Based on the chosen model components, the interconnectivity table shown in Figure 11.9 is generated. In this connectivity table, all explicit connections between the model's component parts – either intra-domain or inter-domain – are specified. Furthermore, the initial values of the model elements are assigned in the same table. In this manner, the configuration table and the connectivity table together depict the scenario's conceptual model and may be used to mechanically produce tool-specific code for a simulation model. Figure 11.10 depicts the graphical user interface of a sample application of the suggested formalism.
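The set-up and interconnectivity tables can be pictured as a small data structure: a selection of model elements, a link table, and initial values (the table's diagonal). The sketch below is an assumption-laden illustration with invented element and link names, not the tooling described in [7].

```python
def build_scenario(selection, links, initial_values):
    """selection: list of element names (set-up table);
    links: {(a, b): link_type} (interconnectivity table);
    initial_values: {name: state} (the table's diagonal cells)."""
    chosen = set(selection)
    for a, b in links:
        # A link may only reference elements that were actually selected.
        if a not in chosen or b not in chosen:
            raise ValueError(f"link ({a}, {b}) references an unselected element")
    return {"elements": selection, "links": links, "init": initial_values}

scenario = build_scenario(
    selection=["base", "table_top"],
    links={("base", "table_top"): "hinge_joint"},
    initial_values={"table_top": {"angle": 0.0, "velocity": 0.0}},
)
print(sorted(scenario))  # -> ['elements', 'init', 'links']
```

The consistency check mirrors what the GUI enforces implicitly: the connection table is always built from the elements chosen in the set-up table.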
Figure 11.7: Detailed structure of the model library. (Model elements are arranged along the architectural breakdown from the system level down to individual elements; each entry exists in several variants, e.g. P1/V1 and P1/V2, and in several viewpoints.)
11.5.3 Library of Algorithms

The algorithm library provides the component that is still missing for carrying out scenario simulations. It offers numerous simulation domains, including mechanics, electronics, and thermodynamics. Each simulation algorithm may also exist in several variants that allow a tailored design of the algorithm
Figure 11.8: Selection of the model components in the set-up table in order to carry out the scenario.
Figure 11.9: Interconnection of the model elements within the interconnection table.
to the situation at hand, as shown in Figure 11.11. A simulation algorithm's main objective is to advance the scenario's state step by step, so the necessary algorithms must be provided by the algorithm library. Several simulation tools may be used to realise the virtual testbed, and the concrete realisation of the algorithm library may vary depending on the tool. Lange et al. [7] described an approach in which the simulation algorithms were implemented as dedicated algorithms, each with its own solver. In contrast, tools supporting the multi-domain modelling language Modelica provide packages for each simulation domain. Each package contains standardised building blocks, and the simulation algorithm assembles the differential equation system from the interconnected building blocks. The entire differential equation system is then solved at once using a common domain-independent solver.
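A minimal sketch of such an algorithm library is a registry mapping domain and variant to a solver. The solver names below are made up purely for illustration.

```python
# domain -> {variant -> solver identifier}; all names are hypothetical.
ALGORITHM_LIBRARY = {
    "rigid_body": {"v1": "fixed_step_rbd", "v2": "adaptive_rbd"},
    "thermodynamics": {"v1": "lumped_thermal"},
}

def select_algorithm(domain, variant="v1"):
    """Pick one algorithm variant for a simulation domain."""
    try:
        return ALGORITHM_LIBRARY[domain][variant]
    except KeyError:
        raise KeyError(f"no algorithm for domain={domain!r}, variant={variant!r}")

print(select_algorithm("rigid_body", "v2"))  # -> adaptive_rbd
```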
Figure 11.10: Construction of a particular simulation scenario by choosing model components.
Figure 11.11: Organisation of the algorithm library. (Each simulation domain offers one or more algorithm variants.)
11.6 Operational Modelling Integration

Mechatronic systems, according to Tao et al. [9], are made up of a fundamental system and a data processing system. The latter performs all kinds of data processing and inter-system communication, whilst the fundamental system is situated on the physical layer. Information is exchanged between those two levels via sensors and actuators (see Figure 11.12). The framework presented so far already covers the modelling of the physical layer. This section outlines a method for integrating the data layer, allowing the operational behaviour of the system to be configured.
Figure 11.12: Fundamental structure of mechatronic systems. (An information processing system on the data layer communicates with the physical system via sensors and actuators; the diagram distinguishes data flow, power flow, and supply flow as well as direct physical contact.)
Just as the physical architecture of the system determines the configuration of its physical model, the functional architecture, through a hierarchical division into units, provides the foundation for configuring the data processing units (DPUs) of the data processing system. Any DPU can be implemented as a logical model element, a function script, or an existing control unit coupled through an appropriate communication interface (e.g. ROS). Based on its incoming data, each DPU determines its output values. This enables the control of real components outside the simulation, or the integration of existing control algorithms into the data processing system model at various levels of integration. A DPU can be scheduled either synchronously (evaluating inputs and determining outputs at each step of the simulation) or asynchronously (running on its own clock). In order to implement or encapsulate the system's DPUs, a new layer with corresponding model components is added to the model library. An empty configuration list serves as the initial set-up of the data processing system for a particular simulation scenario. DPU model components from the model library may be chosen and added to the scenario specification. The selected DPUs can be given parameters as key-value pairs, as shown in Figure 11.13. Additionally, their inputs are specified, making it possible to establish the links between inputs and
outputs in terms of data flow.
Figure 11.13: Choosing and arranging data processing units, sensors, and actuators from a configuration list.
The incorporation of sensors and actuators is a further crucial factor. Both may serve as data sources (sensors) and data sinks (actuators) inside the set-up of the data processing system, and they can be manually added to the configuration list. Both ideal and model-based sensors and actuators are possible. Ideal sensors read selected component attributes directly (e.g. the current position of a rigid body element or the current angular velocity of a hinge joint) and are connected so as to represent actual system components or interconnections. Similarly, ideal actuators write values directly into the associated element's properties. Using ideal actuators is typically not advised, though, because they bypass the simulation procedure and frequently result in non-physical system behaviour. The alternative is to set up a model-based actuator (such as a motor model with a certain transfer characteristic that actuates a hinge joint). The virtual testbed's associated simulation algorithms supply such model-based actuators. The same holds true for model-based sensors, such as an optical sensor that produces signal noise.
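The DPU mechanism described above can be sketched in a few lines: a unit with key-value parameters whose outputs are recomputed from its inputs at every synchronous step. All names, the threshold value, and the wiring are illustrative assumptions.

```python
class DPU:
    """A data processing unit scheduled synchronously: at every simulation
    step, inputs are read and outputs are recomputed."""

    def __init__(self, name, func, params=None):
        self.name = name
        self.func = func            # maps (inputs, params) -> outputs
        self.params = params or {}  # key-value parameters (cf. Figure 11.13)
        self.inputs = {}            # wired from sensors or other DPUs
        self.outputs = {}

    def step(self):
        self.outputs = self.func(self.inputs, self.params)

# A sensor DPU that emits True once the table angle passes a threshold.
end_switch = DPU(
    "end_switch",
    func=lambda inp, p: {"reached": inp["angle"] >= p["threshold"]},
    params={"threshold": 1.57},
)
end_switch.inputs = {"angle": 1.60}  # value fed in by an ideal sensor
end_switch.step()
print(end_switch.outputs)  # -> {'reached': True}
```

An asynchronous DPU would simply call `step()` on its own clock instead of once per simulation step.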
11.7 Example of an Educational Implementation

This section provides a simple example to show how the suggested formalism may be applied. The objective of a fictitious project shall be the creation of a conveyor system capable of moving a particular work piece from a given starting position to a certain end position. Based on a number of user needs and constraints, a rotary table has previously been chosen as the mechanism. As soon as the work piece
reaches its end position, which will be detected by a corresponding sensor signal, the rotation will cease. To support the product development, a virtual prototype will be created.
11.7.1 Structural Decomposition

Based on the physical architecture of the system, systems engineers build the breakdown structure of the physical system (from the overall system down to the component level) and of the data processing system. All subsequent phases build on the structural breakdown of the system under development.
11.7.2 Development or Selection of Product Parts

Within the corresponding technical disciplines, the requirements (produced by systems engineering) are now increasingly transformed into product components. As a result, the physical system's breakdown structure is turned into a product tree. In this example, we choose readily available components, although they might equally well have been newly developed.
11.7.3 Identify the Particular Information Need

Every simulation begins with a question; hence, the digital twin of the system under development is activated as soon as specific questions are raised. Here, we wish to examine how the transportation process behaves under dynamic conditions when a straightforward feedback-loop controller is used. As a result, we must develop a simulation scenario that replicates the mechanical system's dynamics and incorporates the control logic (from sensor to controller to actuator).
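The sensor-to-controller-to-actuator loop for the rotary table can be sketched as follows. The kinematic plant model, the threshold, the set speed, and the step size are all invented for illustration; the chapter's actual scenario uses the dynamics models and DPUs described above.

```python
def simulate(threshold=1.5, set_speed=1.0, dt=0.01, max_steps=2000):
    """Turn the table at a constant set speed until the end-position
    sensor fires, then command the motor to stop (all units notional)."""
    angle = 0.0
    for _ in range(max_steps):
        if angle >= threshold:   # sensor signal reaches the controller
            break                # controller commands the motor to stop
        angle += set_speed * dt  # velocity-based motor turns the table
    return angle

final_angle = simulate()
print(round(final_angle, 2))  # close to the 1.5 rad threshold
```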
11.7.4 Establish or Expand the Model Library

Before the scenario can be constructed, it must be verified that the model library contains all required model elements. In this instance, we cannot reuse a digital twin from earlier projects. As a result, we must first populate the model library with model components covering the viewpoints and hierarchical levels that are pertinent at this project stage. From the task specification in the preceding section, we determine that the best way to examine the system of interest is to account for all mechanical dynamics. So, using rigid bodies with geometric primitives as replacement
shapes, with dimensions taken from the corresponding technical data sheets (provided by the domain experts), we approximate the components. The rigid body dynamics viewpoint alone would be sufficient to complete the required analysis in this scenario because it is relatively straightforward. But as an extra viewpoint, we include precise CAD geometries to show features of the formalism such as inter-domain links. This has the added benefit of allowing future analysis of the system using optical sensors and image processing techniques [2]. Some model components for the rotary table parts are exemplified on the right side of Figure 11.14.
Figure 11.14: Creation of the model library for the digital twin.
11.7.5 Specify the Simulation Scenario

Once the system's essential components have been properly characterised, the simulation scenario may be set up. This is done in the multi-step procedure described in the preceding sections. Configuring the physical system is the initial stage. Ten model elements from various hierarchical levels are chosen in the set-up table, as shown in Figure 11.15, and are subsequently added to the scenario. As a result, Figure 11.16 shows a 10 × 10 interconnection table. The interconnection table may be split into three sub-tables, two of which define intra-domain links (tinted blue), and one of which specifies inter-domain links between the two chosen views (tinted green). The initial state of each model element is entered in the cells of the main diagonal; in the rigid body dynamics domain, this comprises the initial pose (position and orientation) and the initial velocity (angular and translational). Four intra-domain connections are defined as well, providing
Figure 11.15: Configuration table for the simulation scenario under consideration.
Figure 11.16: Interconnection table for the rotary table simulation example.
physical joints between the respective rigid bodies [3]. No additional intra-domain links are needed in the geometry (CAD) domain. Finally, using spatial reference links, all model elements of the rigid body dynamics domain are connected to model elements of the geometry domain within the inter-domain sub-table. These links make sure that the geometry models spatially track the rigid body models. After the physical system has been configured, the information processing system is configured. We choose the three data processing components from the second tier of the structural decomposition, as shown in Figure 11.17. We also include three ideal sensors that provide the present state of the relevant dynamic properties of the connected model elements, and a velocity-based motor model to drive the connected rotary joint. With the set-up of the information processing system, the entire simulation scenario is defined. Additionally, the scenario may now be passed to a code
11.7 Example of an Educational Implementation
207
Figure 11.17: Configuration list for the simulation scenario under consideration.
generator that produces code executable within a virtual testbed, provided the individual model elements adhere to specific requirements regarding structure and syntax. Although the translation depends on the target testbed [4], it may in principle be fully automated. In this example, we show how to generate code for two separate testbeds, namely OpenModelica and VEROSIM (see Figures 11.18–11.23).
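A toy version of this translation step is shown below: emitting scenario code from the element selection and the link table. The target syntax is a simplified, Modelica-flavoured pseudolanguage invented for this sketch, not actual OpenModelica or VEROSIM output, and the component names are hypothetical.

```python
def generate_code(model_name, elements, links):
    """Emit scenario code from the set-up table (elements) and the
    interconnection table (links)."""
    lines = [f"model {model_name}"]
    for name, kind in elements.items():
        lines.append(f"  {kind} {name};")
    lines.append("equation")
    for a, b in links:
        lines.append(f"  connect({a}.flange, {b}.flange);")
    lines.append(f"end {model_name};")
    return "\n".join(lines)

code = generate_code(
    "RotaryTableScenario",
    elements={"base": "RigidBody", "tableTop": "RigidBody"},
    links=[("base", "tableTop")],
)
print(code.splitlines()[0])  # -> model RotaryTableScenario
```

Because the formal tables fully describe the conceptual model, the generator needs no information beyond them, which is what makes the translation automatable.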
11.7.6 Run the Scenario Model and Evaluate the Simulation Results

Once the scenario code has been generated, the simulation runs can be carried out. The following simulations were run in VEROSIM and OpenModelica with an integration step size of 10 in each case. Figure 11.21 displays the simulation results for the motor torque profiles, the observed angular velocity of the rotary table, and the speed set point of the motor controller. Because the binary output of the sensor DPU coincides with the output of the rotary table controller, the speed set point serves as a suitable representation of the rotary table's switching point. The simulation results are consistent with one another, and the amount of variance shown in Figure 11.22 is minimal. The differing numerical integration procedures provide an explanation for the variance. The divergence of the simulation results does, however, contain a conspicuous abrupt peak that precisely matches the switching time of the sensor DPU. This behaviour is explained by a detailed investigation of the switching point; refer to Figure 11.23. Based on DAEs, Modelica directly models all physical connections between model components. OpenModelica employs a numerical integration procedure with variable step sizes, which is used to correctly detect, locate, and then simulate the discontinuities introduced by the
Figure 11.18: Executable OpenModelica scenario code.
Figure 11.19: Execution of the simulation scenario in OpenModelica.
sensor switch. In order to determine the switching point more precisely, the lower plot in Figure 11.23 shows one additional simulation point between 3.28 and 3.29. In contrast, VEROSIM schedules the simulation with a fixed step size (see the top plot in Figure 11.23). As
Figure 11.20: Execution of the simulation scenario in VEROSIM.
a result, the sensor's switching point only takes effect in the following equidistant simulation step. The resulting delay causes the abrupt peak in Figure 11.22.
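The effect of fixed-step versus variable-step scheduling on the detected switching time can be reproduced in a few lines. With a fixed step, a threshold crossing is only registered at the next grid point, whereas a variable-step scheme can bisect inside the bracketing interval. The numbers below are illustrative, not the chapter's measured data.

```python
def crossing_fixed(f, threshold, dt, t_end):
    """Fixed-step scheduling: report the first grid point at or past the crossing."""
    t = 0.0
    while t <= t_end:
        if f(t) >= threshold:
            return t
        t += dt
    return None

def crossing_bisect(f, threshold, lo, hi, tol=1e-9):
    """Variable-step style: bisect inside a bracketing interval [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) >= threshold else (mid, hi)
    return hi

f = lambda t: 0.5 * t  # angle grows linearly; crosses 1.0 at t = 2.0
print(crossing_fixed(f, 1.0, dt=0.3, t_end=5.0))    # reported late, near t = 2.1
print(round(crossing_bisect(f, 1.0, 1.8, 2.4), 6))  # -> 2.0
```

The gap between the two reported times is exactly the kind of delay that produces the peak in the divergence plot.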
Figure 11.21: Comparison of the simulation outcomes obtained from running the scenarios in VEROSIM and OpenModelica.
210
Chapter 11 Digital Twin Simulator
Figure 11.22: Analysis of the discrepancy between OpenModelica and VEROSIM simulated results.
Figure 11.23: Detailed investigation of the switching point of the sensor device.
In any case, the real-world behaviour of the system is likely to combine the outcomes of the two simulations. A real-world embedded system will exhibit temporal delays, and DPU computations will take some time. Therefore, neither the equidistant scheduling of a VEROSIM simulation nor the idealised direct connection (from speed input to mechanical torque) achieved in Modelica via the DAE would match the real-world system exactly. Nevertheless, the outcomes of the two simulation runs inherently support one another, allowing a first prediction of the real-world system's behaviour.
11.8 Conclusion

We have presented a Python-based simulation framework for hybrid discrete–continuous applications such as digital twins. The suggested system employs SimPy, a Python-based discrete-event simulation framework, and includes support for integrating existing continuous process modelling frameworks. We provide a systematic method for incorporating continuous processes into SimPy's discrete, event-scheduling simulation mechanism and use an example to show how the framework is organised. Ongoing work concentrates on advancing the simulation framework on a number of levels, including, but not restricted to, integration with existing continuous solvers, in-the-loop analysis, real-time simulation, and a sensor and actuator interface. The approach presented here makes it easier to design and run sophisticated simulations while also organising the corresponding model artefacts explicitly. Thanks to the model library's hierarchical division into levels of integration, variants, and views, the simulation scenarios are adaptable and flexible enough to meet varying information demands. The proposed strategy fits the design process well: all steps of model construction, model improvement, and investigation are related to objects in the model library or the scenario library. The digital twin thereby accumulates the evolving information and understanding of the system over time, representing the whole design process. Thanks to the demonstrated integration of the operational view into the configuration framework, the technique may be applied to any functional model. The formalism currently supports all pertinent elements of complex mechatronic systems, and the framework fully covers the smart technologies' knowledge cycle.
Furthermore, as shown, the newly designed formal simulation model configuration structure may serve as a workable foundation for a methodical approach to automated simulation model production, enabling the generation of scenarios for different simulation tools. On this basis, it is also possible to cross-validate simulation runs carried out with various simulation tools against the underlying formal method.
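The hybrid discrete–continuous idea named in the conclusion can be illustrated in a self-contained way without SimPy: a continuous process is advanced by a fixed-step explicit Euler integrator, and a discrete event fires when a threshold is crossed. With SimPy the discrete side would be expressed as processes and timeouts; everything below, including the rate and threshold values, is an illustrative stand-in rather than the framework's actual code.

```python
def hybrid_run(x0=0.0, rate=0.5, threshold=1.0, dt=0.1, t_end=10.0):
    """Integrate a continuous state x and emit a discrete event when it
    crosses a threshold; the event handler then stops the process."""
    events = []
    x, t = x0, 0.0
    fired = False
    while t < t_end:
        x += rate * dt  # continuous part: explicit Euler step
        t += dt
        if not fired and x >= threshold - 1e-9:  # tolerance for float drift
            events.append(("threshold_crossed", round(t, 2)))  # discrete event
            fired = True
            rate = 0.0  # handler: freeze the continuous process
    return events

print(hybrid_run())  # -> [('threshold_crossed', 2.0)]
```

In a SimPy-based realisation, the integration loop would live inside a process function yielding `env.timeout(dt)`, so that discrete events from other processes interleave naturally with the continuous updates.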
References

[1] D. Gibb, M. Johnson, J. Romaní, J. Gasia, L. F. Cabeza, A. Seitz, Process Integration of Thermal Energy Storage Systems – Evaluation Methodology and Case Studies, Appl. Energy 230, 2018, 750–760. https://doi.org/10.1016/j.apenergy.2018.09.001
[2] T. Nagasawa, C. Pillay, G. Beier, K. Fritzsche, F. Pougel, T. Takama, K. The, I. Bobashev, Accelerating Clean Energy Through Industry 4.0: Manufacturing the Next Revolution. United Nations Industrial Development Organization: Vienna, Austria, 2017.
[3] S. Bonilla, H. Silva, M. Da Terra Silva, R. Franco Gonçalves, J. Sacomano, Industry 4.0 and Sustainability Implications: A Scenario-Based Analysis of the Impacts and Challenges, Sustainability 10, 2018, 3740. https://doi.org/10.3390/su10103740
[4] European Commission, Joint Research Centre, K. Kavvadias, J. Jiménez Navarro, G. Thomassen, Decarbonising the EU Heating Sector: Integration of the Power and Heating Sector. Publications Office: Luxembourg, 2019. https://doi.org/10.2760/072688
[5] G. Mendes, C. Ioakimidis, P. Ferrão, On the Planning and Analysis of Integrated Community Energy Systems: A Review and Survey of Available Tools, Renew. Sustain. Energy Rev. 15, 2011, 4836–4854. https://doi.org/10.1016/j.rser.2011.07.067
[6] V. Parida, D. Sjödin, W. Reim, Reviewing Literature on Digitalization, Business Model Innovation, and Sustainable Industry: Past Achievements and Future Promises, Sustainability 11, 2019, 391. https://doi.org/10.3390/su11020391
[7] S. Lange, J. Pohl, T. Santarius, Digitalization and Energy Consumption. Does ICT Reduce Energy Demand?, Ecol. Econ. 176, October 2020, 106760. https://doi.org/10.1016/j.ecolecon.2020.106760
[8] D. Kiel, J. Müller, C. Arnold, K. I. Voigt, Sustainable Value Creation: Benefits and Challenges of Industry 4.0, Int. J. Innov. Manag. 21, 2017, 1740015. https://doi.org/10.1142/S1363919617400151
[9] F. Tao, H. Zhang, A. Liu, A. Y. C. Nee, Digital Twin in Industry: State-of-the-Art, IEEE Trans. Ind. Inform. 15(4), 2019, 2405–2415. https://doi.org/10.1109/TII.2018.2873186
[10] Y. Lu, C. Liu, K. I. K. Wang, H. Huang, X. Xu, Digital Twin-driven Smart Manufacturing: Connotation, Reference Model, Applications and Research Issues, Robot. Comput.-Integr. Manuf. 61, 2020, 101837. https://doi.org/10.1016/j.rcim.2019.101837
[11] W. Yu, P. Patros, B. Young, E. Klinac, T. G. Walmsley, Energy Digital Twin Technology for Industrial Energy Management: Classification, Challenges and Future, Renew. Sustain. Energy Rev. 161, 2022, 112407. https://doi.org/10.1016/j.rser.2022.112407
[12] D. Jones, C. Snider, A. Nassehi, J. Yon, B. Hicks, Characterising the Digital Twin: A Systematic Literature Review, CIRP J. Manuf. Sci. Technol. 29, 2020, 36–52. https://doi.org/10.1016/j.cirpj.2020.02.002
[13] T. Y. Melesse, V. D. Pasquale, S. Riemma, Digital Twin Models in Industrial Operations: A Systematic Literature Review, Procedia Manuf. 42, 2020, 267–272. https://doi.org/10.1016/j.promfg.2020.02.084
[14] M. Grieves, Digital Twin: Manufacturing Excellence through Virtual Factory Replication, White Paper. Florida Institute of Technology: Melbourne, FL, USA, 2014, pp. 1–7.
[15] E. Glaessgen, D. Stargel, The Digital Twin Paradigm for Future NASA and U.S. Air Force Vehicles, in: Proceedings of the 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Honolulu, HI, USA, 23–25 Apr. 2012, pp. 1–14. https://doi.org/10.2514/6.2012-1818
[16] C. Cimino, E. Negri, L. Fumagalli, Review of Digital Twin Applications in Manufacturing, Comput. Ind. 113, 2019, 103130. https://doi.org/10.1016/j.compind.2019.103130
[17] E. Negri, L. Fumagalli, M. Macchi, A Review of the Roles of Digital Twin in CPS-based Production Systems, Procedia Manuf. 11, 2017, 939–948. https://doi.org/10.1016/j.promfg.2017.07.198
[18] M. Liu, S. Fang, H. Dong, C. Xu, Review of Digital Twin about Concepts, Technologies, and Industrial Applications, J. Manuf. Syst. 58, 2021, 346–361. https://doi.org/10.1016/j.jmsy.2020.06.017
[19] W. Kritzinger, M. Karner, G. Traar, J. Henjes, W. Sihn, Digital Twin in Manufacturing: A Categorical Literature Review and Classification, IFAC-PapersOnLine 51, 2018, 1016–1022. https://doi.org/10.1016/j.ifacol.2018.08.474
[20] C. Wagner, J. Grothoff, U. Epple, R. Drath, S. Malakuti, S. Grüner, M. Hoffmeister, P. Zimermann, The Role of the Industry 4.0 Asset Administration Shell and the Digital Twin during the Life Cycle of a Plant, in: Proceedings of the IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Torino, Italy, 4–7 September 2018, pp. 1–8. https://doi.org/10.1109/ETFA.2017.8247583
[21] K. Josifovska, E. Yigitbas, G. Engels, Reference Framework for Digital Twins within Cyber-Physical Systems, in: Proceedings of the 2019 IEEE/ACM 5th International Workshop on Software Engineering for Smart Cyber-Physical Systems (SEsCPS), Montreal, QC, Canada, 28 May 2019, pp. 25–31. https://doi.org/10.1109/SEsCPS.2019.00012
[22] G. Steindl, W. Kastner, Semantic Microservice Framework for Digital Twins, Appl. Sci. 11, 2021, 5633. https://doi.org/10.3390/app11125633
[23] E. J. Tuegel, A. R. Ingraffea, T. G. Eason, S. M. Spottswood, Reengineering Aircraft Structural Life Prediction Using a Digital Twin, Int. J. Aerosp. Eng. 2011, 2011, 1–14. https://doi.org/10.1155/2011/154798
[24] F. Tao, M. Zhang, A. Y. C. Nee, Digital Twin Driven Smart Manufacturing. Academic Press: Cambridge, MA, USA, 2019. https://doi.org/10.1016/C2018-0-02206-9
[25] T. R. Wanasinghe, L. Wroblewski, B. K. Petersen, R. G. Gosine, L. A. James, O. De Silva, G. K. I. Mann, P. J. Warrian, Digital Twin for the Oil and Gas Industry: Overview, Research Trends, Opportunities, and Challenges, IEEE Access 8, 2020, 104175–104197. https://doi.org/10.1109/ACCESS.2020.2998723
Chapter 12 Case Studies: Smart Cities Based on Digital Twin

Abstract: This chapter explains the development of 3D parametric models as digital twins to evaluate the energy performance of private and public buildings, considered one of the main challenges of recent years. The ability to gather, manage, and communicate content related to energy saving in buildings for the development of smart cities must be regarded as a specificity of the age of connection, one that raises citizens' awareness of these fields.

How will digital twins be used to build smart cities? Important points:
– The significance of digital twins in smart cities
– The importance of smart cities
– What hurdles must be overcome?
– Case study

In light of their low cost and convenience, digital twins have become increasingly popular in various industries as the Internet of Things (IoT) has developed. The idea of the smart city becomes tangible through digital twins. From urban planning to land-use improvement, they can effectively guide the city. Digital twins make it possible to test plans before they are implemented, uncovering flaws before they become reality. Housing, wireless network antennas, solar panels, and public transportation are examples of infrastructure elements that may be planned and studied using digital methods. What goals will be achieved by creating smart cities with digital twins? There is little question that communities able to take advantage of this technology will thrive. Beyond the technical accomplishment, they will become more ecologically, economically, and socially sustainable. Some specialists, however, have expressed concerns about whether this development would outperform more established methods.
According to digital twin professionals, while computer-aided design already supports and gives insight into the design process, the digital twin would provide the same capability plus a physical counterpart with which designers can engage. Having such a counterpart allows designers to anticipate any potential issues. Another excellent illustration of how this technology improves present processes is the comparison of virtual technologies with smart maps driven by geospatial analytics. The objective of these maps is to help users visualise, interpret, and investigate diverse, huge, and complicated georeferenced data sets. Again, the digital twin offers a similar service, but it also represents
https://doi.org/10.1515/9783110778861-012
an in-service physical object that changes dynamically, in near real time, as the real object's status changes. It allows users to create simulations that help them plan for the future; that capability is not available in smart maps. To summarise, it is too early to say that digital twins will solve the complicated difficulties that cities face, but they will undoubtedly be an important part of any city's long-term resilience plan. As with every invention, there may be some drawbacks, but the advantages exceed them. A total revamp of old systems could be disastrous; hence it would be prudent to use digital twins alongside existing systems in the early phases. Investing in this technology in tandem with current practices would also allow governments to recoup public funds previously spent on ineffective methods. These savings can subsequently be invested elsewhere in the city.

Keywords: Digital twin, smart city, digital city, big data, AI
12.1 Introduction

Governments all over the world are beginning to address the challenges produced by urbanisation in the twenty-first century. Urban poverty, high urban costs, traffic congestion, housing shortages, lack of urban investment, inadequate urban financial and administrative capacities, growing inequality and criminality, and environmental degradation are all exacerbated by urbanisation. The globe is seeing tremendous urbanisation: according to the United Nations (2018), urban regions will house 68% of the world's population by 2050. Cities are home to a growing proportion of the world's exceptionally talented, educated, and creative people, resulting in highly concentrated and diversified pools of knowledge and knowledge-creating organisations. Researchers have investigated worldwide urbanisation trends, the spectacular extent and quick pace of urbanisation in emerging nations, and the differing characteristics of urbanisation patterns in developing and established nations [1–3]. The creation of smart/digital cities is driven by individuals throughout the world asking their local governments to improve their quality of life through innovative design and re-creation of urban environments. As an inevitable element of digital transformation, the digital twin enables cities to accomplish continuous remote monitoring and more effective decision-making. One study examined the smart city plans of 15 major cities throughout the world, noting their organisational coordination and practices. New York devised a strategy to become "the world's most digitised city", focusing on four key areas: access, open government, engagement, and industry. The United States set an ambitious goal of zero waste by the current year, with various smart city support instruments for clean-tech and development ventures. The British government proposed a Smart London
Plan, which aimed to harness the innovative potential of new technologies to serve London and improve Londoners' lives. Singapore launched its "Smart Nation" strategy in 2015, becoming the first country to position itself as a smart nation. President Xi of China places strong emphasis on modernising the national governance structure and governance capabilities.
12.2 Digital Transformation Is an Unavoidable Trend for Smart Cities

Digital transformation is a critical decision in urban governance. Every significant technical innovation alters the global landscape. Manchester and Liverpool, both cities with populations of over 100,000 in the United Kingdom, spearheaded the first industrial revolution; the United States then led the globe into the second. Today, new technologies such as the Internet, big data, artificial intelligence, blockchain, the IoT, and fifth-generation wireless systems are developing rapidly; new businesses such as the sharing economy, driverless vehicles, digital currency, and smart marketing are emerging; and new concepts such as environment and well-being are gaining momentum. Leaders are actively considering ways of modernising their cities through new technologies, structures, and ideas. The silicon-based revolution resulted in a total overhaul of the "data, algorithm, computing power" decision-making mechanism, altering data collection and processing methodologies [4]. The notion of smart cities has gained popularity practically everywhere in the last 20 years, and people have begun to consider novel approaches to constructing smart cities. One review examined 7,845 studies on smart cities from 1991 to 2018. According to the authors, the terms sustainability and sustainable development have become popular subjects of interest not only to academics, notably in the fields of environmental economics, technology and science, and urban planning, but also to urban policymakers and practitioners, who consider smart cities to be complex systems with a symbiotic relationship among their parts: people, institutions, technology, organisations, the built environment, and physical infrastructure are all linked together.
12.3 Cities and Their Digital Twins

The eventual objective of digital transformation is the digital twin. Professor Grieves proposed the notion of a digital twin in his product lifecycle management course. A digital twin is made up of three key components: physical objects, virtual products, and the links that connect them. A digital twin is a complete description of a potential or
Chapter 12 Case Studies: Smart Cities Based on Digital Twin
actually manufactured object, from the micro-atomic to the macro-geometrical level [5]. To enhance and improve virtual entities, digital twin technology closely integrates hardware, software, and IoT technologies. NASA was the first to use digital twin technology, as an information-mirroring concept in the field of aviation and aerospace. A digital twin continually predicts a vehicle's or structure's health, remaining useful life, and probability of mission success. Researchers in the manufacturing industry have presented various digital twin-based shop-floor smart production management methods and approaches, and digital twins have been demonstrated to be a viable way of combining the real and virtual worlds of production. One author describes a novel digital twin-based product design process; he also investigated the effective control of equipment energy usage at the digital twin plant level, and examined a digital twin case study in manufacturing, reporting the implementation method, application procedure, and case results. In combination, the digital twin serves as the foundation for simulation-driven support systems as well as for control and management decisions. Digital twins (i) integrate diverse forms of physical object information; (ii) persist throughout the full life cycle of physical objects, co-evolving with them and continually gathering relevant information; and (iii) characterise and optimise physical objects. Digital twin technology is more than a single application or a basic technique; one author proposed a digital twin model with five dimensions. Governments will be able to observe and foresee previously unmeasurable cues from the physical world, allowing them to make more thorough judgements.
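The five-dimensional digital twin model mentioned above is commonly summarised as a tuple of physical entity, virtual model, services, twin data, and connections. As an illustration only (the class and field names below are our own, not taken from the source), a minimal sketch in Python:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Illustrative container for the five-dimensional digital twin model:
    physical entity, virtual model, services, twin data, and connections."""
    physical_entity: str                              # the real-world object
    virtual_model: dict                               # parameters mirroring it
    services: list = field(default_factory=list)      # monitoring, prediction, ...
    twin_data: list = field(default_factory=list)     # sensor readings over time
    connections: list = field(default_factory=list)   # links between dimensions

    def ingest(self, reading: dict) -> None:
        """Append a sensor reading and keep the virtual model in sync."""
        self.twin_data.append(reading)
        self.virtual_model.update(reading)

turbine = DigitalTwin("wind-turbine-07", {"rpm": 0})
turbine.ingest({"rpm": 14, "temp_c": 41})
print(turbine.virtual_model["rpm"])  # 14
```

The point of the sketch is simply that every sensor reading flows into both the accumulated twin data and the live virtual model, so the virtual entity co-evolves with the physical one.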
This chapter describes the design of a digital twin city (DTC): all entities in DTCs exist concurrently, in parallel, with history records that can be traced, a present state that can be checked, and a future state that can be projected [6]. In this design, the digital twin becomes the most essential power engine of urban intelligence. Decision-makers can deliver more orderly urban governance, and citizens may engage in the processes of urban governance and scrutinise government choices. The condition of reciprocal symbiosis between digital and physical entities is referred to as the digital twin: data, models, and physical things are all integrated in digital twin technology. The mapping collection of entities in the digital realm is referred to as digital twins, and a DTC gathers digital twins of a city's entities using digital twin technology.
12.4 Advanced Technology Used in Digital Twin Cities

The objective of DTCs is to improve the effectiveness and sustainability of logistics, energy consumption, communications, urban planning, disaster relief, building development, and transportation. In this section we consider several critical technologies for DTCs, including surveying and mapping technology, building information modelling (BIM) technology, the IoT, 5G, collaborative computing, blockchain, and simulation. Each of the technologies described plays a distinctive role in DTCs
[7–9]. Surveying and mapping technology is the foundation for gathering static data on city structures. BIM technology serves as the foundation for city asset and infrastructure management. The IoT and 5G are the foundations for efficiently gathering dynamic data and feedback. Blockchain technology is the foundation for transactional, logistical, and human-behaviour trust mechanisms. 5G collaborative computing provides the foundation for efficient real-time responses. Simulation technology serves as the foundation for policy support, planning, and early warning systems.
12.5 Smart Cities and Digital Twins

The digital twin is frequently regarded as a simulation technique that fully uses physical models, sensors, historical operation data, and so on to combine information across multiple disciplines, physical quantities, scales, and probabilities. It acts as a computational mirror of real things: the mirror body reflects the whole life cycle of the physical entity it represents. It is generally accepted that the fundamental parts of digital twins include physical objects, virtual models, data, connections, and services. The core of digital twins is the existence of a bidirectional mapping between physical and virtual space. This bidirectional mapping differs from a unidirectional mapping, which only transfers information from physical things to digital objects. The latter is also known as a digital shadow, meaning that "a change in the physical object's state leads to a change in the digital object, but not the other way around". Digital twins, by contrast, permit virtual objects to manage real entities without the need for human intervention, which digital shadows do not. The digital twin can map the physical entities and the attributes, structure, state, performance, function, and behaviour of systems to the virtual world, forming a high-fidelity, dynamic, complex, multi-scale, multi-physical-quantity model, which provides an effective approach to observing, recognising, understanding, and connecting the virtual and the real [10, 13]. Digital twins first appeared and played a role in product and manufacturing design, and later in aircraft, automation, shipbuilding, healthcare, and energy.
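The distinction drawn above between a digital shadow (one-way mapping) and a digital twin (bidirectional mapping) can be sketched in a few lines of Python. The class and method names are illustrative, not from any particular framework:

```python
class PhysicalAsset:
    """Stand-in for a real-world object with observable state."""
    def __init__(self, state):
        self.state = dict(state)

class DigitalShadow:
    """One-way mapping: physical changes propagate to the digital object only."""
    def __init__(self, asset):
        self.asset, self.state = asset, dict(asset.state)

    def sync(self):
        self.state = dict(self.asset.state)   # physical -> digital

class DigitalTwin(DigitalShadow):
    """Bidirectional mapping: the virtual object can also command the physical one."""
    def command(self, key, value):
        self.state[key] = value
        self.asset.state[key] = value         # digital -> physical actuation

pump = PhysicalAsset({"valve": "closed"})
twin = DigitalTwin(pump)
twin.command("valve", "open")                 # no human intervention required
print(pump.state["valve"])  # open
```

A `DigitalShadow` can only `sync()` from the asset; only the `DigitalTwin` subclass adds the reverse, virtual-to-physical channel that the text identifies as the defining feature of a twin.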
With the rapid advancement of technologies and industries such as the IoT, big data, cloud computing, and artificial intelligence (AI), the basis of smart city construction has steadily evolved from the original static 3D modelling level to the digital twin level, which joins dynamic digital technology with the static 3D model, forming the new concept of the digital twin city [11]. The digital twin city, as the name suggests, is a broad application of the digital twin idea at the city level. It attempts to create a complex, massive framework that can map reality and the virtual environment onto each other and communicate with each other from
both sides. It can connect a physical city with its "twin city", creating a pattern of integration between the automated city in the informational dimension and the real city in the physical dimension. A technical basis and a data foundation are required to build a digital twin city. The term "data foundation" refers both to the vast amounts of big data generated daily by the city's many sensors and cameras and to the digital subsystems that have been gradually developed by municipal organisations. The term "technical basis" refers to technological systems such as 5G, big data, cloud computing, and the IoT. In a digital twin city, sensors, cameras, and other digital subsystems are used to gather information about the infrastructure's operational status, the allocation of local resources, and the movement of people, goods, and vehicles [12]. The city will be more effective with technologies such as 5G transmitting these data to the web and to the regional government. Building digital twin cities will prompt major advances in intelligent urban planning, management, and services, and will act as "a new starting point" for building smart cities. This will help to achieve the goals of smart city planning, management, and services, as well as the visualisation of all the city's information. The data-integrated city is a significant part of a smart city as well as the end objective of a digital city; it is a vital component whose essential capacity enables the city to achieve agility. The transition of urban informatisation from qualitative to quantitative change, driven by technology, gives the construction of smart cities more room for development and likewise marks a turning point.
12.5.1 Digital Twin-Based Features of Green Infrastructure

The idea of a digital twin may be seen differently depending on one's perspective. The "five-dimensional model" of the digital twin starts from different aspects, summarises and examines the various current understandings of the idea, and proposes the ideal qualities of digital twins in the model dimension, data dimension, connection dimension, service/function dimension, and physical dimension (as displayed in Table 1). As indicated by Tao et al. [13], the understanding and implementation of digital twins cannot be separated from specific objects, particular applications, and specific demands, since digital twins at different stages display different features. It is therefore acceptable for a real application to satisfy the individual demands of users rather than matching the established "digital twin" in every respect. The digital twin city shows particular attributes, based on the best features of the digital twin, as an extension of the digital twin idea into the urban sphere.
Combined with the ongoing discussion, digital twin cities have four primary characteristics: precise mapping, virtual-real interaction, software definition, and intelligent feedback. Precise mapping: by placing sensors at the air, ground, underground, and river levels of the physical city, the digital twin city achieves comprehensive digital modelling of urban roads, bridges, manhole covers, lamp covers, buildings, and other infrastructure, in order to fully perceive and dynamically monitor the city's operating status and ultimately form a solid informational understanding and model of the physical city. Virtual-real interaction allows all kinds of "traces", including those left by people, logistics, and vehicles in the actual city, to be generated and then searched for in the virtual city. Software definition: using software platforms, the twin city [14] creates a virtual model that corresponds to the physical city and simulates how urban residents, events, and things behave in the virtual world. Intelligent feedback refers to the function of giving acceptable and workable countermeasures, as well as intelligent early warning of potential negative impacts, conflicts, and risks for the city, via design, planning, and modelling on the digital twin city. The IoT, cloud computing, big data, AI, and other new-generation IT technologies can be coordinated on the basis of the digital twin city to guide and optimise the planning and management of real cities, improving the supply of citizens' services and further assisting the creation of smart cities. In the sections that follow, we outline five typical applications to show how digital twin-based smart cities can genuinely improve city operations.
12.5.2 Utilisations of Digital Twins in Smart Cities

12.5.2.1 Smart City Management Brain
The "Smart City" initiative depends on the digital twin city. City authorities can take the initiative to establish a Smart City Operating Brain (SCOB) and appoint a Chief Operating Officer to lead it. Figure 12.1 shows the SCOB's management structure for a twin city based on digital technology. The essential responsibilities of the SCOB are to (1) participate in and review the high-level design of the city; (2) plan and review the overall objectives, structures, projects, and management systems of the information development of various industries; (3) formulate relevant strategies, guidelines, and standards; (4) be responsible for the integration and sharing of urban information assets; (5) monitor city operation, multi-department coordination, and orders; and (6) cultivate the development of a network of open big data services, applications, and collaborations. The foundational component of the SCOB is the publicly available cloud storage platform. The office of the SCOB may begin working once the Public Information Cloud Service Platform is set up, and the authorities can simply use the platform's "applications"
[Figure 12.1 depicts a layered architecture: an application layer (municipal administration, one-stop services, urban design, transport simulation, waste management, building data models, urban virtual portraiture, community and emergency operations platforms, and a blockchain-based trading platform) supported by smart application development technology; an "urban mind" layer (a city geospatial data system integrating BIM and GIS, network data observation, and big cloud storage and routing); and a facilities layer (IoT sensors and computing; Wi-Fi, fibre-optic, and 5G infrastructure; sensor, actuation, and smart IoT infrastructure; automatic map-based techniques).]
Figure 12.1: The composition of digital twin cities.
to take over management of the smart city. The Public Information Cloud Service Platform's organisational structure is portrayed in Figure 12.2. The platform is composed of an infrastructure layer, an application layer, and a platform for software development and operation. The platform gathers data using infrastructure such as servers, networks, and sensor devices. It then uses cloud platforms, data, platforms, and software as services to deliver cloud service platforms across various industries, including smart municipal administration, smart public security, and smart tourism. The platform can set up a feedback loop for data collection, processing, storage, cleaning, mining, and application. The brain of the smart city is the operations centre built on the digital twin city. It serves both as the hub of the urban IoT and as a resource pool for big data in cities [15–17]. It coordinates and directs city operations and gathers complete data on them to establish effective cross-departmental and cross-regional collaboration and emergency management capabilities. The construction of a smart city operations centre based on a digital twin city is shown schematically in Figure 12.3. With the integration of data from the cloud data centre and the digital subsystems of various departments, a multi-portal integrated city control and emergency centre based on the smart city operating brain can be developed. The centre also uses essential data analysis tools, including data mining and multidimensional analysis, as well as analytical software such as IoT intelligence and real-time operation monitoring. This centre can lower the cost of municipal information technology projects and their upkeep, lower the cost of government operations, and lift urban efficiency.
Figure 12.2: Structure of a smart city public information cloud service platform based on digital twins.
[Figure 12.3 depicts an urban operations command-and-control and emergency response centre comprising a collaborative call centre, operations observation, control protocols, urgent participation, and unified user management; applications for in-depth data analysis (reporting forms, multidimensional analysis, data exploration); and an information level built on a cloud data centre spanning public safety and emergency care knowledge domains.]
Figure 12.3: A schematic illustration of the smart city operating brain based on digital twin technology.
12.6 Digital Twin Services for the Smart Grid

The smart grid digital twin is a simulation process that incorporates multiple physical quantities, multiple spatial and temporal scales, and multiple probabilities. It fully uses the physical model, online measurement data, and historical operation data of the power grid, and integrates multidisciplinary knowledge from the fields of energy, machinery, networks, environment, and economics. By portraying the smart grid in the virtual world, it mirrors the grid's complete life cycle. Power-line inspections are under pressure because of rising demand for electricity, which is leading to an annual expansion in the scale of power network transmission lines (Figure 12.4), as well as the frequent occurrence of natural emergencies and the degradation of equipment operating conditions. Most current line inspections are done manually. This approach has inspection blind spots and is not only ineffective; it also increasingly falls short of the requirements for power grid
inspections (Figure 12.5). It is therefore essential for modern power companies to implement a safe, effective, and intelligent inspection mode.
Figure 12.4: The increase of different voltage classes from 2012 to 2014.
[Figure 12.5 is a bar chart (scale 0–7, EHV series) comparing the Guangdong, Guangxi, Hainan, Guizhou, and Yunnan power grids, the Shenzhen and Guangzhou power supply bureaus, and the average.]
Figure 12.5: State Energy Branches Employment Status in 100 km.
We can create a method for online, real-time insulator damage detection and AI-based online calculation of tree barrier safety distances based on digital twin smart grid technology (Figure 12.6). By enabling real-time inspection, defect interpretation, and result reporting, this method substantially reduces labour requirements. Another digital twin-based smart city application is the Smart City Traffic Brain. Cities gather millions of pieces of big data on movement patterns every day from mobile phones, surveillance
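The tree barrier safety-distance calculation mentioned above reduces, at its core, to a geometric clearance check between a conductor and vegetation. The sketch below illustrates the idea; the clearance thresholds and coordinates are placeholder values for illustration, not regulatory figures or the system's actual algorithm:

```python
import math

# Illustrative minimum clearance (metres) per voltage class; these values
# are placeholders, not real regulatory limits.
CLEARANCE_M = {"110kV": 4.0, "220kV": 4.5, "500kV": 7.0}

def tree_barrier_alert(conductor_xyz, tree_top_xyz, voltage_class):
    """Flag a tree that encroaches on the conductor's safety distance.
    Returns (breach?, distance in metres)."""
    d = math.dist(conductor_xyz, tree_top_xyz)
    return d < CLEARANCE_M[voltage_class], round(d, 2)

breach, d = tree_barrier_alert((0, 0, 20), (3, 0, 18), "220kV")
print(breach, d)  # True 3.61
```

In a digital twin grid, conductor and tree-crown coordinates would come from LiDAR or photogrammetry feeds, and a breach would trigger the defect-reporting workflow described in the text.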
camera footage, taxis, indoor positioning systems, buses, metros, mobile applications, and more. The Road Traffic Smart Emergency System (Figure 12.7) was built using technologies including holographic perception [18, 20], spatiotemporal analysis, and data mining, and is a vital component of the smart city traffic brain.
Figure 12.6: Tree barrier safety distance based on the digital twin smart grid.
The system interfaces with various emergency platform resources, such as the city alarm system, officers, the traffic condition system, the accident emergency system, and the traffic video system, and displays them through a single interface to precisely integrate multi-network resources and real-time dynamic traffic information. The Wuhan Road Traffic Smart Safety System uses the real-time traffic data stream to manage traffic big data. It implements a heavy-traffic index evaluation algorithm that combines historical data on congestion, traffic volumes, vehicle speeds, and other information to provide functions such as (a) accurate assessment of road congestion levels, (b) real-time ranking of road congestion, and (c) comparative study between historical congestion and current congestion. The system also offers features including replaying congestion events, video confirmation, major event security, real-time planning, and routing. With the use of smart traffic clouds and GIS computation, this set of features offers a dynamic mechanism to help traffic departments relieve and clear gridlock [19, 21, 22].
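A congestion index and real-time ranking of the kind described above might look like the following sketch. The formula (one minus the ratio of observed speed to free-flow speed) and the road names are illustrative assumptions, not the Wuhan system's actual algorithm:

```python
def congestion_index(observed_speed, free_flow_speed):
    """Index in [0, 1]: 0 = free flow, 1 = standstill (illustrative formula)."""
    return round(max(0.0, 1.0 - observed_speed / free_flow_speed), 2)

def rank_roads(readings):
    """readings: {road: (observed_kmh, free_flow_kmh)} -> most congested first."""
    scored = {r: congestion_index(v, f) for r, (v, f) in readings.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

ranked = rank_roads({
    "Ring Rd":   (12, 60),   # heavy congestion
    "2nd Ave":   (45, 50),
    "Riverside": (58, 60),
})
print(ranked[0])  # ('Ring Rd', 0.8)
```

Feeding the same function historical speed averages instead of live readings yields the historical index, so comparing current with historical congestion (function (c) above) is a simple element-wise difference of the two rankings.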
12.7 Public Epidemic Services in Smart Cities

Based on a data cloud platform, an analysis system, a response system, and client terminals, a Smart City Municipal Epidemiological Service System may be
Figure 12.7: A schematic illustration of the construction and function of a smart city traffic brain based on digital twins.
built (Li et al. 2020) (Figure 12.8). The patient's spatiotemporal information is created by combining patient data from major hospitals' information systems with telecom-operator-provided data on the patient's spatiotemporal trajectory, which is then saved in the patient's spatiotemporal database. This database
[Figure 12.8 links employer health-coverage declarations, precision maintenance and supervision, environmental and individualised safety alerts, subscribers' daily activity trajectories collected via mobile phones, crowdsourced risk indicators, cloud platforms, victims' spatiotemporal paths over the preceding two weeks, close-contact probability by location and duration of exposure, and AI-based spatiotemporal location analysis.]
Figure 12.8: Schematic representation of the major national disease prevention and treatment system.
can be connected to both the epidemic big data analysis system and the spatial data cloud platform. The analysis system uses AI analysis, spatiotemporal proximity analysis, and other technologies to pinpoint the spread of the disease and identify those in close proximity. The data are then sent to the response system. The response system establishes connections with government organisations, employers, and individual users, and offers data services for self-isolation and protection to individuals, as well as to the government for use in epidemic prevention and management, employee medical condition declarations, and business references [22]. At present, services such as identifying close contacts, anticipating high-risk areas, and assisting with the study of disease transmission dynamics models can be realised by using spatial big data and AI location-aware systems to trace back the historical movements of individuals (Figure 12.9). Specifically: 1. A comprehensive assessment of "disease traceability". The capability of assessing the degree of epidemic exposure risk in various regions can be realised using population data on visits from high-risk regions, data distribution heat maps, and key epidemic regions, along with the locations of reported cases and the locations of communities where reported cases have occurred.
[Figure 12.9 depicts a smart city public epidemic service platform: third-party sensitive data and telecommunications operators' location data flow through the mobile Internet, the IoT, smart city networks, and location-aware networking into an urban spatiotemporal big data AI platform; outputs support government management, disease control centres, public safety, decision-making assistance, and community organisations, with four input channels (the general public, commercial/public transport, society, and law enforcement agencies).]
Figure 12.9: A schematic design of the smart city public epidemic service platform.
2. A clear indication for "risk warning". Based on the disease transmission model, the system can assess the risk levels of various areas, especially high-risk areas. The system can send real-time mobile phone warnings to those entering a region, reminding them to take precautions, or redirect traffic so that people avoid approaching the area (exposure warning). Furthermore, the system can offer early risk-warning services concerning the contacts of confirmed cases, which can be used as a decision-making foundation by the disease management division to precisely identify vulnerabilities and risks and boost infection prevention effectiveness (contact early warning). 3. "System prevention and control" managed globally and in an integrated manner. The system displays a heat map of the pandemic's confirmed cases at several levels, including the country, provinces, cities, districts, streets, and communities. It also shows how the infection is spreading across the nation. To enable continuous monitoring of the current epidemic situation, the system flexibly configures dynamic data and analyses the movement of significant populations in various regions, cities, and districts using multidimensional charts. 4. Approval control for "resumption of work and production". The system establishes a business's return-to-work declaration and review procedure, which is used to complete employee health status information and monitor the business's use of pandemic preventive measures. The system can simultaneously assess an enterprise's abnormal state intelligently and offer real-time pre-emptive guidance services to support the integration of resumed work with the management of pandemic prevention.
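The contact tracing at the heart of this section rests on spatiotemporal proximity: two trajectories count as a close contact when they come within some distance of each other within some time window. A toy sketch, in which the thresholds (5 m, 15 min) and the track format are hypothetical:

```python
from math import dist

def close_contacts(case_track, person_track, max_m=5.0, max_min=15):
    """Return (case_minute, person_minute) pairs where a person came within
    max_m metres of a confirmed case within max_min minutes.
    Tracks are lists of (minute, x_metres, y_metres) samples."""
    hits = []
    for t1, *p1 in case_track:
        for t2, *p2 in person_track:
            if abs(t1 - t2) <= max_min and dist(p1, p2) <= max_m:
                hits.append((t1, t2))
    return hits

case = [(0, 0, 0), (30, 100, 0)]          # confirmed case's sampled path
person = [(10, 3, 4), (40, 500, 500)]     # another subscriber's sampled path
print(close_contacts(case, person))  # [(0, 10)]
```

A production system would run this as an indexed spatial join over millions of operator-provided trajectories rather than a nested loop, but the matching criterion is the same.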
12.8 Services for Flood Situation Monitoring

The normalised flood disaster life cycle includes forecasting and normalised monitoring before disasters, dynamic continuous monitoring during disasters, and assessment and simulation after disasters. Digital twin technology can be used to establish real-time urban flood simulation, mapping, and decision services, adding service features to smart city initiatives based on the city's flood inundation mapping and management. The three essential parts of this digital twin are the normalised and dynamic big data monitoring of floods, the flood service application, and the flood data map. The main objective is to gather data from monitoring and collection devices, including ground sensors for water conditions in rivers and lakes, rainfall conditions from urban weather data, and the dynamic trajectories of people and vehicles. The monitoring of cloud and rainfall water volume, lake water volume, and changes in river and reservoir water levels in the upper and
lower basins can be achieved using satellite remote sensing technologies for large-scale air and sky scenarios. Below are several blockchain-related digital twin city frontier scenarios:
12.8.1 Smart Healthcare

The mismatch between scarce resources and escalating demand highlights the need for powerful, intelligent, and sustainable healthcare services. The steps below demonstrate why distributed ledger technology is a strong option (Vora et al., 2018a):
Step 1: The patient's health information, such as pulse, glucose level, breathing rate, blood pressure, and body temperature, is first gathered and checked by IoT sensors.
Step 2: The administrators review the information gathered and produce a report for the patient.
Step 3: The doctors study the report they receive and then advise the necessary course of action.
Step 4: To exchange the treatment reports for additional analysis, doctors may opt to use a distributed database; the verified data are shared in an encrypted format.
Step 5: The patient asks for access to their medical record from the "cloud service provider (CSP)".
Step 6: The patient receives the treatment record's encrypted file following successful validation.
Step 7: The patient uses their own key to decrypt the encrypted file and retrieve their original medical history.
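The ledger underlying the steps above can be sketched as a toy hash-chained record store. This stands in for a real blockchain: there is no consensus protocol, and the record bodies are stored in plain text here, whereas a production system would encrypt them as described in Steps 4–7:

```python
import hashlib, json

def add_block(chain, record):
    """Append a record to a toy hash-chained ledger; each block commits to
    the previous block's hash, making tampering detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(record, sort_keys=True)
    block = {"prev": prev_hash, "body": body,
             "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest()}
    chain.append(block)
    return block

def verify(chain):
    """True if no block in the chain has been tampered with."""
    prev = "0" * 64
    for b in chain:
        recomputed = hashlib.sha256((b["prev"] + b["body"]).encode()).hexdigest()
        if b["prev"] != prev or recomputed != b["hash"]:
            return False
        prev = b["hash"]
    return True

ledger = []
add_block(ledger, {"patient": "P-01", "pulse": 72, "temp_c": 36.8})  # Step 1
add_block(ledger, {"patient": "P-01", "report": "normal"})           # Steps 2-4
print(verify(ledger))  # True
```

Altering any stored record body breaks the hash chain, so `verify` returns `False`; this tamper evidence is the property that makes a distributed ledger attractive for shared treatment records.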
12.8.2 Intelligent Transit

Smart transportation system security and privacy concerns can be effectively addressed by blockchain technology. Without the help of a centralised authority, vehicles can cooperate with roadside units or with each other in vehicular networks. However, in such an autonomous environment, adversaries can introduce false or misleading information to serve their own interests. Vehicle authentication is therefore required to provide secure data sharing among these vehicles.
Stage 1: IoT sensors gather and monitor vehicle information, including model type, speed, direction of travel, and load.
Stage 2: The distributed network nodes monitor the data gathered and produce a real-time communication packet.
Stage 3: The behaviours of the vehicles are computed using the packet that was received. The sharing strategy is recorded on a "CSP".
Chapter 12 Case Studies: Smart Cities Based on Digital Twin
Stage 4: Vehicles ask the CSP for access to the data that relates to them.
Stage 5: After successful validation, vehicles execute the algorithm to determine their behaviour. Successful outcomes are recorded to improve the algorithm.
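The five transit stages can be sketched as follows. This is an illustrative sketch only: the `VehiclePacket` fields mirror Stage 1, `classify_behaviour` stands in for the unspecified behaviour algorithm of Stage 3, and `csp_log` is a hypothetical stand-in for the CSP record; the thresholds and the validation check are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class VehiclePacket:
    """Stage 1-2: the real-time communication packet built from sensor data."""
    vehicle_id: str
    model: str
    speed_kmh: float
    heading_deg: float
    load_kg: float

def classify_behaviour(pkt: VehiclePacket, speed_limit=80.0, max_load=1000.0) -> str:
    """Stage 3: derive a behaviour label from a received packet."""
    if pkt.speed_kmh > speed_limit:
        return "speeding"
    if pkt.load_kg > max_load:
        return "overloaded"
    return "normal"

csp_log = []  # Stages 3/5: the shared record kept with the CSP

def validate_and_run(pkt: VehiclePacket) -> str:
    """Stages 4-5: a vehicle is validated, runs the algorithm, logs the outcome."""
    if not pkt.vehicle_id:                      # toy validation check
        raise PermissionError("validation failed")
    label = classify_behaviour(pkt)
    csp_log.append((pkt.vehicle_id, label))     # recorded to improve the algorithm
    return label

print(validate_and_run(VehiclePacket("V-42", "truck", 95.0, 180.0, 600.0)))  # prints "speeding"
```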
12.8.3 Intelligent Supply Chain

Numerous products are manufactured and sold globally thanks to sophisticated supply chains; yet the participants in these chains (such as merchants, marketers, transporters, and suppliers) have relatively limited visibility into the products. Such product data is nonetheless essential: customers need it to build trust in brands, and businesses need it to plan ahead and anticipate market trends. Blockchain-based digital twin cities may be the answer.
Stage 1: IoT sensors gather and monitor entity data, including logistics, requirements, ownership, and value.
Stage 2: The distributed network nodes monitor the gathered data and produce a real-time communication packet.
Stage 3: The behaviours of the entities are computed using the received packet. The sharing strategy is recorded on a CSP.
Stage 4: Requests for access to relevant data concerning merchants, wholesalers, transporters, and suppliers may be made to the CSP.
Stage 5: After successful verification, entities receive the data they require.
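The distributed-ledger idea behind this section can be illustrated with a minimal hash chain: each hand-off in the supply chain is a block linked to the previous one, so any party granted access in Stage 5 can audit the product's provenance. This is a toy sketch, not a real blockchain implementation; the actors, item names, and event fields are invented for the example.

```python
import hashlib
import json

def make_block(prev_hash: str, event: dict) -> dict:
    """Stages 1-2: wrap one supply-chain event in a block linked to its predecessor."""
    payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    return {"prev": prev_hash, "event": event,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain: list) -> bool:
    """Stage 5: an entity granted access re-hashes every link to audit provenance."""
    for i, block in enumerate(chain):
        payload = json.dumps({"prev": block["prev"], "event": block["event"]},
                             sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False                        # a block's content was altered
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                        # the chain linkage was broken
    return True

# Sensors log each hand-off as an immutable, linked event.
chain = [make_block("0" * 64, {"actor": "supplier", "item": "part-7", "step": "produced"})]
chain.append(make_block(chain[-1]["hash"], {"actor": "transporter", "item": "part-7", "step": "shipped"}))
chain.append(make_block(chain[-1]["hash"], {"actor": "retailer", "item": "part-7", "step": "received"}))
assert verify(chain)
```

Tampering with any recorded event changes its hash and breaks the linkage, which is exactly the property that lets merchants, transporters, and suppliers trust shared product data without a central authority.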
12.9 Digital Twin Cities Framework

Digital twin cities share the following characteristics with other digital twin systems: self-perception, self-decision, self-organisation, self-execution, and self-adaptation. As shown in Figure 12.1, they are composed of innovative applications, a city brain platform, and infrastructure construction. The city brain platform, which comprises the BIM- and GIS-based city geographic information platform (e.g. ArcGIS), the interconnected city information asset framework, the intelligent application support system, the building perception platform, and the feedback platform, is the intellectual core of the digital twin city. The foundation for
digital twin cities is created by the city geographic information platform, which aggregates city data from the real world and maps it onto the digital platform. The integrated city data resource platform unifies administrative data from the city with core data gathered by sensors, and serves as a significant building block for smart urban governance. The building perception and response platform is an essential component in tying the physical and virtual cities together. The commercial 5G network supports cluster development and data synchronisation between the virtual and real worlds. Emerging AI, big data, and IoT technologies are used by the intelligent application management platform to empower the data through machine learning, operations optimisation methods, and modelling capabilities.
Building digital twin cities starts with the construction of infrastructure, which serves as their backbone. It supplies data to the city brain platform, analyses and utilises that data, and gives feedback to the physical environment in real time. A flexible collection of applications makes up the evolving application layer. To supply scenario applications, communication services, and simulation services, it invokes city information models (CIMs) and city data. Real-time information on environmental elements, geo-space, and urban architecture is one of the scenario services. The network services include monitoring current behaviours, forecasting future behaviours, and tracking and tracing the historical activities of physical entities. The simulation service mimics real services and supports decision-making through the creation of pre-plans; it can simulate times, events, and situations.
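The core loop of the perception and response platform — physical sensor readings streaming into the virtual model, and feedback flowing back out — can be sketched as follows. The `VirtualCityModel` class, the district sensor identifiers, and the alert threshold are all hypothetical, invented only to illustrate the synchronisation pattern described above.

```python
class VirtualCityModel:
    """Minimal stand-in for the digital twin's virtual city state."""

    def __init__(self):
        self.state = {}

    def update(self, sensor_id: str, reading: float) -> None:
        """Perception: mirror one physical sensor reading into the virtual model."""
        self.state[sensor_id] = reading

    def feedback(self, sensor_id: str, threshold: float = 70.0) -> str:
        """Response: flag districts whose reading crosses a threshold."""
        return "alert" if self.state[sensor_id] > threshold else "ok"

twin = VirtualCityModel()

# Physical side: sensor readings stream into the virtual model in real time...
for sensor_id, reading in [("district-1/traffic", 42.0), ("district-2/traffic", 88.0)]:
    twin.update(sensor_id, reading)

# ...and the response platform pushes feedback back to the physical environment.
assert twin.feedback("district-1/traffic") == "ok"
assert twin.feedback("district-2/traffic") == "alert"
```

A production city brain platform would replace the in-memory dictionary with CIMs backed by the GIS platform and stream data over the 5G network, but the update-then-feedback cycle is the same.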
12.10 Conclusion

To realise the perception, control, and intelligent servicing of people and things, a smart city depends on integrating the real world with the digital world established by the digital city, the IoT, and cloud computing. The digital twin city is undeniably a new starting point for the construction of contemporary smart cities. Digital twin-based smart cities offer several opportunities for economic transformation, smart urban governance, and intelligent public services, allowing a more harmonious development of humanity and nature. To ensure that various smart city applications can be used effectively and affordably, the realisation of smart cities requires the construction of a more complete spatial data infrastructure. The big data challenge of smart cities creates both new opportunities and new difficulties. To advance the growth of the digital services sector, better realise the many smart applications of the Internet-plus smart city, and promote the digital economy, excelling in technological development and research is essential. The creation of smart cities is a key endeavour. To
make a city smart, high-level design and overall planning should be carried out according to the unique characteristics of each city, and an operations brain and operations centre should be established for smart cities. The development of smart cities can only be accomplished effectively by excelling in the design, planning, and infrastructure development of physical cities, and by establishing suitable standards.
References
[1] M. Batty, Digital Twins, Environ. Plan. B Urban Anal. City Sci. 45, 2018, 817–820.
[2] T. Kuhn, Digitaler Zwilling, Inform.-Spektrum. 40, 2018, 440–444.
[3] F. Dembski, U. Wössner, C. Yamu, Digital Twin, Virtual Reality and Space Syntax: Civic Engagement and Decision Support for Smart, Sustainable Cities. In Proceedings of the 12th International Space Syntax Conference, Beijing, China, 8–13 July 2019; pp. 316:1–316:13.
[4] E. L. Glaeser, G. Ponzetto, A. Shleifer, Why Does Democracy Need Education?, NBER Working Paper 12128. National Bureau of Economic Research: Cambridge, MA, USA, 2006.
[5] M. Batty, The New Science of Cities. The MIT Press: Cambridge, MA, USA, 2013.
[6] J. Portugali, H. Meyer, E. Stolk, E. Tan (Eds.), Complexity Theories of Cities Have Come of Age: An Overview with Implications to Urban Planning and Design. Springer: Berlin/Heidelberg, Germany, 2012.
[7] C. Yamu, It Is Simply Complex(ity): Modelling and Simulation in the Light of Decision-making, Emergent Structures and a World of Non-linearity, DisP Plan. Rev. 199, 2014, 43–53.
[8] G. De Roo, E. Silva, A Planner’s Encounter with Complexity. Ashgate Publishing: Farnham, UK, 2010.
[9] W. J. Caldwell, Rediscovering Thomas Adams: Rural Planning and Development in Canada. UBC Press: Vancouver, BC, Canada, 2011.
[10] P. Neirotti, A. De Marco, A. Cagliano, G. Mangano, F. Scorrano, Current Trends in Smart City Initiatives: Some Stylized Facts, Cities. 38, 2014, 25–36.
[11] J. Shapiro, Smart Cities: Quality of Life, Productivity, and the Growth Effects of Human Capital, Rev. Econ. Stat. 88, 2006, 324–335.
[12] E. Glaeser, J. D. Gottlieb, Urban Resurgence and the Consumer City, Urban Stud. 43, 2006, 1275–1299.
[13] R. Florida, Bohemia and Economic Geography, J. Econ. Geogr. 2, 2002, 55–71.
[14] T. Bakici, E. Almirall, J. Warham, A Smart City Initiative: The Case of Barcelona, J. Knowl. Econ. 4, 2013, 133–148.
[15] A. D’Auria, M. Trequa, M. C. Vallejo-Martos, Modern Conceptions of Cities as Smart and Sustainable and Their Commonalities, Sustainability. 10, 2018, 2642.
[16] T. Nam, T. A. Pardo, Conceptualizing Smart City with Dimensions of Technology, People and Institutions. In Proceedings of the 12th Annual International Conference on Digital Government Research, College Park, MD, USA, 12–15 June 2011; pp. 282–291.
[17] A. Caragliu, C. Del Bo, P. Nijkamp, Smart Cities in Europe. In Proceedings of the 3rd Central European Conference in Regional Science, Košice, Slovakia, 7–9 October 2009; pp. 45–59.
[18] C. Harrison, I. A. Donelly, A Theory of Smart Cities. In Proceedings of the 55th Annual Meeting of the International Society for the System Sciences, Hull, UK, 17–22 July 2011; p. 15.
[19] J. Viitanen, R. Kingston, Smart Cities and Green Growth: Outsourcing Democratic and Environmental Resilience to the Global Technology Sector, Environ. Plan. A Econ. Space. 46, 2015, 803–819.
[20] Grand View Research, Smart Cities Market Analysis Report by Application, by Region, and Segment Forecasts 2019–2025. Available online: https://www.grandviewresearch.com/industry-analysis/smart-cities-market (accessed on 20 January 2020).
[21] A. Van Nes, C. Yamu, Space Syntax: A Method to Measure Urban Space Related to Social, Economic and Cognitive Factors, in C. Yamu, A. Poplin, O. Devisch, G. De Roo (Eds.), The Virtual and the Real in Planning and Urban Design: Perspectives, Practices and Applications. Routledge: London, UK, 2018, 136–150.
[22] A. Van Nes, C. Yamu, Introduction to Space Syntax in Urban Studies. Springer: Berlin/Heidelberg, Germany, 2020.
Index

3D modelling 53, 98 actuators 30, 33, 51, 131, 149, 156, 160, 201, 203 adaptive control 87 AI-based solution 67 AI-powered chatbot 67 architecture 1, 11, 30, 84, 106 artificial intelligence algorithms 14 artificial neurons 50 augmented reality 9, 17–18, 29 automation ML 115 automobile sensors 35 automotive industries 64 autonomous robots 18 awakening events 191 Behaviour Learning 133 big data 5, 17, 19, 24, 35, 50–52, 57–58, 94, 104, 123 blockchain 61, 218–219, 231 boundary 172, 174–176 Bridgestone 64
carla simulator 185 cloud innovation 117 clustering 32, 50, 82 computer layer 30–31 computerised twin 115–127, 129, 131, 133, 135, 139–142, 144–145, 150–151, 155–156, 158–164, 171, 175–179 computing power 2, 217 condition monitoring 87, 91, 94, 102 control 10–11, 18–19, 21, 23–24, 27, 29–30, 32, 35 control theory 77 conventional numerical approaches 77 CPS twinning 134, 142–143 cyber-actual frameworks 115 cyberattacks 24 cyberphysical system 1 cyberspace 39
DAF twinning system 130, 132, 144 data analytic 27, 108 data Fusion 71–72, 77–85 data transmission 27, 31, 87 decision trees 33, 50 Decision-Making 87, 89
deep learning 23, 49–50, 66, 94, 96, 102 detection 32, 49, 51, 65, 68–69 device testing 132 diagnosis 46, 48, 52, 63, 96, 156, 178 digital city 216 digital signal processing 77 digital transformation 39, 65, 87, 89, 110, 119, 217 digital twin 12–14, 17, 20, 22–23, 25, 27–28, 30, 34, 39–41, 43–45 DRL network 52 emerging technology 1 empty configuration 202 encryption 13, 22, 120–122 entrance testing 126 Ethernet/IP 143 five-layered model 220 framework 40, 50–51, 61, 80, 82, 89–90, 92, 94, 104, 109, 115–145, 149, 150–161 framework layer 222 framework testing 126
genuine system 163 geometric model 32 green infrastructure 220
healthcare 89, 95, 96–97, 115, 211 HMI 134 hybrid fusion 81–82 hybrid simulation 186 Industrial 4.0 87, 124 industrial Internet of things (IIoT) 61 Industry 4.0 17–20, 39–41, 65–68 information fusion 78, 79, 81 Information Safety 116 intelligent production 87, 91 Internet of things (IoT) 87, 117, 215 IoT gadget 117, 119
K-means 50
machine learning 3, 12, 31–32, 35, 49–52, 61 machines 18, 21, 24–25, 32–33, 50, 61–62, 65, 87, 99, 149, 151, 157–158, 224 manufacturing 61–68
mapping 36–37, 110 MDC simulation 187–188, 190 mixed-resolution simulation 186 model library 196–198, 202, 204 model synchronization 185 modelling 12, 17, 27–34, 47, 49, 53–54, 87, 89, 91, 94, 98, 101–102, 104, 106, 108 modern control frameworks 115 monitoring 2, 6, 12, 18, 21, 25, 27–28, 34, 37, 51, 55, 62, 67, 87–89, 91, 93–94, 97–98, 100, 102–104, 107–109, 111, 130, 132, 156, 178, 185, 188 multisensor data fusion 71, 77 network 9, 30, 102, 115–116, 120–143, 153, 163, 224–226, 231 network layer 30, 130 optimise 39, 42, 45–46, 51–52, 61, 64 perception 174 perturbation event 191, 193, 195 physical entity 32, 40, 71 physical gadget 39 physical layer 30, 201 prediction 1, 3, 9, 17, 21, 33, 49–50, 68, 71, 89, 93–94, 96–97, 108–109, 156, 178, 211 predictive maintenance 42–43, 62, 91, 98, 100, 102 privacy 23–26, 48, 97 probing activity 191 process industry 149 prognosis 172, 176 programmable logic controller 129 Python library 171 Python 134, 139, 143, 154, 159, 164, 172, 185, 188, 192, 194–195, 211 Q-learning 50, 52 real-time data 67–68, 71–72, 87, 92, 94, 98, 101, 117 reduce 3, 14, 37, 43, 62, 66–68, 75, 87, 93, 95, 110–111, 113, 187
regression 50 regulators 97, 138, 158 reinforcement learning 31, 50, 52 replication 125, 127, 131–133 replication mode 125, 127, 132–133, 143 reproduction 53, 87, 216, 218, 224, 230 result events 191 retail 20, 41, 65, 87, 106–107 robotic manipulator 67 safety analysis 133 savvy matrix 224 security 121–124, 127–145, 151, 174, 222, 225–229 security checking 115 sensors 1–2, 9, 11 Shrewd City 216, 219, 221, 233 shuffling 37, 110–111 SimPy 185, 188, 190, 191, 193, 194, 211 simulation 131, 140, 149, 152, 157, 159–160, 164 smart city 216, 221, 224–226 splitting 36, 110, 187 statistical estimation 77 supply chain 232 system data 21, 25 TCP/IP convention 134 traces 221 transformation 9, 31, 39, 65, 87, 89, 100, 119, 217, 232 transportation 57, 101–103, 107, 109 ultrasonic sensor 77 virtual environment 2, 31, 57, 94, 99, 107, 124, 127, 130, 138, 187 virtual model 219, 221 virtual twin 35, 45, 51, 110, 149 visualisation 110–111, 172, 186 wireframe modelling 32 wireless communication 30, 31
De Gruyter Series on Smart Computing Applications
ISSN 2700-6239 e-ISSN 2700-6247

Deep Learning for Cognitive Computing Systems
Edited by M. G. Sumithra, Rajesh Kumar Dhanaraj, Celestine Iwendi, Anto Merline Manoharan, 2023
ISBN 978-3-11-075050-8, e-ISBN (PDF) 978-3-11-075058-4, e-ISBN (EPUB) 978-3-11-075061-4

Cloud Analytics for Industry 4.0
Edited by Prasenjit Chatterjee, Dilbagh Panchal, Dragan Pamucar and Sarfaraz Hashemkhani Zolfani, 2022
ISBN 978-3-11-077149-7, e-ISBN (PDF) 978-3-11-077157-2, e-ISBN (EPUB) 978-3-11-077166-4

Advances in Industry 4.0: Concepts and Applications
Edited by M. Niranjanamurthy, Sheng-Lung Peng, E. Naresh, S. R. Jayasimha, Valentina Emilia Balas, 2022
ISBN 978-3-11-072536-0, e-ISBN (PDF) 978-3-11-072549-0, e-ISBN (EPUB) 978-3-11-072553-7

Soft Computing and Optimization Techniques for Sustainable Agriculture
Debesh Mishra, Suchismita Satapathy, Prasenjit Chatterjee, 2022
ISBN 978-3-11-074495-8, e-ISBN (PDF) 978-3-11-074536-8, e-ISBN (EPUB) 978-3-11-074537-5

Knowledge Engineering for Modern Information Systems: Methods, Models and Tools
Edited by Anand Sharma, Sandeep Kautish, Prateek Agrawal, Vishu Madaan, Charu Gupta, Saurav Nanda, 2021
ISBN 978-3-11-071316-9, e-ISBN (PDF) 978-3-11-071363-3, e-ISBN (EPUB) 978-3-11-071369-5

Knowledge Management and Web 3.0: Next Generation Business Models
Edited by Sandeep Kautish, Deepmala Singh, Zdzislaw Polkowski, Alka Mayura, Mary Jeyanthi, 2021
ISBN 978-3-11-072264-2, e-ISBN (PDF) 978-3-11-072278-9, e-ISBN (EPUB) 978-3-11-072293-2

Cloud Security: Techniques and Applications
Edited by Sirisha Potluri, Katta Subba Rao, Sachi Nandan Mohanty, 2021
ISBN 978-3-11-073750-9, e-ISBN (PDF) 978-3-11-073257-3, e-ISBN (EPUB) 978-3-11-073270-2
www.degruyter.com