Norbert Meyendorf, Nathan Ida, Ripudaman Singh, Johannes Vrana (Editors)

Handbook of Nondestructive Evaluation 4.0

With 622 Figures and 65 Tables
Editors

Norbert Meyendorf
Chemical Materials and Bio Engineering, University of Dayton, Dayton, OH, USA

Nathan Ida
Department of Electrical and Computer Engineering, The University of Akron, Akron, OH, USA

Ripudaman Singh
Inspiring Next, Cromwell, CT, USA

Johannes Vrana
Vrana GmbH, Rimsting, Germany
ISBN 978-3-030-73205-9
ISBN 978-3-030-73206-6 (eBook)
ISBN 978-3-030-73207-3 (print and electronic bundle)
https://doi.org/10.1007/978-3-030-73206-6

© Springer Nature Switzerland AG 2022

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Why do we care about NDE 4.0? Every industry is going through digital transformation, which brings about complete networking of devices, equipment, computers, and humans through the Industrial Internet of Things (IIoT). This fourth industrial revolution promises speed, reliability, and efficiencies not possible up until now. What about the NDE sector?

New production techniques such as 3D printing allow efficient, on-time production of small custom batches of unique and specialized parts that was previously not possible. Traditional manufacturing lines are moving toward lights-out operation (near-total automation). Does that not call for digital transformation of in-line quality assurance? Augmented reality provides real-time guidance and visualization that help make better decisions. Why not use that in NDE? Robotics and automation are improving worker safety and reducing human error. How about the well-being of inspectors working in hazardous environments?

Most experts agree that the digitalization of NDE offers unprecedented opportunities to the world of inspection: for infrastructure safety, inspector well-being, manufacturing quality, and even product design improvements. That is why we care about NDE 4.0.

Why do we care about this Handbook of NDE 4.0? While the community tends to agree on the value proposition of the digital transformation of NDE, it also recognizes the challenges associated with such a major shift in a well-established and regulated sector. Connecting systems will bring value only when each element understands the others and how it fits in. Transformation requires cooperation. Does that mean we need to understand more than our own business in NDE? Successful digital transformation requires new skills, competencies, knowledge, and even a leadership mindset. Would the NDE industry need that?

We are convinced that thought leaders need to come together and share knowledge and insights to create the next generation of competencies, technologies, products, business models, and application processes in a manner that makes the whole larger than the individual developments. We need a body of knowledge that brings together the state of the art on the subject, offers known solutions, opens new conversations, and even creates the human connections needed to advance the science and technology behind this domain of human significance.
What's in it for you? This Handbook of NDE 4.0 is one such effort, which Prof. Norbert Meyendorf and Prof. Nathan Ida took on as a follow-up to their previous work on the Handbook of Advanced NDE. This time they also invited Dr. Johannes Vrana and Dr. Ripudaman Singh to broaden the context and strengthen the content. Together, the four editors worked for 24 months with nearly 100 researchers to bring together 45 topical chapters, reaching well beyond basic NDE. These chapters are categorized under five parts:

1. Concepts and Trends: Eleven chapters providing the why, how, and what of NDE 4.0
2. Technical Disciplines: Thirteen chapters going deep into digitalization
3. Applications: Nine chapters going broad on materials and manufacturing processes
4. Industrial Domains: Seven chapters covering major sectors from oil and gas to aerospace
5. Business: Five chapters on human aspects and the organizational side of adoption

This volume aspires to help leaders and readers orient themselves on this new subject, reduce risk through awareness of emerging issues, discover the nuggets of wisdom relevant to their business, and eliminate some of the blind spots along their roadmap. This handbook does not promise all the answers but provides guidance to the extent possible through the latest work of the contributing authors. You are welcome to network with the authors on topics of mutual interest.

What next? The subject is evolving fast. The task of keeping the content up to date can, of course, never be complete, and any attempt at doing so can only be a snapshot of the current state of the art. In this spirit, you can expect the online version of the handbook to be updated continuously and to grow in content value for you.

We sincerely thank all those involved in the writing, editing, and production of this work. Please feel free to provide feedback on the content or to recommend additional content or authors to the editors. Happy reading and fruitful use of the content.

Norbert Meyendorf
Nathan Ida
Johannes Vrana
Ripudaman Singh
Contents

Volume 1

Part I Concepts and Trends . . . 1

1 Introduction to NDE 4.0 . . . 3
Johannes Vrana, Norbert Meyendorf, Nathan Ida, and Ripudaman Singh

2 Basic Concepts of NDE . . . 31
Norbert Meyendorf, Nathan Ida, and Martin Oppermann

3 History of Communication and the Internet . . . 77
Nathan Ida

4 Creating a Digital Foundation for NDE 4.0 . . . 95
Nasrin Azari

5 Digitization, Digitalization, and Digital Transformation . . . 107
Johannes Vrana and Ripudaman Singh

6 Improving NDE 4.0 by Networking, Advanced Sensors, Smartphones, and Tablets . . . 125
Chris Udell, Marco Maggioni, Gerhard Mook, and Norbert Meyendorf

7 Value Creation in NDE 4.0: What and How . . . 151
Johannes Vrana and Ripudaman Singh

8 From Nondestructive Testing to Prognostics: Revisited . . . 177
Leonard J. Bond

9 Reliability Evaluation of Testing Systems and Their Connection to NDE 4.0 . . . 205
Daniel Kanzler and Vamsi Krishna Rentala

10 NDE 4.0: New Paradigm for the NDE Inspection Personnel . . . 239
Marija Bertovic and Iikka Virkkunen

11 "Moore's Law" of NDE . . . 271
Norbert Meyendorf

Part II Technical Disciplines . . . 293

12 Industrial Internet of Things, Digital Twins, and Cyber-Physical Loops for NDE 4.0 . . . 295
Johannes Vrana

13 Compressed Sensing: From Big Data to Relevant Data . . . 329
Florian Römer, Jan Kirchhof, Fabian Krieg, and Eduardo Pérez

14 Semantic Interoperability as Key for a NDE 4.0 Data Management . . . 353
Christian T. Geiss and Manuel Gramlich

15 Registration of NDE Data to CAD . . . 369
Stephen D. Holland and Adarsh Krishnamurthy

16 NDE 4.0: Image and Sound Recognition . . . 403
Kimberley Hayes and Amit Rajput

17 Image Processing 2D/3D with Emphasis on Image Segmentation . . . 423
Andreas H. J. Tewes, Astrid Haibel, and Rainer P. Schneider

18 Applied Artificial Intelligence in NDE . . . 443
Ahmad Osman, Yuxia Duan, and Valerie Kaftandjian

19 The Human-Machine Interface (HMI) with NDE 4.0 Systems . . . 477
John C. Aldrin

20 Artificial Intelligence and NDE Competencies . . . 499
Ramon Salvador Fernandez Orozco, Kimberley Hayes, and Francisco Gayosso

21 Smart Monitoring and SHM . . . 553
Bianca Weihnacht and Kilian Tschöke

22 Sensors, Sensor Network, and SHM . . . 569
M. Faisal Haider, Amrita Kumar, Irene Li, and Fu-Kuo Chang

23 Probabilistic Lifing . . . 603
Kai Kadau, Michael Enright, and Christian Amann

24 Robotic NDE for Industrial Field Inspections . . . 641
Robert Dahlstrom

Volume 2

Part III Applications . . . 663

25 NDE for Additive Manufacturing . . . 665
Julius Hendl, Axel Marquardt, Robin Willner, Elena Lopez, Frank Brueckner, and Christoph Leyens

26 In Situ Real-Time Monitoring Versus Post NDE for Quality Assurance of Additively Manufactured Metal Parts . . . 697
Christiane Maierhofer, Simon J. Altenburg, and Nils Scheuschner

27 NDE in Additive Manufacturing of Ceramic Components . . . 735
Christian Wunderlich, Beatrice Bendjus, and Malgorzata Kopycinska-Müller

28 Inspection of Ceramic Materials . . . 755
Susanne Hillmann and Bernd Köhler

29 Testing of Polymers and Composite Materials . . . 775
Kara Peters

30 Characterization of Materials Microstructure and Surface Gradients Using Advanced Techniques . . . 799
Paul Graja and Norbert Meyendorf

31 Nondestructive Testing of Welds . . . 819
A. Juengert, M. Werz, R. Gr. Maev, M. Brauns, and P. Labud

32 Optical Coherence Tomography as Monitoring Technology for the Additive Manufacturing of Future Biomedical Parts . . . 859
Jörg Opitz, Vincenz Porstmann, Luise Schreiber, Thomas Schmalfuß, Andreas Lehmann, Sascha Naumann, Ralf Schallert, Sina Rößler, Hans-Peter Wiesmann, Benjamin Kruppke, and Malgorzata Kopycinska-Müller

33 NDE for Electronic Packaging . . . 883
Martin Oppermann, Johannes Richter, Jörg Schambach, and Norbert Meyendorf

Part IV Industrial Domains . . . 935

34 NDE 4.0 in Civil Engineering . . . 937
Ernst Niederleithinger

35 NDE 4.0 in Railway Industry . . . 951
Xiaorong Gao, Yu Zhang, and Jianping Peng

36 NDE in the Automotive Sector . . . 979
R. Gr. Maev, A. Chertov, R. Scott, D. Stocco, A. Ouellette, A. Denisov, and Y. Oberdorfer

37 Applications of NDE 4.0 Cases in the Automotive Industry . . . 1011
Matthias Nöthen

38 Digital Twin and Its Application for the Maintenance of Aircraft . . . 1035
Teng Wang and Zheng Liu

39 NDE in Energy and Nuclear Industry . . . 1053
Rafael Martínez-Oña

40 NDE in Oil, Gas, and Petrochemical Facilities . . . 1089
Sascha Schieke, Mark Geisenhoff, and Ke Wang

Part V Business . . . 1107

41 Best Practices for NDE 4.0 Adoption . . . 1109
Ripudaman Singh

42 Estimating Economic Value of NDE 4.0 . . . 1127
Lennart Schulenburg

43 Ethics in NDE 4.0: Perspectives and Possibilities . . . 1159
Ripudaman Singh and Tracie Clifford

44 Training and Workforce Re-Orientation . . . 1187
Ramon Salvador Fernandez Orozco

45 Are We Ready for NDE 5.0 . . . 1245
Ripudaman Singh

Index . . . 1263
About the Editors
Dr. Norbert Meyendorf retired in fall 2018 as deputy director of the Center for Nondestructive Evaluation and professor in the Aerospace Engineering Department at Iowa State University in Ames, Iowa. Before joining ISU in 2016, he held several appointments and ranks, most recently: branch director at the Fraunhofer Institute for Nondestructive Testing IZFP and later IKTS; director of the International University of Dayton/Fraunhofer Research Center in the School of Engineering at the University of Dayton, organizing collaborative projects between Fraunhofer and the University of Dayton; and program director of the master's program "Nondestructive Testing, M.Sc. (NDT)" at the Dresden International University (DIU) between 2011 and 2015. Norbert Meyendorf continues to be active as adjunct professor for micro- and nano-NDE at the University of Dresden and adjunct professor in the Department for Chemical, Materials and Bioengineering, University of Dayton. He is the author or co-author of more than 300 peer-reviewed journal articles, contributions to edited proceedings, and technical reports, and of numerous oral presentations at conferences, meetings, and workshops. He is editor in chief of the Journal of Nondestructive Evaluation and has edited several books and conference proceedings. His areas of expertise include solid state physics and physical analytics, welding metallurgy, materials testing, nondestructive evaluation (NDE), and structural health monitoring (SHM). Since 2001, he has been chairman or co-chairman of several conferences within the SPIE International Symposium on Nondestructive Evaluation for Health Monitoring and Diagnostics and later the Symposium for Smart Structures and NDE. In 2005, 2006, 2012, and 2013, he was chair or co-chair of the whole SPIE Symposium. In 2018 he became a fellow of SPIE. Norbert Meyendorf was founder and chair of two expert committees of the German Society for Non-Destructive Testing (DGZfP): the expert committees for structural health monitoring and for materials diagnostics. Between 2016 and 2018, he reorganized and directed the ASNT Section Iowa.

Dr. Nathan Ida is currently Distinguished Professor of Electrical and Computer Engineering at The University of Akron in Akron, Ohio, where he has been since 1985. His current research interests are in the areas of electromagnetic nondestructive testing and evaluation of materials at low and microwave frequencies, with particular emphasis on theoretical issues, on all aspects of modeling and simulation, and on related issues stemming from research in NDE. Starting with modeling of eddy current and remote field phenomena, and continuing with high frequency methods for microwave NDE, his work now encompasses the broad aspects of computational electromagnetics, where he has contributed both to the understanding of the interaction of electromagnetic fields with materials and to the development of new methods and tools for numerical modeling and simulation for, and beyond, NDE. Other areas of current interest include electromagnetic wave propagation and theoretical issues in computation, as well as communications and sensing, especially low-power remote control and wireless sensing. Much of this work has found its way into practice through industrial relations and consulting across industries as diverse as power generation, polymers, steel, medical, and software, spanning the globe. Dr. Ida has published extensively on electromagnetic field computation, parallel and vector algorithms and computation, nondestructive testing of materials, surface impedance boundary conditions, and sensing, among others, in over 400 publications. He has written nine books: two on computation of electromagnetic fields (one in its second edition); one on modeling for
nondestructive testing; one on nondestructive testing with microwaves; a textbook on engineering electromagnetics, now in its fourth edition; a textbook on sensing and actuation (now in its second edition); a book on the use of surface impedance boundary conditions; and others, including books on ground-penetrating radar and industrial sensing based on microwaves. Dr. Ida is a life fellow of the Institute of Electrical and Electronics Engineers (IEEE), a fellow of the American Society for Nondestructive Testing (ASNT), a fellow of the Applied Computational Electromagnetics Society (ACES), and a fellow of the Institution of Engineering and Technology (IET). Dr. Ida teaches electromagnetics, antenna theory, electromagnetic compatibility, sensing and actuation, as well as computational methods and algorithms. Dr. Ida received his B.Sc. in 1977 and M.S.E.E. in 1979 from Ben-Gurion University in Israel, and his Ph.D. from Colorado State University in 1983. He has held visiting and/or adjunct positions at various institutions, including NASA Glenn Research Center; the Federal University of Santa Catarina in Florianopolis, Brazil; McGill University in Montreal, Canada; Électricité de France, Paris, France; the University of Lille, Lille, France; and Université Pierre et Marie Curie, Paris, France.

Dr. Ripudaman Singh is a freelance innovation and strategy coach. He works with his clients to understand and prepare for the fourth industrial revolution. He has over 30 years of experience in creating new technologies, products, and innovation culture in academia, small business, and Fortune 500 companies spanning the aerospace and defense, energy and power, manufacturing, construction, and IT domains in India, Germany, and the USA. He has created an Innovate-Pedia: a compilation of best practices that we will now have a chance to use. His recognitions include the President of India Cash Prize for Research and the National Technical Education Award, as well as uninterrupted scholarships for 15 years, from high school in India to a postdoctorate at Georgia Tech. With an MS in strategic thinking from RPI in 2006, he began to apply his learnings to develop organizational innovation capacity.
He currently serves on the council of the CT Academy of Science and Engineering (the most prestigious body serving the needs of the state) and on the advisory boards of the University of Hartford and the University of New Haven. He is also an instructor with the Caltech Technology and Management Center. His most notable roles today include serving on the US delegation for ISO 56000 on Innovation Management and chairing the NDE 4.0 committee of ASNT.

Dr. Johannes Vrana, born in 1978, studied physics at the Technical University of Munich, Germany, and completed his Ph.D. in 2008 at the University of Saarland on thermographic testing. He then worked for Siemens Power and Gas in Orlando, USA, as well as in Berlin and Munich, Germany, where he was responsible for all supplier-related NDE questions and was chairman of the Siemens NDE Council. In addition to the worldwide harmonization of NDT specifications and the introduction of statistical tools, he was responsible for the development of automated NDT and SAFT. In 2015, he received an honorable mention for ingenuity at the U.S. Excellence Awards, and in 2016, he received the 4th prize for ingenuity at the Werner von Siemens Awards. In 2015, he started his own company, Vrana GmbH, in Rimsting, Germany, which specializes in NDE consulting and solutions, R&D, and software development. Moreover, he is chairman of the ICNDT (International Committee for NDT) Specialist International Group "NDE 4.0," of the ASNT (American Society for NDT) German Section, and of the DGZfP (German Society for NDT) subcommittees "Interfaces and Documentation for NDE 4.0" and "Automated Ultrasonic Testing." In 2019, he was awarded the DGZfP application award for the implementation of SAFT into the serial production of large rotor forgings.
Contributors
John C. Aldrin Computational Tools, Gurnee, IL, USA Simon J. Altenburg Bundesanstalt für Materialforschung und –prüfung, Berlin, Germany Christian Amann Siemens Energy, Mülheim, Germany Nasrin Azari Floodlight Software, Inc., Cary, NC, USA Beatrice Bendjus Testing of Electronics and Optical Methods, Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Dresden, Germany Marija Bertovic Federal Institute for Materials Research and Testing, Berlin, Germany Leonard J. Bond Iowa State University, Ames, IA, USA M. Brauns XARION Laser Acoustics GmbH, Vienna, Austria Frank Brueckner Fraunhofer Institute for Material and Beam Technology IWS, Dresden, Germany Luleå University of Technology, Luleå, Sweden Fu-Kuo Chang Aeronautics and Astronautics Department, Stanford University, Stanford, CA, USA A. Chertov Institute for Diagnostic Imaging Research, University of Windsor, Windsor, ON, Canada Tessonics Inc., Windsor, ON, Canada Tracie Clifford Chattanooga State Community College, Chattanooga, TN, USA Robert Dahlstrom Apellix, Jacksonville, FL, USA A. Denisov Tessonics Inc., Windsor, ON, Canada Yuxia Duan School of Physics and Electronics, Central South University, Changsha, Hunan, China Michael Enright Southwest Research Institute, San Antonio, TX, USA
M. Faisal Haider Aeronautics and Astronautics Department, Stanford University, Stanford, CA, USA Ramon Salvador Fernandez Orozco Fercon Group, Zapopan, Jalisco, Mexico Xiaorong Gao School of Physical Science and Technology, Southwest Jiaotong University, Chengdu, China Francisco Gayosso Crea Codigo, Guadalajara, Mexico Mark Geisenhoff Flint Hills Resources, St. Paul, MN, USA Christian T. Geiss clockworkX GmbH, Ottobrunn, Germany Paul Graja Fraunhofer IKTS, Dresden, Germany Manuel Gramlich clockworkX GmbH, Ottobrunn, Germany Astrid Haibel Beuth University of Applied Sciences, Berlin, Germany Kimberley Hayes Valkim Technologies, LLC, San Antonio, TX, USA Julius Hendl Institute for Materials Science, Technische Universität Dresden, Dresden, Germany Fraunhofer Institute for Material and Beam Technology IWS, Dresden, Germany Susanne Hillmann German Center for Rail Transport Research at the Federal Railway Authority, Dresden, Germany Stephen D. Holland Department of Aerospace Engineering, Iowa State University, Ames, IA, USA Nathan Ida Department of Electrical and Computer Engineering, The University of Akron, Akron, OH, USA A. Juengert Materials Testing Institute University of Stuttgart (MPA), Stuttgart, Germany Kai Kadau Siemens Energy, Inc., Charlotte, NC, USA Valerie Kaftandjian Vibrations and Acoustic Laboratory, INSA-Lyon, Villeurbanne Cedex, France Daniel Kanzler Applied validation of NDT, Berlin, Germany Jan Kirchhof Fraunhofer-Institut für Zerstörungsfreie Prüfverfahren IZFP, Ilmenau, Germany Bernd Köhler Accredited NDT Test Lab, Fraunhofer IKTS, Dresden, Germany Malgorzata Kopycinska-Müller Bio- and Nanotechnology, Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Dresden, Germany Fabian Krieg Fraunhofer-Institut für Zerstörungsfreie Prüfverfahren IZFP, Ilmenau, Germany
Adarsh Krishnamurthy Department of Mechanical Engineering, Iowa State University, Ames, IA, USA Benjamin Kruppke Max Bergmann Center of Biomaterials and Institute of Materials Science, Technische Universität Dresden, Dresden, Germany Amrita Kumar Acellent Technologies Inc., Sunnyvale, CA, USA P. Labud Salzgitter Mannesmann Forschung GmbH, Duisburg, Germany Andreas Lehmann Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Dresden, Germany Christoph Leyens Institute for Materials Science, Technische Universität Dresden, Dresden, Germany Fraunhofer Institute for Material and Beam Technology IWS, Dresden, Germany Irene Li Acellent Technologies Inc., Sunnyvale, CA, USA Zheng Liu School of Engineering, The University of British Columbia, Kelowna, BC, Canada Elena Lopez Fraunhofer Institute for Material and Beam Technology IWS, Dresden, Germany R. Gr. Maev Faculty of Sciences, Institute for Diagnostic Imaging Research, University of Windsor, Windsor, ON, Canada Marco Maggioni Proceq SA, Schwerzenbach, Switzerland Christiane Maierhofer Bundesanstalt für Materialforschung und –prüfung, Berlin, Germany Axel Marquardt Institute for Materials Science, Technische Universität Dresden, Dresden, Germany Fraunhofer Institute for Material and Beam Technology IWS, Dresden, Germany Rafael Martínez-Oña NDE consultant and AEND (Spanish Society for NDE), Madrid, Spain Norbert Meyendorf Chemical Materials and Bio Engineering, University of Dayton, Dayton, OH, USA Gerhard Mook Universitaet Magdeburg, Magdeburg, Germany Sascha Naumann Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Dresden, Germany Ernst Niederleithinger Bundesanstalt für Materialforschung und –prüfung, Berlin, Germany Matthias Nöthen Volkswagen AG, Wolfsburg, Germany Y. Oberdorfer Tessonics Europe GmbH, Frechen, Germany
Jörg Opitz Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Dresden, Germany Max Bergmann Center of Biomaterials and Institute of Materials Science, Technische Universität Dresden, Dresden, Germany Martin Oppermann Centre for Microtechnical Manufacturing, Technische Universität Dresden, Dresden, Germany Ahmad Osman Fraunhofer IZFP Institute for Nondestructive Testing, Saarbrucken, Germany Faculty of Engineering, University of Applied Sciences, Saarbrucken, Germany A. Ouellette Institute for Diagnostic Imaging Research, University of Windsor, Windsor, ON, Canada Jianping Peng School of Physical Science and Technology, Southwest Jiaotong University, Chengdu, China Eduardo Pérez Fraunhofer-Institut für Zerstörungsfreie Prüfverfahren IZFP, Ilmenau, Germany Kara Peters Department of Mechanical and Aerospace Engineering, North Carolina State University, Raleigh, NC, USA Vincenz Porstmann Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Dresden, Germany Amit Rajput XaasLabs Inc, Sunnyvale, CA, USA Vamsi Krishna Rentala Applied validation of NDT, Berlin, Germany Johannes Richter GÖPEL electronic GmbH, Jena, Germany Sina Rößler Max Bergmann Center of Biomaterials and Institute of Materials Science, Technische Universität Dresden, Dresden, Germany Florian Römer Fraunhofer-Institut für Zerstörungsfreie Prüfverfahren IZFP, Ilmenau, Germany Ralf Schallert Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Dresden, Germany Jörg Schambach GÖPEL electronic GmbH, Jena, Germany Nils Scheuschner Bundesanstalt für Materialforschung und –prüfung, Berlin, Germany Sascha Schieke Molex LLC, Lisle, IL, USA Thomas Schmalfuß Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Dresden, Germany Rainer P. Schneider Beuth University of Applied Sciences, Berlin, Germany
Luise Schreiber Fraunhofer Institute for Ceramic Technologies and Systems IKTS, Dresden, Germany Lennart Schulenburg VisiConsult X-ray Systems and Solutions GmbH, Stockelsdorf, Germany R. Scott Institute for Diagnostic Imaging Research, University of Windsor, Windsor, ON, Canada Ripudaman Singh Inspiring Next, Cromwell, CT, USA D. Stocco Institute for Diagnostic Imaging Research, University of Windsor, Windsor, ON, Canada Andreas H. J. Tewes Beuth University of Applied Sciences, Berlin, Germany Kilian Tschöke Fraunhofer Institute for Ceramic Technology Systems (IKTS), Dresden, Germany Chris Udell Voliro AG / Proceq SA, Zurich, Switzerland Iikka Virkkunen Aalto University, Espoo, Finland Johannes Vrana Vrana GmbH, Rimsting, Germany Ke Wang Molex LLC, Lisle, IL, USA Teng Wang The University of British Columbia, Kelowna, BC, Canada Bianca Weihnacht Fraunhofer Institute for Ceramic Technology Systems (IKTS), Dresden, Germany M. Werz Materials Testing Institute University of Stuttgart (MPA), Stuttgart, Germany Hans-Peter Wiesmann Max Bergmann Center of Biomaterials and Institute of Materials Science, Technische Universität Dresden, Dresden, Germany Robin Willner Fraunhofer Institute for Material and Beam Technology IWS, Dresden, Germany Christian Wunderlich Material Diagnostics, Fraunhofer-Institut für Keramische Technologien und Systeme IKTS, Dresden, Germany Yu Zhang School of Physical Science and Technology, Southwest Jiaotong University, Chengdu, China
Part I Concepts and Trends
1 Introduction to NDE 4.0

Johannes Vrana, Norbert Meyendorf, Nathan Ida, and Ripudaman Singh
Contents
The Industrial Revolutions . . . 4
Brief History . . . 4
Technologies Driving Industry 4.0 . . . 6
So, What Is Industry 4.0? . . . 13
The Revolutions in NDE Domain . . . 14
Brief History . . . 14
So, What Is NDE 4.0? . . . 16
Drivers of the Current Revolution in NDE . . . 21
NDE 4.0 Use Cases and Value Proposition . . . 22
Industry 4.0 for NDE . . . 22
NDE for Industry 4.0 . . . 25
NDE 4.0 as an Eco-System . . . 27
Summary . . . 28
Cross-References . . . 28
References . . . 29
J. Vrana
Vrana GmbH, Rimsting, Germany
e-mail: [email protected]

N. Meyendorf (*)
Chemical Materials and Bio Engineering, University of Dayton, Dayton, OH, USA
e-mail: [email protected]

N. Ida
Department of Electrical and Computer Engineering, The University of Akron, Akron, OH, USA
e-mail: [email protected]

R. Singh
Chief Innovation and Strategy, Inspiring Next, Cromwell, CT, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2022
N. Meyendorf et al. (eds.), Handbook of Nondestructive Evaluation 4.0,
https://doi.org/10.1007/978-3-030-73206-6_43
Abstract
Cyber technologies are offering new horizons for quality control in manufacturing and for the safety assurance of physical assets in service. The line between nondestructive evaluation (NDE) and Industry 4.0 is getting blurred, since both are sensory-data-driven domains. This multidisciplinary approach has led to the emergence of a new capability: NDE 4.0. The NDE community is coming together once again to define the purpose, chart the process, and address the adoption of emerging technologies. This handbook is an effort in that direction. In this chapter, the authors define the industrial revolutions and the technologies driving the change, and then use that context to understand the revolutions in NDE, leading up to the definition of NDE 4.0. In the second part of this chapter, the authors propose several value propositions, or use cases, under "NDE for Industry 4.0" and "Industry 4.0 for NDE," leading to clarity of purpose for NDE 4.0: enhanced safety and economic value for stakeholders within the NDE eco-system. Note: This chapter is based on content from "NDE 4.0—A Design Thinking Perspective" [5].
NDE 4.0 · Use cases · Value proposition · Future of NDE · NDT 4.0 · Industry 4.0 · Digital Twin · IIoT · OPC UA · Ontology · Semantic Interoperability · Industrial Revolutions
The Industrial Revolutions

Over the last three centuries, humanity has seen significant change in lifestyle, driven by three industrial revolutions. At this time in our history, we are going through the fourth revolution, in which physical assets are being connected to their digital twins through the IIoT, creating smart products, processes, and even factories. Machines are beginning to learn and to assist our cognitive functions. Small handheld devices are coming to be perceived as a life support system. How did we get here?
Brief History

The first industrial revolution began in England in the second half of the eighteenth century and brought a change from handcrafted forms of production to the mechanization of production with steam engines or renewable energy sources such as water power. Transportation started to change in the early nineteenth century with steam locomotives.

The second industrial revolution, triggered by electric power, enabled new industries such as chemical, pharmaceutical, and mechanical production engineering. It began at the end of the nineteenth century and led to the introduction of the assembly line (the first conveyor at a Cincinnati slaughterhouse in 1870, at Ford for automotive production in 1913) and to new forms of industrial organization. We mastered the control of physical materials and products.

In the second half of the twentieth century, the development of microelectronics, digital technology, and computers ushered in the third industrial revolution, which allowed automated control of industrial production and revolutionized data processing in offices (computers, laptops) as well as in private environments (computers, mobile phones, and game consoles). We mastered the digital space.

All these developments, enabled by the emerging technologies of their period, were implemented to simplify industrial production and allowed new and cheaper products. For example, the textile industry started with the first revolution and allowed everybody to afford clothing. However, multiple professions became unnecessary and working conditions challenging. In the long run, that resulted in the creation of trade unions, better and safer working conditions, more jobs, shorter working hours, longer life expectancy, and a higher living standard for everybody. The second and third revolutions helped to build further industries, made more products affordable (or enabled them in the first place, as with the computer), and made some professions and certain product categories obsolete; but in the long run they improved working and living conditions, created jobs, and raised living standards to the point that a 40-h work week and a life expectancy of 80 years are nowadays considered normal in developed countries. We mastered the control of physical materials and products in the second revolution and the digital space in the third.

Now, new developments in the digital domain, together with connectivity to the physical world through sensors, are bringing in the fourth industrial revolution, in which we are beginning to harness the potential of digital-physical integration. Technology is enabling new products and devices that are changing and presumably simplifying everybody's life, for example web mapping tools (Google Maps), self-driving vacuum cleaners or cars, intelligent virtual assistants (Amazon's Alexa), cryptocurrencies (Bitcoin), or ridesharing and delivery companies (Uber, Grubhub). All these new products are outcomes of the ongoing fourth industrial revolution.

A good example of digital-physical integration is a self-driving car. The car gathers data from multiple cameras and sensors to determine its position, velocity, and separation from other cars. It uses the data in real time to take physical actions with the intent of reaching the destination without collision or discomfort. The technology requirements are immense: high processing speed and serious networking across sensors, computers, power, and transmission. By choosing open standardized interfaces, the car manufacturer needs to implement the interface only once and can afterwards use any compliant sensor. With semantic interoperability, the car knows that a given sensor is measuring a distance and that it is located at the front of the car. In addition, the car can get maps and traffic conditions from web mapping tools, and it receives information from the surrounding cars of various manufacturers. A similar change is happening in industrial manufacturing and maintenance.
Manufacturing shops are installing sensors to monitor production, collecting data from all kinds of manufacturing and handling machines, and connecting Enterprise Resource Planning and Manufacturing Execution Systems to simplify, enhance, and secure industrial production, to streamline supply chains, and to allow new, cheaper, and safer products. In addition, there is a growing desire to have those cyber-physical systems learn from experience, adapt to variations, and make select decisions independently. These concepts extend into inspection systems and form the focus of the next two sections.

Even though, from a hardware standpoint, the fourth industrial revolution uses the technical principles of the third, it leads to a completely new transparency of information through the informatization, digitalization, and networking of all machines, equipment, sensors, and people in production and operation. Industry 4.0 enables feedback and feedforward loops to be established in production, trends to be determined through data analysis, and a better overview to be gained through visualization. To deliver on the promise of effective connectivity, we need open standardized interfaces with semantic interoperability between all devices in the industry.

To drive those developments, the term Industrie 4.0 was introduced in 2011 [1]. Within a very short time, especially in Germany, many projects and groups were created with the aim of standardizing the development, such as the Platform Industrie 4.0 and the International Data Spaces Association (IDSA); without them, the fourth revolution cannot function. Similarly, the Industrial Internet Consortium (IIC) was established in the USA in 2014, working on IIoT standards. Outside Germany, the term in use is Industry 4.0; it is here to stay and will radically change our lifestyle [2–4].

A point to appreciate is that the first three industrial revolutions were declared by historians after the changes had been accepted by society and professionals. The fourth, on the other hand, carries the label "4.0" from its very introduction, because the community recognized its onset soon enough to prepare for and steer it. What is driving this change?
Technologies Driving Industry 4.0

The list of digital technologies is growing rapidly, and everyone has a personal list of top technologies that are part of Industry 4.0; which ones are more important depends upon the application. Given the diversity typical of the NDE domain, most of them will be discussed briefly here. This enables a later discussion of whether these technologies add value to the outcome of NDE, and whether they even create new use cases. The portfolio of technologies discussed [5–7] begins with a differentiation between digitization, digitalization, and digital transformation. It continues with meaningful data collection (digital twin, digital thread, the Industrial Internet of Things (IIoT), and semantic interoperability to enable machine readability) and with technologies enabling new ways of data transfer (5G), revision-safe storage (blockchain), computing (cloud, AI, big data, mobile devices, and quantum computers), and visualization (XR). Finally, multiple technologies (additive manufacturing, automation, simulation, reconstruction, and digitization) enabling automation, data processing, and purposeful application are discussed in a new light within the fourth revolution.
Digitization, Digitalization, Digital Transformation, and Informatization

Digitization is the transition from analog to digital, and digitalization is the process of using digitized information to simplify specific operations [2, 8] (▶ Chap. 5, "Digitization, Digitalization, and Digital Transformation"). Digital transformation uses digital infrastructures and applications to exploit new business models and value-added chains (automated communication between different apps from different companies) and therefore requires a change in thought processes. Digital transformation requires collaboration for an improved digital customer experience. Informatization is the process by which information technologies, such as the World Wide Web and other communication technologies, have transformed economic and social relations to such an extent that cultural and economic barriers are minimized [9]. Digitization is the core of the third revolution, digitalization marks the transition to the fourth revolution, and digital transformation is the core of the fourth revolution.

Digital Twin

A digital twin must be treated as a concept. It provides value on three fronts: (1) it encapsulates all relevant data of an asset or a component (or, respectively, uses the data stored in multiple computer systems/databases, exploiting their semantic interoperability, accessed through the IIoT); (2) it enables the analysis/simulation of the asset usage based on the stored data; and (3) it enables users to visualize the data and the analysis/simulation results, or it leads to certain automated actions.

For example, a digital twin of a human would encapsulate physical data like dimensions (weight, length, ...), financial aspects, connections (friends, colleagues), eating and drinking preferences, health history, and so on. Social media sites (like Facebook or LinkedIn), user accounts (like Google or Apple), health insurance records, or governmental records can individually be considered partial digital twins. With simulation tools added, the behavior or the lifetime of a human can be predicted by those digital twins. This shows the value of data and the importance of data security and data sovereignty. One of the first implementations of a digital twin by an independent body is the asset administration shell of the Platform Industrie 4.0 [2, 3].

Digital twins should be differentiated by the type of asset they represent. These types include production facilities, production equipment, assembled products, components, inspection systems, devices, sensors, and operators. Digital twins can be layered and allow inheritance. For example, a digital twin of a production facility can contain the digital twins of all the production equipment and inspection systems within the facility, and the digital twin of an inspection system can contain the digital twins of all its sources, sensors, detectors, and manipulators.

Computer-Aided Design (CAD) systems could be viewed as quite simple digital twins, as they integrate the design data of components, simulations, and visualization processes. However, a CAD model is, firstly, a digital twin of an early development state of a component, as it does not contain, for example, any operational data; secondly, it contains only the dimensional design data. This indicates that a complete digital twin might be difficult to achieve. For NDT purposes, two main digital twin types must be differentiated: (1) the digital twin of the component to be inspected, and (2) the digital twin of the inspection system/equipment. The digital twin of the component to be inspected could contain the product model, usage space, performance parameters, inspection and maintenance records, etc., and will help the component owner improve production, design, and maintenance. The digital twin of the inspection system helps to improve the inspection process itself.
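To make the layering and inheritance described above concrete, the following minimal Python sketch models an inspection-system twin that contains the twins of its probe and manipulator. It is purely illustrative: the class, identifiers, and properties are hypothetical and are not taken from the asset administration shell or any other standard.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DigitalTwin:
    """Generic twin: an identity plus a store of asset data and child twins."""
    asset_id: str
    properties: dict = field(default_factory=dict)
    children: List["DigitalTwin"] = field(default_factory=list)

    def find(self, asset_id: str) -> Optional["DigitalTwin"]:
        """Recursively locate a contained twin by its asset id."""
        if self.asset_id == asset_id:
            return self
        for child in self.children:
            hit = child.find(asset_id)
            if hit is not None:
                return hit
        return None

# An inspection-system twin containing the twins of its probe and manipulator
ut_probe = DigitalTwin("probe-07", {"type": "ultrasonic", "frequency_MHz": 5.0})
robot_arm = DigitalTwin("arm-02", {"axes": 6})
inspection_system = DigitalTwin("ut-cell-01", {"site": "hall-3"},
                                children=[ut_probe, robot_arm])

print(inspection_system.find("probe-07").properties)
```

A real implementation would additionally attach semantic references to each property, as discussed under semantic interoperability below.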
Digital Thread

A digital thread, proposed by the US Department of Defense [10], connects the data from the planning and design phase of an asset, through production and service, until the asset goes out of service, allowing traceability of every decision and its implications. This means digitalization and traceability of a product "from cradle to grave." It is more than a life cycle record, as it also includes all the information from the planning and design phase. The digital thread can be seen as a successor to Product-Lifecycle-Management (PLM), enabling the realization of the original dream behind PLM systems.

IIoT: Industrial Internet of Things and the Infrastructure

The Industrial Internet of Things (IIoT) connects assets with each other, with the digital twin, with the digital thread, with database systems, and with the cloud. For these connections it uses open standard communication interfaces like OPC UA, WebServices, or oneM2M, and core gateways to link the core communication standards. The IIoT requires semantic interoperability. The development of each of the standard communication interfaces is driven by organizations such as the OPC Foundation. The Industrial Internet Consortium (IIC) requires such core gateways in its standards, and the International Data Spaces Association (IDSA) is in the process of implementing these gateways. The so-called IDS connectors connect all the communication standards with digital twins, the cloud, and data markets while guaranteeing data sovereignty [2]. Similar concepts, such as a Building Internet of Things or an Infrastructure Internet of Things, are conceivable for every industry and use case.

Within the inspection world, the IIoT makes, for example, remote inspection, collaborative decision-making, inspection workflows, archiving, and integrated inspection system design a reality. Moreover, it is the precondition for integrating NDE as one of the key data sources into Industry 4.0.

Semantic Interoperability, Ontologies

To effectively deal with communication standards, digital twins, digital threads, or data types/formats, one key aspect is to give data a meaning, enabling unambiguous interpretation and shared understanding; this is achieved by semantic interoperability and by ontologies. Syntactic interoperability, the necessary basis for semantic interoperability, converts data from a system-dependent to a system-independent format, enabling data exchange between different systems. However, for a computer to understand data (just as for a human), the computer needs to know the unit of the submitted number; it needs to know whether the number refers to a height, a length, a weight, a time, etc.; and it needs to know the connections between the data objects, for example, that a car has a speedometer and that the speedometer shows a value of 136 km/h. This is enabled by semantic interoperability, by not only storing a value but also identifying its connections and giving it a meaning. Semantic interoperability converts data into information and enables different inspection equipment, asset digital twins, and data analytical tools to communicate and understand each other, for a meaningful inspection outcome.
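The speedometer example can be sketched in a few lines of Python. The contrast below between a bare number and a semantically annotated record only illustrates the principle; the vocabulary URI and field names are invented placeholders, not part of OPC UA or any published ontology.

```python
# A bare, syntactically valid value: exchangeable, but meaningless on its own
raw = 136

# The same value with semantic annotations: unit, quantity kind, and the
# relation to its parent object, so a receiving system can interpret it.
annotated = {
    "@type": "Measurement",
    "quantityKind": "https://example.org/ontology/Speed",  # hypothetical URI
    "unit": "km/h",
    "value": 136,
    "measuredBy": {"@type": "Speedometer", "partOf": "car-4711"},
}

def to_mps(measurement: dict) -> float:
    """A consumer can act on the data because the unit is machine-readable."""
    conversions = {"km/h": 1 / 3.6, "m/s": 1.0}
    return measurement["value"] * conversions[measurement["unit"]]

print(round(to_mps(annotated), 2))  # 37.78 (m/s)
```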
Industry 4.0 Data Processing

Some examples of Industry 4.0 data processing, also known as big data analysis, are digital engineering, feedback loops, trend prediction, probabilistic lifing, predictive maintenance, behavioral analytics, risk modeling, and reliability engineering. All these tools are intended to improve design, production, maintenance, etc., and are based on data-derived knowledge. This requires the conversion of data from the IIoT, digital twin, digital thread, computer systems, databases, etc. into information by using semantic interoperability, and the conversion of that information into knowledge by statistical analysis or AI.

5G

In layman's terms, 5G is viewed as the successor to 4G for faster mobile data exchange. This part of 5G is called eMBB (Enhanced Mobile Broadband). However, it is the combination of speed, range, and device density that makes 5G one of the cornerstones of Industry 4.0. 5G brings Ultra-Reliable Low Latency Communications (URLLC), which allows robust real-time data connections with latencies in the millisecond range, even for devices moving faster than 500 km/h. Massive Machine Type Communications (mMTC) allows the connection of a high density of devices (1 million/km2) and inexpensive, low-complexity mobile implementations. 5G provides the necessary bandwidth for high-speed remote inspection and large-scale implementation, even for inexpensive inspection equipment, and could lead to the possibility of every inspection device being accessed through IP connections.

Blockchain

Blockchains present a way to assure that data is not changed after being stored. The security is enabled by a chain of data blocks, where every block consists not only of new data but also of a hash representing the data of the previous block (which in turn contains a hash of block N - 2, and so on). This makes it difficult to change earlier blocks, as the hashes in all the following blocks would have to be recalculated. To further enhance tamper resistance, a "nonce" (number used once) is added to each block. A nonce is the result of a restricting rule for the hash of the following block: there is, for example, a rule that the hash of block N must be smaller than a certain number, and the nonce of block N - 1 (which is included in the calculation of the hash of block N) must be changed repeatedly until the hash of block N fulfills the given rule. The technology emerged out of the need to secure financial transaction records or component files. For cryptocurrencies, the rules for the nonce are made increasingly difficult to fulfill, ensuring the value of the currency through the difficulty of finding a fitting nonce.

In quality assurance and maintenance, it is important to guarantee that results obtained and reported cannot be changed in the future, or that changes are tracked. This can be guaranteed by blockchains.
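The hash chaining and nonce search described above can be sketched with Python's standard library alone. The sketch uses the common simplification in which each block's own nonce must make that block's hash satisfy the restricting rule (here, a fixed number of leading zeros); the inspection-report payloads are invented examples.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """SHA-256 over the block's canonical JSON representation."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine_block(data: str, prev_hash: str, difficulty: int = 4) -> dict:
    """Search for a nonce so the block's hash meets the restricting rule."""
    nonce = 0
    while True:
        block = {"data": data, "prev_hash": prev_hash, "nonce": nonce}
        h = block_hash(block)
        if h.startswith("0" * difficulty):  # rule: hash below a threshold
            block["hash"] = h
            return block
        nonce += 1

# Chain two inspection records; altering the first would invalidate the second,
# because the second block stores the first block's hash.
genesis = mine_block("UT report 0815: no recordable indications", "0" * 64)
second = mine_block("UT report 0816: indication at 34 mm depth", genesis["hash"])
print(second["prev_hash"] == genesis["hash"])  # True
```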
Cloud

Another technology connected to Industry 4.0 is cloud computing, which enables access to an IT infrastructure over the internet from any computing device around the globe and offers data storage, data processing, and application software as a service. For reasonable use of clouds, semantic interoperability of the data and of the communication interfaces is key, so that the standard data processing and visualization software available in the cloud can understand the data.

Artificial Intelligence (AI)

Artificial intelligence is defined as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation" [11]. In fact, as Fig. 1 shows, artificial intelligence algorithms are a subset of algorithms that can mimic human behavior. Machine learning algorithms are AI algorithms with the ability to learn without being explicitly programmed. Deep learning algorithms are machine learning algorithms in which artificial neural networks adapt and learn from vast amounts of data.

Fig. 1 Visualization of algorithms vs. artificial intelligence vs. machine learning vs. deep learning. (Author: Johannes Vrana, Vrana GmbH, License: CC BY-ND 4.0)

An intermediate step towards AI is intelligence augmentation, whereby the algorithm helps the human with information and know-how to assist in decision-making rather than replacing the human's judgement.

AI technology can help improve inspection reliability through comprehensive interpretation of large volumes of inspection data, which is not humanly possible. However, the implementation of AI requires a high skill set, both in the inspection process and in the machine learning algorithms, to ensure that the system output is dependable. Otherwise, there is a risk of capturing and duplicating human incompetency. AI technology can also help to improve Industry 4.0 data processing.
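As an illustration of the deep learning branch in Fig. 1, the sketch below defines a deliberately small convolutional network that could classify 64 x 64 pixel grayscale inspection image patches as "defect" or "no defect." It assumes the PyTorch library; the architecture, patch size, and classes are arbitrary choices for illustration, and a dependable system would additionally need curated training data and the validation effort discussed above.

```python
import torch
import torch.nn as nn

class DefectClassifier(nn.Module):
    """A small CNN mapping a grayscale patch to class scores (illustrative)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))     # logits per class

model = DefectClassifier()
patch = torch.randn(1, 1, 64, 64)              # one untrained example patch
probs = torch.softmax(model(patch), dim=1)     # "defect" / "no defect" scores
print(probs)
```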
Big Data

Big data is a field treating the multiple challenges resulting from the vast amounts and different types of data collected by the IIoT through machine-to-machine and human-computer interfaces. These challenges include capturing, storing, analyzing, searching, sharing, transferring, and visualizing data that cannot be processed by traditional software. The collection of data, and its conversion into information and knowledge, is the main idea of Industry 4.0 data processing. With the collection of larger amounts of data, the potential for enhanced knowledge is created, which could lead to better design, production, and maintenance. This shows the criticality of solving the big data challenges.

Within NDE, the ability to manage big data can be especially useful over the life span of an asset or a fleet, as the ways, forms, and volume of data continuously evolve. For nondestructive materials characterization, sophisticated calibration procedures are usually required to correlate measured physical properties with target properties such as hardness or yield strength. Big data might provide the opportunity to improve and simplify these calibration procedures.

Mobile Devices, Handheld and Small-Size Computers

Handheld computers, such as the widely available tablet computers or cellphones, provide the user with an impressively high level of computing power and a connection to computer networks in almost any location. Moreover, they already implement a variety of sensors, including cameras, microphones, vibration sensors, accelerometers, and GPS devices. Due to their connectivity, integrated sensors, easy-to-use software, and the convenience of seamlessly integrating other wireless or wire-based sensors, these devices are ideal for many Industry 4.0 scenarios. Similarly, inexpensive small-size computers, like the Raspberry Pi or Arduino, or micro-size computers, enable every device to be connected and every device to become "smart."

Quantum Computers

Quantum computers use multiple entangled qubits (53 in [12]) for the purpose of calculations. The possibilities arising from quantum computers are astounding, but the working principle is hard to grasp. It is based on two quantum mechanical phenomena: superposition [13] and entanglement [14]. These phenomena can only be observed in the microscopic world (on the scale of an atom) and not in the macroscopic world. This is why they seem to contradict our everyday experience, but they can be used to build computing devices that work in a completely different way than our current computers. Back in the early days, Schrödinger, one of the fathers of quantum mechanics, did not accept the validity of superposition; this is why he wrote the paper that includes the famous thought experiment on the cat [13]. Eventually, Schrödinger had to accept that this thought experiment was misleading and accepted superposition.

Even though the working principle is hard to understand and beyond the scope of this chapter, the possibilities of quantum computers are so immense that they could become the most important element of the fourth revolution. Unlike on classical computers, the results of all possible variations of the input parameters are calculated at the same time. This enables an exponential speedup of certain algorithms compared to classical computers. Quantum computers are now moving out of research into first applications. They will not replace classical computers, but they will be a powerful add-on for certain computational challenges. Currently, quantum computing is not considered a prime Industry 4.0 technology, but this could change fast, particularly as quantum computers could play an essential role in solving some of the big data challenges and enabling game-changing artificial intelligence algorithms.
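For very small numbers of qubits, superposition and entanglement can be simulated on a classical computer, which makes the principle tangible. The NumPy sketch below prepares a two-qubit Bell state: a Hadamard gate puts the first qubit into superposition, and a CNOT gate entangles it with the second, so that a measurement can only yield 00 or 11, each with probability 0.5.

```python
import numpy as np

# Single-qubit gates and the two-qubit CNOT as matrices
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: creates superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # basis order |00>,|01>,|10>,|11>
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, put qubit 0 into superposition, then entangle via CNOT
state = np.array([1, 0, 0, 0], dtype=complex)  # |00>
state = np.kron(H, I) @ state                  # (|00> + |10>) / sqrt(2)
state = CNOT @ state                           # (|00> + |11>) / sqrt(2): Bell state

# Measurement probabilities for the outcomes 00, 01, 10, 11
print(np.abs(state) ** 2)                      # [0.5, 0.0, 0.0, 0.5]
```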
Extended Reality (XR)
Virtual Reality (VR) utilizes headsets to display a computer-generated 3D image (for example, computer games), replacing normal vision. Augmented Reality (AR) presents an interactive experience of a real-world environment enhanced by computer-generated images. Mixed Reality (MR) combines VR, AR, and the physical world. Finally, the term XR (Extended Reality) is used to summarize all forms of digital enhancements to visualize data overlays in the physical world. XR brings multiple improvement opportunities, such as superior inspector training, real-time work instructions, the ability to visualize anomalies through virtual overlays on the real asset, visualization of a simulated future, and remote expert consulting.
Additive Manufacturing
Additive manufacturing or 3D printing encompasses multiple new manufacturing technologies for various materials which add material layer by layer to shape a 3D object. This contrasts with classical subtractive manufacturing methods that remove material from a block of raw material to achieve the shape. Additive manufacturing has multiple benefits, in particular lot-size-one production and the creation of internal structures and embedded sensors, which would be impossible or immensely difficult to manufacture subtractively. 3D printing makes the nondestructive evaluation of such components challenging, as discussed later. Moreover, for small lot sizes the state-of-the-art procedures for reliability assessments need to be reconsidered.
Automation, Robotics, and Drones
Automation, robots, and drones evolved during the third revolution [2, 3]. However, their ability to interpret and adapt to the environment makes them a part of Industry 4.0. The ongoing activities could be considered enhanced automation, where the machine can choose to scan and capture data based on observations rather than on pre-programming. For example, collaborative robots (cobots) allow a shared working space for humans and robots in production environments by assuring the safety of the humans through, for example, sensors. This makes the typical protective fences unnecessary and allows direct interaction between humans and robots. Automation and AI technology can keep inspectors out of harm's way when work involves confined spaces, hard-to-reach areas, heights and depths, radiation exposure, or extreme weather.
Simulation and Reconstruction
Simulation and reconstruction from the third revolution take on a new meaning in the fourth by enabling real-time control of physical actions based on predictive analysis. This provides both a capability and a purpose to digital twins and digital threads. In the asset sustainment space, simulation technology enables optimization of the inspection program tied to parameters of choice – risk, cost, and downtime.
So, What Is Industry 4.0?
For all the technologies described above, somebody might argue that none of them is new; and there is an element of truth to such statements. Industry 4.0 is not a single technology; it is a suite of cyber-physical technologies. The fourth revolution is not a discrete event; it is a phase over which this suite of cyber-physical technologies is coming together to change the way humans work and live, produce and consume, learn and stay healthy, and other things along the way, like NDE. Overall, what constitutes Industry 4.0 is not the emerging technologies themselves, but their integration for a purpose that was not achievable previously. The increase in communication bandwidth and flexibility afforded through 5G, the ability to manage terabytes of data, computational processing speed and capacity, mobile devices, location services, and ease of programming have all enabled such integration. In the past, strict customer retention was key for suppliers in all industries, but new business models are arising to enable the economic use of data through collaboration. This means the reduction of burdens, of proprietary data formats, and of proprietary interfaces. It also means collaboration of different players around the globe to work for the greater good. This will eventually lead to a completely new market – a market for data – and a market for the purposeful application of data.
Recommended Terminology
The term Industrie 4.0 was coined in Germany and continues to dominate in that region and parts of the EU. Globally, the terms Industry 4.0 or the fourth revolution are popular.
The Revolutions in NDE Domain
The revolutions in manufacturing factories, chemical plants, and transportation systems came with unfortunate incidents and fatal accidents. The engineering community rose to the challenge of quality, safety, and reliability through non-destructive inspections, testing, and evaluations (NDI, NDT, and NDE). This important domain, serving so many industries, has also seen its share of revolutions, aligned with the changing needs of the industrial revolutions and enabled by a similar suite of technologies. Although NDE can only exist within other sectors, it has certainly grown to be economically strong enough to behave like an industry in itself, at least up until the third industrial revolution. Going into the fourth, it is becoming an integral part of manufacturing and operational systems, and it begs the engineering communities to come closer to purposeful applications, from design and manufacturing to safe in-service use. We can even argue that the sensory systems developed by NDE and the digital processing of Industry 3.0 are the primary contributors to the emergence of Industry 4.0. Let us look at how we got here. For centuries, humans have taken care of their safety using the five basic senses – touch, sight, hearing, smell, and taste. This means that NDE pre-dates the first industrial revolution, for sure. We will refrain from calling it NDE 0. Over the last 250 years, safety and quality assurance have evolved into a planned and instrumented approach to meet the needs of the industrial revolutions [2, 3, 5, 15]. The exact timelines of the various revolutions in NDE and industry will not have a one-to-one correspondence, because it takes time to realize the need, validate and mature the technologies, and adopt them in the eco-system. In addition, certain sub-areas may have seen more than four major changes, but we chose to stick to the term 4.0 to represent the digital-physical integration (Fig. 2).
Brief History
The first inspection revolution was based on human sensory perception, just like the first industrial revolution was based on handcraft developed over the millennia. Through their senses, people have been able to "test" objects for thousands of years. They looked at components and joints, smelled, felt, tasted, and knocked on them to learn something about their condition and interior. The birth of non-destructive inspection took place on the one hand through the introduction of tools that sharpened the human senses, and on the other hand through standardized procedures. Tools such as lenses, dyes, or stethoscopes improved the detection capabilities.
Fig. 2 Visualization of the four industrial and NDE revolutions. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
Procedures made the outcome of the inspection comparable over time. At the same time, industrialization also made it necessary to expand quality assurance measures (even though the term quality assurance was created later). The second revolution in NDE, like the second revolution in industry, is characterized by the use of physical and chemical knowledge and of electricity. The transformation of electromagnetic or acoustic waves, which lie outside the range of human perception, into signals that can be interpreted by humans resulted in a "look" into the components or a better visualization of material inhomogeneities at or close to the surface. In the first half of the twentieth century, X-ray and gamma testing was dominant; UT had its breakthrough in the 1950s, for instance with transistor technology that resulted in lightweight and fast electronics. The first detectors for infrared and terahertz radiation were invented, and the first eddy current devices became available. However, the breakthrough in those methods came during the third revolution with digitization and simplified imaging of signals. The third revolution in NDE parallels the advent of microelectronics, digital technology, and computers. Digital inspection equipment, such as X-ray detectors, digital ultrasonic and eddy current equipment, and digital cameras, became a part of the system. Robotics came in to automate processes, making them convenient, fast, and repeatable. The digital technologies offered another leap in managing inspection data acquisition, storage, processing, 2D and 3D imaging, interpretation, and communication. Data processing and sharing became the norm, and data security and integrity came in as a new challenge. The fourth revolution in NDE integrates the digital techniques (from the third) and the physical methods of interrogating materials (from the second) in a closed-loop manner, transforming human intervention and enhancing inspection performance. Within the context of the physical-digital-physical loop of NDE 4.0, digital technologies and physical methods may continue to evolve independently, interdependently, or concurrently. The real value is in the concurrent design of an inspection through the application of digital twins and digital threads. This provides the ability to capture
and leverage data right from materials and manufacturing processes to usage and in-service maintenance. The data captured across multiple assets can be used to optimize predictive and prescriptive maintenance, repairs, and overhauls over the lifetime of an asset. The relevant data can be fed back to the OEM for design improvements. NDE 4.0 also serves the emerging trends in custom manufacturing. Remote NDE can keep the inspector out of harm's way, and integration by "telepresence" can bring additional specialists into the decision process from anywhere in the world quickly and affordably. There are multiple facets and significant changes in various elements of the NDE process as we evolved from the pre-industrial era into present-day NDE 4.0. Table 1 below helps understand those trends and changes to provide clarity on NDE 4.0, along with an opportunity to self-assess your current state.
So, What Is NDE 4.0?
We have covered a lot of ground in the historical brief and the table above. Much has been discussed by various industry leaders in the past 5 years, bringing us to this point of a handbook. Multiple papers have been published on NDE 4.0 [2–7, 16–19] to bring awareness to this topic. They focus on single use cases and on comparisons across the four industrial revolutions and the four revolutions in NDE [4, 20, 21]. At this point in time, we propose a comprehensive definition of NDE 4.0 covering why, what, and how, as follows –
'Cyber-physical Non-Destructive Evaluation (including testing); arising out of a confluence of Industry 4.0 digital technologies, physical inspection methods, and business models; to enhance inspection performance, integrity engineering, and decision making for safety, sustainability, and quality assurance, as well as provide relevant data to improve design, production, and maintenance.'
In short ‘Non-destructive evaluation through confluence of digital technologies and physical inspection methods for safety and economic value’
As a clarification note, we would like to say that a new revolution in NDE does not imply that work on the previous ones has stopped. Physical methods are still emerging. However, a new physical method created today does not automatically qualify for NDE 4.0 just because we are in the fourth industrial revolution. For example, a better fluorescent dye penetrant discovered today will be called NDE 2.0 if it is still being applied manually and the data recorded in physical ledgers. However, robotic handling of the part and digital processing of the image will take it to NDE 3.0. What makes PT NDE 4.0 would be computer vision and robotics for physical processing, digital image capture, AI for image interpretation, and data handling in the digital twin.
Table 1 Generally the higher number includes earlier things. The table contrasts the pre-industrial era and the four revolutions (Industry 1.0–4.0 in the manufacturing sector, NDE 1.0–4.0 in the in-service inspection sector) along dimensions such as motivation, opportunity, and challenges (the WHY), technology class, knowhow basis, NDE R&D, human role and the mapping of human senses to technology, NDE IP focus, training, application possibility, scheduling, workflow, anomaly characterization, data handling, modelling and simulation, analysis, decision making, validation, application philosophy, organization, business model, cost, responsibility, critical new skills, dominant human factors, and ethics concerns.

Pre-Industry: Manufacturing by muscle and handcraft. Inspection by the human senses (eye, ear, nose) and by animals for smell/taste; knowhow basis: experience; NDE R&D: empirical; analysis and decisions: gut feel, tribal knowledge, human observation and human decisions.

Industry 1.0 / NDE 1.0: Manufacturing: steam, mechanization. Motivation: safety; opportunity: detect; challenges: perception, defect visibility. Technology class: human senses plus amplification (lens, stethoscope, spray), procedures, simple tools; NDE R&D: discover existing. Senses to technology: periodic visual check (sight); periodic coin tapping (hearing); hand work (touch, motion); paper sheets and notepads (memory and organization); arithmetic in the brain (analytical cognition). Human role: purely human; training: informal, hands-on, internal, ad hoc learning from experience. Application possibility: 100% of critical parts; NDE scheduling: responsive; workflow: ad hoc calendars and instructions (like a home appliance); anomaly: suspected, qualitative; data acquisition: human reception and perception; data format: sketches, hand marking, handwriting; data security and integrity: closet/drawer. Interpretation and decision making: human observation, human decisions; validation: tribal. Application philosophy: reactive; in-service maintenance: reactive, corrective; asset manufacturing: qualitative check, experience-based sampling. Organization: tribal; business model: generally in-house; cost: cheap; responsibility: assigned to a human; dominant human factor: human sensory disability.

Industry 2.0 / NDE 2.0: Manufacturing: electricity, technology, chemistry, biology; mass production using conveyor belts; analog instruments. Motivation: reliability; opportunity: prevent; challenges: hidden anomalies, solid volume. Technology class: analog sensors, instrumented systems, waves outside human perception. Dominant methods: film X-ray, fluorescent penetrant, eddy current, magnetic particle (vision); ultrasound, acoustic emission (acoustics); touch sensors; analog chemical sensors; portable mechanical leverage (motion); structured paper ledgers; slide rule. Human role: prominent; NDE IP focus: methods; training: formal classroom training, generally internal, paper certificates. Application possibility: sample of critical parts; scheduling: simple, scheduled; workflow: actively managed and controlled; data acquisition: analog readouts; data format: photos, films; data security and integrity: lock and key. Modelling and simulation: strength-of-materials based, safe-life philosophy; data analysis and processing: diagnostics, analog processing; interpretation and decision making: human interpretation, human decisions; validation: capability demos. Application philosophy: safe-life; in-service maintenance: preventive, scheduled; asset manufacturing: quantitative check. Organization: rigid hierarchical organization, functional groups; business model: inspection equipment sales; cost: expensive but worth it; critical new skills: damage mechanics; dominant human factors: training and competency.

Industry 3.0 / NDE 3.0: Manufacturing: digital, computers, automation, product families using a common conveyor. Motivation: cost optimization; opportunity: monitor and predict; challenges: parts volume, repeatability, reproducibility. Technology class: microelectronics, digital signal processing and imaging, robotics and automation, digital instruments; knowhow basis: NDE technology, computer science. Equipment: digital X-ray, gamma ray, infrared cameras, CT scans; digital UT, laser UT, UT phased array, TFM or SAFT; digital chemical sensors, X-ray spectroscopy; atomic force microscopy; computers, tablets, personal devices; relational databases; calculators, computer logic and algorithms; remote-control robots. Human role: in the loop; NDE IP focus: equipment; training: classroom and online training, training schools and services, formal certification structure. Application possibility: 100% of significant parts; scheduling: adaptive and flexible control, remote possibility; workflow: emails, ERP-enabled; anomaly: quantitative, trend; data acquisition: digital readouts; data format: proprietary digital formats (digital photo, video, waveforms); data security and integrity: passwords, encryption, backups. Modelling and simulation: solid mechanics, finite elements, micromechanics of materials, wave propagation models, system modelling and simulation; data analysis and processing: digital signal processing, modelling, reconstruction, health monitoring, prognostics; interpretation and decision making: machine interpretation, human decisions; validation: formal POD studies. Application philosophy: fail-safe, damage tolerance; in-service maintenance: predictive planning, condition-based monitoring; asset manufacturing: quantitative assurance, manufacturing optimization; sustainability: awareness, life extension for life-cycle cost reduction. Organization: value stream, supply chain; business model: inspection services, equipment sales/lease; cost: cost-neutral over an extended lifetime; responsibility: tracking of human actions; critical new skills: computer programming; dominant human factors: physical discomfort, mental distraction; ethics concerns: workflow, interpretation, reporting, data entry.

Industry 4.0 / NDE 4.0: Manufacturing: interconnected cyber-physical systems, machine intelligence, mass customization, autonomous machines; NDE for product and process development. Motivation: economic returns from data; opportunity: predict and prescribe; challenges: manufacturing process control, mass customization (additive manufacturing). Technology class: digital twin, IIoT, autonomous intelligent cyber-physical integration, microelectromechanical smart sensors and actuators; knowhow basis: NDE science, data science, cyber-physical technology. Equipment and methods: digital twin/thread, computer vision, volume imaging, 3D topography, AI interpretation, VR/AR visualization, embedded sensors, lab-on-chip; cloud and semantic interoperability; quantum computers, AI/ML/DL; autonomous cobots. Human role: out of the loop; NDE IP focus: models and data science; training: AR/VR-based human training, humans training the AI/ML, blockchain training/certification records, certification to be determined. Application possibility: 100%; scheduling: prescriptive, autonomous, learning and adapting; workflow: intelligent workflow systems, IIoT-enabled prognostics; data acquisition: digital, direct to cloud; data format: data transparency; data security and integrity: sovereignty, traceability (blockchain). Modelling and simulation: multiscale material modelling, NDE physics-based modelling, component and system modelling, all tied to digital twins, learning models in the cyber-physical loop; data analysis and processing: prognostics using the digital twin/thread, learning-machine signal processing; interpretation and decision making: machine interpretation, AI-assisted human decisions; validation: to be developed. Application philosophy: safety assurance through continuous monitoring; in-service maintenance: predictive, prescriptive, adaptive, with the digital thread getting longer; asset manufacturing: precision manufacturing control, custom manufacturing control (3D printing), the digital twin is born; sustainability: one of the design objectives, enabling a circular economy, life extension for waste reduction (repurpose, reuse, recycle); other applications: product design, process improvement, microelectronics development. Organization: eco-system (learning, adaptive, resilient); business model: a variety of value propositions (assurance from digital twins; equipment + service + data + analysis; "Uberize," "Amazonize," ...); cost: possibility of net positive through data monetization; responsibility: traceability to a competent human(s); critical new skills: data scientists; dominant human factors: technology over-reliance, human-machine co-working loops (AI/robotics/VR); ethics concerns: bias in AI, data ownership.
Recommended Terminology – NDE 4.0
In the manufacturing and maintenance sectors across most industries, three terms are popular – NDI, NDT, and NDE. The community intrinsically understands the difference but tends to use them interchangeably without any significant impact on the outcome. Many of our fellow community members prefer the term NDT out of emotional attachment, certification standards, and the names of professional bodies that have existed since the second revolution – just like we still use the term 'cell phone' while the manufacturers like to call it a 'mobile device' because it does so much more than make a phone call. Once we grasp the concept of the fourth revolution and accept that a closed-loop cyber-physical system is at its core, and appreciate that machines will be inspecting, testing, and evaluating parts, we should be able to accept that NDI/NDT is a subset of NDE 4.0. The terms NDI 4.0 or NDT 4.0 may not correctly reflect the intent of the fourth revolution in inspection. The Editors in Chief of this handbook and authors of this chapter recommend the use of the term NDE 4.0 and discourage the use of the terms NDT 4.0 or NDI 4.0. The terms NDT/NDI can continue to reflect the inspection systems of the third generation or before.
Drivers of the Current Revolution in NDE
NDE 4.0 has the same driver as the fourth industrial revolution – the integration of digital tools and physical methods. The technology push is coming from the portfolio of digital technologies. Technology developers, R&D centers, and universities are bringing new ways of digitalizing specific steps in the NDE process, with a promise of overall efficiency and reliability. These technologies were discussed in the previous section. Being integral to other industries, NDE 4.0 can become the greatest change for the better in non-destructive evaluation (see Fig. 3), turning the entire business upside down.
Fig. 3 The three components of NDE 4.0. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
First, the Industry 4.0 emerging digital technologies can be used to
enhance NDE capability ("Industry 4.0 for NDE"). Second, a statistical analysis of NDE data provides insight into product performance and reliability, becoming a valuable data source for Industry 4.0 to continuously improve the product design ("NDE for Industry 4.0") [5]. Third, immersive training experiences, remote operation, intelligence augmentation, and data automation can enhance the NDE value proposition in terms of inspector safety and human performance ("Human Considerations"). This is reflected in Fig. 3, and various use cases are discussed at length in the subsequent section.
NDE 4.0 Use Cases and Value Proposition
In the spirit of design thinking [5], various value propositions or use cases can be classified in two broad categories: (A) Industry 4.0 for NDE, and (B) NDE for Industry 4.0. The following presents an explicit look at them, as a synthesis of various implicit remarks from surveys and research on this subject, eventually building up to the purpose of NDE 4.0.
Industry 4.0 for NDE
Enhancing NDE Capability and Reliability through Emerging Technologies
To begin with, most digital systems offer a clear advantage over traditional systems in terms of accuracy and speed. However, a significant contribution of a cyber-physical NDE system (NDE 4.0) stems from the better control or partial elimination of human factors in the probability of detection. Having developed techniques to quantify human factors in NDE-POD studies [22, 23], the authors have a firsthand appreciation of the value of NDE 4.0 in the context of system performance. This leads to a more reliable inspection system, i.e., a better Probability of Detection (POD) and a more consistent POD from inspection to inspection.
Fig. 4 Industry 4.0 capability has the potential to provide a steeper asymptote to a POD of 1.0 as compared to intrinsic capability or system reliability
In Fig. 4, Virkkunen et al. [24] have shown that, using sophisticated data augmentation, modern deep learning
networks can be trained to achieve superhuman performance by a significant margin in ultrasonic inspections. All this adds up to a dependable NDE, permitting optimization of inspection programs, saving time and money, and improving asset availability.
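As a rough illustration of how such POD curves are quantified in practice, the following minimal Python sketch fits a hit/miss POD model of the classic log-odds form to hypothetical inspection data. The crack sizes, outcomes, and the use of scikit-learn are illustrative assumptions, not a reproduction of the cited studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical hit/miss data: crack sizes (mm) and whether each was detected.
# In practice such data comes from a designed POD study.
a = np.array([0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1.2, 1.6, 2.0, 2.5])
hit = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

# Classic log-odds model: POD(a) follows a logistic curve in log(size).
model = LogisticRegression().fit(np.log(a).reshape(-1, 1), hit)

def pod(size_mm):
    """Probability of detection predicted for a given crack size."""
    return model.predict_proba(np.log([[size_mm]]))[0, 1]

# a90: smallest size detected with 90 % probability (coarse grid search).
grid = np.linspace(0.2, 3.0, 281)
a90 = grid[np.argmax([pod(s) >= 0.9 for s in grid])]
print(f"POD(1.0 mm) = {pod(1.0):.2f}, a90 about {a90:.2f} mm")
```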
Improving Efficiency and Effectiveness of Inspections Through Better Control
The entire inspection process can be made more effective and efficient through the use of digital workflow control and tracking, just like so many manufacturing processes on the shop floor or logistics in retail distribution. Multiple aspects have been identified. Component traceability requires ensuring that the correct component was inspected, that the documentation is stored in a revision-safe manner, and that the results can easily be retrieved. Revision-safe data storage can be implemented using blockchains, and component identification by digital component files and electronic component identifiers. Digital workflows can enable full value-stream efficiency: task allocation from the customer to the inspector, with the results transferred back to the customer. Those data transfers are performed using IIoT technologies and interfaces (instead of the Excel files or PDFs sent by email typical of the third revolution). Digital commissioning goes one step further by also transferring the order-related information using standard IIoT interfaces. Finally, with the implementation of digital supply chain processes both to customers and suppliers using standard interfaces, a complete digital workflow can be established, enabling NDE Processes 4.0. NDE result statistics from production and in-service inspections, feedback loops from destructive tests, component acceptance, and EOL component testing help quality assurance personnel to get a better appreciation of the value of the inspections. This data can also provide insight into system performance and reliability. Usage statistics or inspection performance evaluations can show the need for a certain inspection on the one hand and identify human factor influences on the other. This should help reduce operator dependence, inspection inconsistencies, and the need for additional training or process change. Such evaluations can also be used to monitor training and experience hours for personnel evaluation, qualification, and certification. If required and permitted by local law, they can even indicate the mental state of inspectors, to provide support elements such as stress or fatigue monitoring to improve their conditions. Overall, better inspection control will help prove and visualize the value of NDE, helping the industry.
Improving NDE Equipment Using Inspector and Built-in Application Feedback
The use case of improving NDE equipment using feedback was not mentioned in any of the responses to the surveys conducted by the authors [2, 5], and from several personal conversations it seems that the NDE hardware manufacturers and software developers do not yet see a big value in this use case. The idea is simple: provide data like error codes, system parameters, system status information, software
exceptions/errors, and patterns of use or misuse back to the NDE equipment developers, so they can improve the equipment, the systems, or the software. The statistical evaluation of user behavior helps improve the user interface design, training, and applicability. This feedback loop may also contribute to accelerated troubleshooting and improvements of inspection equipment as a competitive advantage.
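A hedged sketch of what such built-in feedback could look like is shown below. Every field name and value is an illustrative assumption, not an existing vendor schema, and a real implementation would have to respect the data-ownership questions raised later in this chapter.

```python
import json
import platform
import time

def build_feedback(event, error_code=None):
    """Assemble one feedback record an instrument could queue for its maker."""
    return {
        "instrument_id": "UT-FLAW-DETECTOR-SN1234",  # hypothetical identifier
        "firmware": "2.7.1",                          # assumed version string
        "event": event,                # e.g. "exception", "calibration", "boot"
        "error_code": error_code,
        "uptime_s": 52710,
        "os": platform.platform(),
        "timestamp": time.time(),
    }

# Serialized payload that would be sent over an IIoT interface.
payload = json.dumps(build_feedback("exception", error_code="E-GAIN-07"))
print(payload)
```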
Improving Inspector Safety and Inspection Support Through Remote Access
Remote support, or TeleNDE, by equipment manufacturers or other experienced inspectors/engineers can mitigate several challenges. Such remote support solutions or remote-controlled robot/drone-based equipment can be enabled by extended reality platforms and connected devices. Remote support by equipment manufacturers at the inspection location, in particular in hard-to-reach locations, could be a huge help to the inspectors and a money saver for the inspection service providers. At the same time, it is an opportunity for the manufacturer to investigate and reproduce potential issues or bugs with the equipment, if any. Remote support by other inspectors can help in inspection situations where a second opinion is needed, where an in-depth evaluation of indications identified by the inspector on location needs to be conducted, and where local (potentially inexperienced) inspection personnel must be used (for example, due to travel restrictions). Such remote support scenarios can be expanded by engineers inside and outside of NDE. This can be taken to the extreme by remote control of aerial drones or underwater robot-based inspection systems. The global pandemic in 2020, which led to widespread shutdowns, forcing essential services to continue under stressful social distancing, demonstrated the value of remote NDE. This motivated the industry to look deeper into remote NDE use cases.
NDE for Everybody
Highly powerful and widely available electronic devices, such as tablet computers and cellphones, incorporate various sensors in the form of cameras, microphones, vibration sensors, and accelerometers. Other smartphone-attachable tools that can be used for household NDE are available for purchase, including IR cameras [25], terahertz arrays [26], and eddy current transducers [27]. The use of these tools is as simple as downloading an application and attaching the removable device to the phone. That is literally everything that is necessary to start taking measurements. This will make the whole world's accumulated knowledge (that is, a large amount of data) available to anyone at any time and any place. For the younger generation (a generation, unfortunately, not much involved in NDE careers at present) this technology is self-evident, and they possess a natural flair for it. Merging the highly specialized knowledge of NDE techniques with current technology will open a new market for NDE 4.0 [28]. These new hand-held devices will be applied to make NDE available and affordable to anybody. As a benefit, product inspection at home can become an additional component of monitoring the life cycle of a product. This
might significantly increase the acceptance of NDE 4.0 by solving new inspection problems for everyday services. In [28], one of the authors published some ideas that students in an introductory NDE class at Iowa State University generated in class projects. Examples are:
• Self-inspection of used automobiles by acoustic signal analysis of, for example, engine noise (see the sketch below)
• Cellphone-based eddy current measurement systems
• Portable viewing of X-ray films
• Visual inspection of defects in glass, for example in car windshields
• Unmanned aerial systems for pipeline leak detection and inspection
• Detection of heat loss, electrical overheating, or faulty ventilation at home
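The first idea in the list above can be sketched in a few lines of Python: compare the frequency spectrum of a recorded engine sound against a healthy reference and flag energy in bands where a healthy engine is quiet. The signal here is synthesized, and the frequencies are arbitrary assumptions; a phone microphone would supply the real recording.

```python
import numpy as np

fs = 44100                                  # typical phone sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
healthy = np.sin(2 * np.pi * 120 * t)                  # assumed firing tone
faulty = healthy + 0.4 * np.sin(2 * np.pi * 537 * t)   # added "rattle" tone

def band_energy(signal, f_lo, f_hi):
    """Sum of spectral magnitude between f_lo and f_hi (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return spectrum[band].sum()

# Energy in a band where the healthy engine is quiet flags the anomaly.
print(band_energy(healthy, 400, 700), band_energy(faulty, 400, 700))
```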
NDE for Industry 4.0
Quality Assurance in the Factory and the Infrastructure of the Future
Smart manufacturing and the digital factory are technology-driven approaches that utilize Internet-connected devices to monitor the production process. The goal is to identify opportunities for automating operations and to use data analytics to improve manufacturing performance. And where does the data come from? First, from NDE sensors, providing point-like information at short time intervals for in-situ inspections during manufacturing or operation, such as ultrasonic sensors (for vibration analysis or acoustic emission) and optical sensors (IR, UV, visual); second, from traditional NDE inspections, providing a view into the component at longer intervals. Only the combination of the data from both NDE sensors and inspections will provide sufficient data input for predictive and prescriptive maintenance and for structural health monitoring (SHM) of infrastructure such as buildings and transportation. To enable this use case, open, standardized Industry 4.0 interfaces enabling semantic interoperability are required. "The NDE sector will not succeed in giving the industry new interfaces. It is more reasonable to use the Industry 4.0 interface developments and to participate in the design in order to shape them for the NDE requirements" [2, 3]. Once the data is transferred, it can be stored and used in digital twins, in digital threads, in database systems, or in the cloud. As manufacturing gets into mass customization, NDE will have to adapt to the customized product, creating another use case for NDE 4.0.
Quality Assurance of Additively Manufactured Components
Components manufactured additively are usually difficult to inspect due to their complex internal structures or complex external shapes. This is why the usability of most traditional NDE methods is very limited for those components. In most cases, out of the traditional methods, only computed tomography works. This motivated several groups to start working on in situ NDE methods which monitor signals during the additive manufacturing process. The most frequently
used sensors are optical sensors monitoring and recording the internal and external dimensions using infrared, visual, or ultraviolet light. The heating and cooling processes, the melting and freezing processes, and the expansion and shrinking processes can be monitored. Parameters like grain size and texture, but also new types of defects like micro-porosity, micro-cracking, and oxide inclusions, have to be characterized because they can be critical for the performance of the component. Feedback control can correct the process to ensure quality in real time: NDE 4.0 can improve the 3D printing process itself. The reduced lot size of additively manufactured components is an additional challenge that can be addressed much more conveniently by NDE 4.0. Traditional inspection reliability metrics such as POD are not valid in this situation. With NDE 4.0, the possibilities around big data analytics considering multiple data sources will be harnessed to establish a baseline, which then gets refined with machine learning algorithms.
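A minimal sketch of the layer-wise screening idea follows, assuming one infrared frame per printed layer; the temperature band, frame source, and values are purely illustrative assumptions.

```python
import numpy as np

def screen_layer(frame, t_lo=1500.0, t_hi=1900.0):
    """Return coordinates of pixels outside the accepted melt-pool band (K)."""
    suspect = (frame < t_lo) | (frame > t_hi)
    return np.argwhere(suspect)

rng = np.random.default_rng(0)
frame = rng.normal(1700.0, 40.0, size=(64, 64))   # synthetic IR frame, kelvin
frame[20, 31] = 1350.0                            # a simulated cold spot

flagged = screen_layer(frame)
if len(flagged):
    # In a closed cyber-physical loop, the machine could re-melt the region
    # or adapt laser power before the next layer is deposited.
    print("layer flagged at pixels:", flagged.tolist())
```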
NDE of Drones and Critical Industrial Robots
Very soon, society will be in an era where expensive drones performing everyday functions will need inspection and maintenance programs. For the most part, society reacts to problems when they show up. It is not hard to imagine that one day in the near future a package delivery drone will have a catastrophic failure in someone's back yard, and new regulations will emerge around the inspection of aging drones, industrial cobots, and similar devices. At the moment, this technology and its applications are changing so fast that equipment hits obsolescence before aging, and so no maintenance plans are created.
Continuous Improvement Through Data Mining
Once the data is transferred using standard interfaces, it can be stored and used in digital twins, in digital threads, or in database systems in the cloud. These NDE results/data become a valuable asset through statistical analysis and cross-correlations with other data sets. This asset can then be used, for example, in:
• Feedback loops for design improvements
• Optimization of associated manufacturing processes
• Trending to assure a constant or rising production performance
• Probabilistic lifing methods to calculate the life of components more accurately [29] (see the sketch after this list)
• Predictive maintenance to calculate the necessary maintenance inspections more accurately
• Reliability engineering to enhance the reliability of components and products
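As a minimal, purely illustrative sketch of the probabilistic lifing item above, the following Monte Carlo combines an assumed distribution of undetected defect sizes with a toy crack-growth law. The distribution parameters and constants are assumptions for demonstration, not values from [29].

```python
import numpy as np

rng = np.random.default_rng(1)

# Defects that escape inspection: small sizes dominate because POD rises
# with size; a lognormal is a common, here purely illustrative, choice.
a0 = rng.lognormal(mean=np.log(0.3), sigma=0.4, size=100_000)  # initial size, mm

# Toy life model: cycles to grow from a0 to a critical size a_c, assuming
# growth proportional to crack size, da/dN = C * a, so N = ln(a_c/a0) / C.
a_c, C = 5.0, 2.0e-4
life = np.log(a_c / a0) / C   # cycles

print(f"median life {np.median(life):,.0f} cycles, "
      f"1st percentile {np.percentile(life, 1):,.0f} cycles")
```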
Data Monetization
New business models will emerge as data shows promise [2, 3]. Structured data amenable to information extraction can become a commodity with a price tag for data owners, because it has value for product performance and service life improvement. NDE 4.0, with data security and sovereignty enabled by the IIoT, opens up the possibility of asset-customized prescriptive maintenance, which can significantly improve the value derived from a well-known Data Analytics Maturity Model [30]. Analytics
at various levels require increasingly specialized skills, with the possibility to command increased prices in the data market:
• Analytics Level 1: Descriptive – What happened?
• Analytics Level 2: Diagnostics – Why did it happen?
• Analytics Level 3: Predictive – What will happen?
• Analytics Level 4: Prescriptive – What should we do?
• Analytics Level 5: Cognitive – What don't we know?
Note: Ownership of NDE data as of 2020 is not regulated and needs to be covered by individual contracts.
NDE 4.0 as an Eco-System
Digital transformation is easier said than done. By now, you must have realized that the multi-disciplinary nature of the technology and the multiple stakeholders in a successful application call for a systems perspective, wherein active contributors can appreciate and support each other's roles. The NDE 4.0 eco-system, as shown in Fig. 5, includes four primary stakeholders: asset owners, asset inspectors, asset OEMs, and NDT OEMs. We use the term asset as a generic reference to a physical component, machine, vehicle, system, or plant that needs inspection for safety or performance assurance. In addition, the enabling stakeholders include universities and R&D centers, inspection service providers, inspector training schools, and regulatory and certification bodies. While the value proposition of NDE 4.0 as a system is clear – safety and economic value – the driving force at the stakeholder level is not the same for all parties. Asset owners care about inspection reliability, process efficiency, and effectiveness. Inspectors care about personal safety, comfort, and personal performance. Asset OEMs care about equipment design optimization and manufacturing quality. NDT OEMs care about NDT equipment capability and field performance. Of course, everyone is conscious of their cost structure, business model, and customer engagement.
Fig. 5 The NDE 4.0 Eco-system and its stakeholders
All these
entities need to appreciate and support each other with knowhow, technology, and development roadmaps to the extent possible. The mutual dependence of all these entities makes it harder for any of the stakeholders to deliver value without the concurrent evolution of the others in the eco-system, so everyone needs to get on the bus for everyone to benefit [6]. The concept is not all that new, but it has a significant implication in the fourth revolution given the diversity of stakeholders in a regulated industry. NDE 4.0 is a team sport; no single player can win by doing their thing in isolation.
Summary
The world of NDE is about to change, radically, for the better. The suite of digital technologies shows multiple opportunities and strong use cases, like the use of the emerging digital technologies for NDE and the new possibilities for inspection control (Industry 4.0 for NDE), or the integration of NDE as a data source and enabler for Industry 4.0 (NDE for Industry 4.0). All the use cases identified show that industry and NDE are growing together with the fourth revolution, which will eventually lead to an improved awareness of NDE. This will help increase the visibility and value perception of NDE. There is still a long way to go. When the barriers to digital communication are lowered, proprietary data formats and interfaces are replaced, and semantic interoperability becomes natural, then it will be possible to combine the emerging technologies into new cyber-physical inspection equipment. It will be possible to connect equipment from different manufacturers and analyze big data for safety and quality. It will enable manufacturers to focus on their core knowledge, resulting in rapidly improving products and superior services. Given the challenges and opportunities, NDE 4.0 needs collaboration on an international scale, without burdens or old structures. Ideas like NDE-manufacturer-based clouds use emerging technologies merely to maintain the old structures; this eventually may not work. Opening up, collaboration, and the willingness to innovate are key to NDE 4.0 and will decide the future of individual companies and of the NDE sector in general. If taken on thoughtfully, NDE 4.0 will lead to a completely new way of sustaining product quality and safety, a new way of doing business, and a new market for data – an ecosystem with huge potential for purposeful NDE. The journey will have challenges, and how the community comes together to pursue it is the purpose of the present handbook.
Cross-References ▶ Are We Ready for NDE 5.0 ▶ Digitization, Digitalization, and Digital Transformation ▶ Industrial Internet of Things, Digital Twins, and Cyber-Physical Loops for NDE 4.0 ▶ Value Creation in NDE 4.0: What and How
References
1. Kagermann H, Lukas W-D, Wahlster W. Industrie 4.0: Mit dem Internet der Dinge auf dem Weg zur 4. industriellen Revolution. VDI-Nachrichten. 2011;2011(13):2.
2. Vrana J. NDE perception and emerging reality: NDE 4.0 value extraction. Mater Eval. 2020;78(7):835–51. https://doi.org/10.32548/2020.me-04131.
3. Vrana J. ZfP 4.0: Die vierte Revolution der Zerstörungsfreien Prüfung: Schnittstellen, Vernetzung, Feedback, neue Märkte und Einbindung in die Digitale Fabrik. ZfP Zeitung. 2019;165:51–9.
4. Vrana J. The four industrial revolutions. YouTube. 2020. https://youtu.be/59SsqSWw4b0. Published: 30 March 2020.
5. Vrana J, Singh R. NDE 4.0 – a design thinking perspective. J NDE. 2020;40:4. https://doi.org/10.1007/s10921-020-00735-9.
6. Singh R, Vrana J. NDE 4.0 – why should 'I' get on this bus now? CINDE J. 2020;41(4):6–13.
7. Meyendorf NGH, Bond LJ, Curtis-Beard J, Heilmann S, Pal S, Schallert R, Scholz H, Wunderlich C. NDE 4.0 – NDE for the 21st century – the internet of things and cyber physical systems will revolutionize NDE. In: 15th Asia Pacific conference for non-destructive testing (APCNDT 2017), Singapore; 2017.
8. Bloomberg J. Digitization, digitalization, and digital transformation: confuse them at your peril. Forbes. 2018. https://www.forbes.com/sites/jasonbloomberg/2018/04/29/digitization-digitalization-and-digital-transformation-confuse-them-at-your-peril. Accessed 27 Sept 2020.
9. Kluver R. Globalization, informatization, and intercultural communication. Am Commun J. 2000;3(3).
10. United States Air Force. Global horizons final report. AF/ST TR 13-01. 2013.
11. Kaplan A, Haenlein M. Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus Horizons. 2019;62(1):15–25. https://doi.org/10.1016/j.bushor.2018.08.004.
12. Arute F, Arya K, Babbush R, et al. Quantum supremacy using a programmable superconducting processor. Nature. 2019;574:505–10. https://doi.org/10.1038/s41586-019-1666-5.
13. Schrödinger E. Die gegenwärtige Situation in der Quantenmechanik. Naturwissenschaften. 1935;23:844–9. https://doi.org/10.1007/BF01491987.
14. Volz J, Weber M, Schlenk D, Rosenfeld W, Vrana J, Saucke K, Kurtsiefer C, Weinfurter H. Observation of entanglement of a single photon with a trapped atom. Phys Rev Lett. 2006;96(3):030404. https://doi.org/10.1103/PhysRevLett.96.030404.
15. Meyendorf N, Schallert R, Pal S, Bond LJ. Using remote NDE, including external experts in the inspection process, to enhance reliability and address todays NDE challenges. In: Proceedings of the 7th European-American workshop on reliability of NDE; 2017.
16. Link R, Riess N. NDT 4.0 – overall significance and implications to NDT. In: 12th European conference on non-destructive testing (ECNDT 2018), Gothenburg; 2018.
17. Singh R. NDE 4.0 the next revolution in nondestructive testing and evaluation: what and how? Mater Eval. 2019;77(1):45–50.
18. Singh R. Purpose and pursuit of NDE 4.0. Mater Eval. 2020;78(7):785–93. https://doi.org/10.32548/2020.me-04143.
19. Chakraborty D, McGovern ME. NDE 4.0: smart NDE. In: 2019 IEEE international conference on prognostics and health management (ICPHM); 2019. https://doi.org/10.1109/ICPHM.2019.8819429.
20. Vrana J. Welcome to the world of NDE 4.0. YouTube. 2020. https://youtu.be/MzUKHmp4exE. Published: 24 March 2020.
21. Vrana J. The four NDE revolutions. YouTube. 2020. https://youtu.be/lvLfy4zfSYo. Published: 14 April 2020.
22. Singh R. Three decades of NDT reliability assessment and demonstration. Mater Eval. 2001;59(7):856–60.
23. Singh R, Palumbo D, Hewson J, Locke D, Paulk MD, Lewis RR. A design of experiments for human factor quantification in NDI reliability studies. In: USAF ASIP conference, San Antonio; 2001.
24. Virkkunen I, Koskinen T, Jessen-Juhler O, Rinta-aho J. Augmented ultrasonic data for machine learning. J NDE. 2021;40:4. https://doi.org/10.1007/s10921-020-00739-5.
25. FLIR Systems, Inc. How does an IR camera work? FLIR Systems. 2017. Web. 03 Apr. 2017.
26. Boyle R. Terahertz-band cell phones could see through walls. Popular Science. 2012. https://www.popsci.com/technology/article/2012-04/terahertz-band-cell-phones-could-send-faster-texts-and-see-through-walls
27. Mook G, Simonin J. Eddy current tools for education and innovation. In: 17th World conference on nondestructive testing, 25–28 Oct 2008, Shanghai; 2008.
28. Meyendorf N. Re-inventing NDE as science – how student ideas will help to adapt NDE to the new ecosystem of science and technology. In: AIP conference proceedings 1949, issue 1, id 020021; 2018. https://doi.org/10.1063/1.5031518.
29. Vrana J, Kadau K, Amann C. Smart data analysis of the results of ultrasonic inspections for probabilistic fracture mechanics. VGB PowerTech. 2018;2018(7):38–42.
30. eCapital Advisors. Analytics maturity. https://ecapitaladvisors.com/blog/analytics-maturity/
2
Basic Concepts of NDE Norbert Meyendorf, Nathan Ida, and Martin Oppermann
Contents
Introduction to NDE ... 32
Classification of NDE Methods Based on Basic Physical Principles ... 33
Visual Inspection ... 33
Enhancement of Visibility of Defects ... 34
Electric and Magnetic Fields (MPT, EC, MRI) ... 35
Electromagnetic Radiation ... 42
Elementary Particles – Electrons, Positrons, Neutrons ... 58
Elastic/Acoustic Waves ... 59
Semi-nondestructive Methods ... 67
NDE Tasks ... 68
Quantification of NDE and Decision-Making ... 71
References ... 72
Abstract
The present chapter introduces several basic aspects of Nondestructive Evaluation (NDE). Its purpose is to explain what characterizes NDE. A brief overview of various methods used for NDE with respect to the challenges of NDE 4.0 will be given. The physical basics will be introduced for the most important methods, including X-ray imaging, thermography, ultrasonic and electromagnetic testing,
for instance. Typical NDE tasks and how NDE is applied in industrial production, in service, and in research and development will be discussed briefly at the end of the chapter.
Keywords
NDE methods · NDE instruments · X-ray testing · Neutron radiography · Ultrasonic testing · Visual testing · LPT · Magnetic and electromagnetic testing · Eddy current · Nuclear magnetic resonance · Microwave testing
Introduction to NDE
The abbreviation NDE stands for "Nondestructive Evaluation" (according to the American Society for Nondestructive Testing (ASNT)). Sometimes it is written as Non-Destructive Evaluation. The term Nondestructive Testing (NDT) is often used interchangeably with NDE, for instance, in older literature and in Europe. However, NDE describes a measuring and decision-making procedure that is more quantitative in nature: it not only detects defects but also quantifies their size, location, and orientation. For example, NDE also includes the determination of material properties and residual stress. Another variation is the use of NDI, especially in the military. The term refers to Nondestructive Inspection and can be considered NDE applied to the inspection of components. Therefore, we will prefer to use "NDE" throughout this Handbook, because the term is the heart of NDE 4.0. Some languages use the term "Introscopy," meaning to look into (the material, the component, etc.). This is also the main task in many methods of medical diagnostics. At present we see NDE only as applied to materials and technical components, not to biological structures or humans; however, NDE and medical diagnostics share similar techniques. In medicine, diagnostic methods may be summarized under "Radiology," a term that, for example, also includes ultrasound and Magnetic Resonance Imaging (MRI).
• Usually, NDE is used to describe a group of physical methods to characterize a material or component. The method or methods should be applied to, but not harm or permanently modify, the test object.
• This is not always true or exact. For example, a medium such as water in contact with the material can cause degradation, for example, in polymer coatings, or it can initiate corrosion of steel.
• Acoustic emission as a method to monitor the integrity of a component or a machine requires growth of cracks or some phase transitions in the material to generate signals. However, this should have an insignificant residual effect and not impair the use of the part.
• If a piece of concrete is removed from a bridge to study the integrity of the materials (by an NDE method), that can also be considered to be nondestructive.
• The Scanning Electron Microscope (SEM) can be an NDE tool for very small objects like microelectronic parts or MEMS that fit into the microscope chamber. However, for most applications, specimens have to be prepared from the test object – a destructive process.
In summary, "nondestructive" means that the functionality and integrity of the tested structure, component, or product are not affected. "Evaluation" means performing measurements and signal processing for an interpretation of the results, which has to be documented. In most cases, this will be the basis for a diagnosis and leads to a decision about the possible integrity or usage of the test object. NDE employs a plethora of different, mostly physical, measurement methods. Usually, the interaction of any type of energy (as described below) with the test object may be studied. This is mainly done for the detection of material defects, measuring material properties or anomalies, determination of material degradation, and characterizing the material's microstructure. The most important methods will be described briefly in the following section.
Classification of NDE Methods Based on Basic Physical Principles
The basic idea of NDE is to study and analyze the interaction of any type of energy with materials. This can be an electric or magnetic field, an electric current, an elastic wave, a mechanical load, a thermal input, or high-energy particles. The interaction, penetration, scattering, or conversion of the source energy by the inspected material or the component is measured and analyzed. The amount of energy that is applied should be sufficiently low so as not to significantly modify or harm the test object.
Visual Inspection
Visual inspection does not mean merely looking at an object; it is now integrated into almost every step of the manufacturing process. It should be the first step when planning other NDE procedures. Optical tools such as fiberscopes or borescopes are used to inspect parts that are not directly accessible, like cavities or the inside of pipes (Fig. 1). Inspectors look for:
• Surface-breaking cracks
• The surface topography
• Corrosion or temper colors at the heat-affected zone of welds
• Foreign objects, or
• Missing parts, for example, on electronic circuit boards
Of major importance is the correct illumination of the scene and the right angle to look at the object. Miniaturized CCD cameras and machine vision tools are now widely available at low cost, as, for example, through use of cellphones. In combination with optical tools or microscope lenses, this opens a broad potential for visual
Fig. 1 Left: visual inspection of a turbine [1]; middle: fiberscope; right: principle of fiber optics and fiberscopes
Fig. 2 Basic steps of liquid penetrant testing (LPT)
NDE techniques that do not require contact with the test object. This technique can be significantly improved under NDE 4.0 by generating large amounts of data and analyzing them using image recognition and artificial intelligence methods.
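A toy version of such image-based screening is sketched below in Python. A simple gradient threshold stands in for the trained image-recognition models an NDE 4.0 system would actually use, and the surface image is synthetic.

```python
import numpy as np

# Synthetic surface image with a dark crack-like line injected.
rng = np.random.default_rng(0)
image = rng.normal(0.5, 0.02, size=(128, 128))
image[64, 20:100] -= 0.3

# Flag pixels with strong intensity gradients as candidate defects.
gy, gx = np.gradient(image)
edge_strength = np.hypot(gx, gy)
suspect = edge_strength > 0.1   # threshold chosen for this illustration

print("suspect pixels:", int(suspect.sum()))  # nonzero -> review the region
```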
Enhancement of Visibility of Defects
The most common technique to enhance the visibility of surface defects is liquid penetrant inspection (Fig. 2). The basic idea is that a liquid
with a high wettability penetrates into cracks that are open to the surface. After a defined dwell time, followed by cleaning, a developer is applied that pulls the penetrant back to the surface and enhances the visibility of the defect. Typical systems in the field are red penetrant and white developer with an intermediate surface cleaner (available as spray cans). In the lab, fluorescent penetrants are preferred, and the inspection is done under black light. Another option is to observe the object with an infrared camera, possibly at elevated temperatures. Cracks and other defects open to the surface act like black radiators and have a higher emissivity than the object surface. For metals, the emissivity of a plain surface for infrared radiation is very low, resulting in good contrast for possible open surface-breaking defects. To detect most surface-breaking defects, methods like induction or acoustic thermography are needed. Some polymer coatings are transparent to IR radiation, so that it is possible to visualize defects under the coating with an IR camera. Spectroscopic methods might also be considered to enhance defects or surface corrosion. To visualize and quantify surface topography, structured illumination in conjunction with image analysis of the recorded images can be applied.
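The emissivity contrast described above can be made quantitative with the Stefan-Boltzmann law, M = ε σ T⁴; the following short sketch uses illustrative emissivity values for a polished metal surface and a cavity-like defect.

```python
# Radiated power contrast between a low-emissivity metal surface and a
# crack acting as a near-black radiator (Stefan-Boltzmann law).
SIGMA = 5.670e-8          # W m^-2 K^-4
T = 350.0                 # K, a moderately heated part (illustrative)

eps_surface, eps_crack = 0.1, 0.9   # polished metal vs. cavity-like defect
M_surface = eps_surface * SIGMA * T**4
M_crack = eps_crack * SIGMA * T**4

print(f"surface: {M_surface:.0f} W/m^2, crack: {M_crack:.0f} W/m^2, "
      f"contrast x{M_crack / M_surface:.0f}")
```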
Electric and Magnetic Fields (MPT, EC, MRI)
Basics
The disadvantage of liquid penetrant methods is the long testing time required for diffusion of the penetrant into and out of defects. For ferromagnetic materials such as steel, magnetic stray field techniques, usually applied as Magnetic Particle Testing (MPT), are a faster method to visualize surface-breaking defects as well as defects slightly below the surface and under coatings. Eddy current testing relies on the interaction of alternating magnetic fields with electrically conductive materials. The following briefly introduces electric and magnetic fields and their interaction with materials. The electric field E (vector) is characterized by the force F (vector) on a charged particle Q. "Field lines" indicate the direction of the force. The density of field lines characterizes the strength of the force. The electric flux Φel is the "number" of field lines passing through an area. The electric flux density D (vector) characterizes the density of field lines, where

D = ε0 εrel E (ε0: absolute dielectric constant, εrel: relative dielectric constant)  (1)

The magnetic field H (vector) is characterized by the force on a moving charged particle Q. "Field lines" indicate the direction perpendicular to the direction of the force and to that of the velocity. The flux density B (vector) is related to the magnetic field H (vector) (see the formula below). It is the cross product of the force F and the velocity v (vector) of the charged particle, multiplied by the charge:
B = Q (F × v)  (2)
The density of field lines characterizes the strength of the force. The magnetic flux Φmag is the "number" of field lines passing through an area. The magnetic flux density B, or magnetic induction, characterizes the density of field lines, where

B = μ0 μrel H (μ0: absolute permeability, μrel: relative permeability)  (3)
ε0 and μ0 are natural constants, while εrel and μrel characterize the material that is penetrated by the electric or magnetic field. A homogeneous electric or magnetic field is characterized by parallel field lines. Stray fields are characterized by divergent field lines, usually starting from a narrow (or point) pole. The relative permeability μrel of a material depends on the magnetic moments of its atoms. There are three classes of materials with different behavior in the magnetic field:
• Diamagnetic materials, showing a weak repulsion; μrel < 1.
• Paramagnetic materials, showing a weak magnetization; μrel > 1.
• Ferromagnetic materials, which are paramagnetic at high temperatures. Below the Curie temperature (specific to the material), the exchange interactions between neighboring atoms in the crystal lattice align all magnetic moments of the atoms (sometimes called elementary magnets) in a crystal zone (domain). These materials show strong magnetization that follows a magnetic hysteresis path for an oscillating magnetic field. Ferromagnetic materials are those that are strongly attracted to a magnet and can become easily magnetized; examples are iron, nickel, and cobalt. μrel ≫ 1 (see the numerical sketch below).
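A quick numerical sketch of Eq. (3) shows why flux concentrates in steel and "leaks" into a stray field wherever a crack interrupts the material; the field strength and permeability are illustrative order-of-magnitude assumptions, and magnetic saturation is ignored.

```python
import math

MU0 = 4e-7 * math.pi   # H/m, absolute permeability of free space

H = 5000.0             # A/m, applied field (illustrative)
mu_rel_air = 1.0
mu_rel_steel = 1000.0  # order-of-magnitude assumption for mild steel

# Same H, very different B: flux prefers the steel and leaks at a crack,
# where the local path effectively has mu_rel close to 1 (saturation ignored).
B_air = MU0 * mu_rel_air * H
B_steel = MU0 * mu_rel_steel * H
print(f"B in air: {B_air:.4f} T, B in steel: {B_steel:.2f} T")
```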
Magnetic Particle Testing
Magnetic stray field methods detect discontinuities that are primarily linear and located at or near the surface of ferromagnetic components. The basic principle is that such defects disturb the magnetic flux in the test component and create stray fields at the surface of the test objects. The most common application to visualize stray fields for the purpose of flaw detection in components is magnetic particle testing (MPT). It uses small magnetic particles that are attracted by the stray fields. For automated inspection processes, it is also possible to scan across the surface using magnetic sensors. The components are first magnetized using electromagnets (yokes, large coils) or by passing a high current through the part. The detection media are always made of fine-grain magnetic particles. It is often preferable to use fluorescent particles for high contrast. Suspensions in liquids like water or oil are often used to reduce friction between particles and part surfaces. Magnetic fields must be created within the test objects. Fields can be generated directly or indirectly. The magnetic flux depends on the type of electric current used. Currents can be AC or DC, full wave or half wave rectified, or multiphase.
Fig. 3 Principle of magnetic particle testing: fluorescent magnetic particles concentrate within stray fields above surface defects and are viewed under UV light; whether a defect in the steel (μ ≫ 1) is detectable, conditionally detectable, or not detectable depends on its orientation relative to the magnetizing field H.
To detect all defects with different orientations, the field must be generated in two orthogonal directions. Magnetic fields only "leak" when they travel across cracks that break field lines (Fig. 3).
Eddy Current Testing
Eddy current testing uses the modification of the AC impedance of a test coil or coil system when interacting with a conducting material. The interaction between electric and magnetic fields and electric currents is described by Maxwell's equations (Fig. 4). If the alternating magnetic field of a transducer interacts with a conductive material, eddy currents are created parallel to the surface. The eddy currents originate from a time-dependent magnetic field that, by Lenz's law, opposes the acting field and modifies the impedance of the detector coil or coil system in the transducer (Fig. 5). The distance of the transducer from the conductive material is called lift-off, and this distance affects the mutual inductance of the circuit. The lift-off effect can be used to measure the thickness of nonconductive coatings, such as paint, that hold the probe at a certain distance from the surface of the conductive material. Eddy currents are affected by the skin effect. This means that current density is maximum at the test material's surface and decreases exponentially with depth. Because of decreasing current density, the sensitivity to defects decreases with depth (Fig. 6). The impedance of a coil is a complex value displayed in an impedance plane by a real part (horizontal axis), usually considered to be the ohmic resistance, and an imaginary part (vertical axis), usually considered to be the inductive reactance. The impedance of an eddy current transducer is affected by the following factors when interacting with the tested material:
Fig. 4 Maxwell equations and related formulas, describing various conditions (D, B, E, H see above; ρ: electric charge density, j: electric current density):
• div D = ρ — electric charges are the origin of electric fields.
• div B = 0 — magnetic charges (monopoles) do not exist; magnetic field lines exist only as closed loops.
• rot E = −∂B/∂t — law of induction (Faraday's law): electric eddy fields are generated by variations of magnetic fields.
• rot H = ∂D/∂t + j — Oersted's law; Maxwell-Ampère law: magnetic eddy fields are generated by electric currents and/or variations of electric fields.
• ∂ρ/∂t = −div j — conservation of charge: electric current causes variations of charge density (charge transport).
• Material equations: D = εrel ε0 E, j = σ E, B = μrel μ0 H.
Fig. 5 The principle of eddy current (EC) testing: an alternating voltage drives a current through the coil, whose magnetic field induces eddy currents in the object. The eddy currents are affected by the number of turns, frequency, coil geometry, current through the coil, shape of the object surface, permeability μrel, and conductivity σ.
• Variations in operating frequency, which also affect the penetration depth.
• Variations in the electrical conductivity and the magnetic permeability of the material (which might be modified by the material's microstructure and structural changes such as grain structure, work hardening, heat treatment, etc.).
• The shape of the object and an uneven surface.
• The presence of surface defects such as cracks, and subsurface defects such as voids and nonmetallic inclusions.
Fig. 6 The depth and frequency dependence of the eddy current density for a bobbin coil (left) and the eddy current penetration depth δ as a function of frequency f for different materials (right); ω = 2πf (angular frequency), μ = μ0 μrel (permeability), σ: conductivity

Fig. 7 Frequency dependence of a coil (continuous line) and the effect of different material defects or property changes on the transducer system for one frequency
• Dimensional changes, for example, thinning of tube walls due to corrosion, deposition of metal deposits or sludge, and the effects of denting.
• The presence of supports, walls, brackets, and discontinuities such as edges.
However, in most cases, the strongest effect is the lift-off, that is, the distance of the coil from the surface, which might also be caused by nonconducting coatings. The different effects can be distinguished by the different directions of the impedance change in the impedance plane. These effects are also frequency dependent (Fig. 7). Several of these factors are often present simultaneously. For example, variations in electrical conductivity and thinning of the specimen might affect the coils of a probe simultaneously. However, if unwanted parameters are affecting the measurements, they can sometimes be eliminated by mixing results for signals collected at several frequencies.
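To give a feel for the frequency dependence of the penetration depth shown in Fig. 6, the following minimal Python sketch evaluates the standard skin depth relation δ = 1/√(π f μ σ); the conductivity value used for aluminum is an assumed textbook figure, not a value taken from this chapter.

```python
import math

def skin_depth(frequency_hz, conductivity_s_per_m, mu_rel=1.0):
    """Eddy current penetration (skin) depth: delta = 1 / sqrt(pi * f * mu * sigma)."""
    mu0 = 4.0 * math.pi * 1e-7                 # vacuum permeability [H/m]
    mu = mu0 * mu_rel                          # permeability mu = mu0 * mu_rel
    return 1.0 / math.sqrt(math.pi * frequency_hz * mu * conductivity_s_per_m)

# Assumed value: aluminum, sigma ~ 3.5e7 S/m, mu_rel ~ 1
for f in (1e3, 10e3, 100e3, 1e6):
    print(f"{f/1e3:>6.0f} kHz -> delta = {skin_depth(f, 3.5e7)*1e3:.3f} mm")
```

The tenfold decrease of δ for every hundredfold increase of f mirrors the curves in Fig. 6 and explains why low frequencies are chosen to reach subsurface defects and high frequencies for near-surface resolution.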
NMR, MRI
Nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) techniques use the interaction between strong magnetic fields and the atomic magnetic moments, preferably those of hydrogen atoms. The magnetic dipole moment m of an atom is the sum of the spins of the nucleus (angular momentum) and the spins of electric charges (charge distribution). Figure 8 illustrates the elementary magnetic moments of the electron shell of an atom; in this example, a helium atom with 2 electrons. ms are the spin moments and mb are the orbital moments of the electrons. In this case, the moments compensate each other and helium is diamagnetic. Hydrogen has only one spin and one orbital moment that do not compensate each other and, therefore, hydrogen has a resulting magnetic dipole moment. If a strong magnetic field B0 is applied, the nuclear spins rotate with an orientation parallel or antiparallel to the magnetic field B0 with the Larmor frequency ω0:

ω0 = γ B0  (4)
γ is a material constant called the gyromagnetic ratio. If a high frequency (HF) magnetic pulse is applied perpendicular to the constant field, the rotating moments can be synchronized, thus creating a macroscopic magnetization (Fig. 9). The decay of this magnetization provides information about the mobility and binding status of the protons to distinguish, for example, between different organic materials like muscle and bone. For MRI, gradient fields are superimposed on the strong DC field and the orthogonal HF pulse so that, for each voxel in the test volume, there is only one characteristic Larmor frequency, facilitating imaging of the volume.
Fig. 8 Magnetic moments of the electrons of a helium atom

Fig. 9 Rotating magnetic fields create a resulting magnetic moment M in the test material

Fig. 10 Schematics of MRI in medicine (conventional NMR tomography) and one-side access (OSA) NMR for NDE: a sweep of the NMR resonance frequency ω0 = γ B0 along the magnetic field lines sweeps the sensitive volume through the test object to build a volume image [2]
For one-side access (OSA), the technique can also be applied with a single-sided sensor, whereby a sensitive volume is created in front of a magnetic probe by means of a strong permanent magnet and a divergent HF field. In the sensitive volume, the two field components have to be perpendicular. By changing the frequency, the depth of the sensitive volume can be varied. The method has been applied to hydrogen molecules in construction materials such as concrete or wood and to characterize adhesive layers even in front of a metallic surface (Fig. 10) [3].
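As a small numeric illustration of Eq. (4), the sketch below converts a static field strength into the proton resonance frequency; the gyromagnetic ratio of hydrogen (γ/2π ≈ 42.58 MHz/T) is a standard physical constant, not a value taken from this chapter, and the field strengths are illustrative.

```python
GAMMA_OVER_2PI_1H = 42.577e6  # gyromagnetic ratio of 1H divided by 2*pi [Hz/T]

def proton_larmor_frequency_hz(b0_tesla):
    """Eq. (4) in frequency form: f0 = (gamma / 2*pi) * B0 for hydrogen nuclei."""
    return GAMMA_OVER_2PI_1H * b0_tesla

for b0 in (0.5, 1.5, 3.0):  # typical permanent-magnet and clinical field strengths
    print(f"B0 = {b0:>3} T -> f0 = {proton_larmor_frequency_hz(b0)/1e6:6.1f} MHz")
```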
Micromagnetic Techniques
Ferromagnetic materials are characterized by a magnetic domain structure. Domains are microscopic zones in a material with identical orientation of the atomic magnetic moments (spontaneous magnetization). Bloch walls are the borders between domains of different magnetization direction. Application of an external magnetic field results in a rearrangement of the domains to optimize the material's magnetization into the field direction by moving the Bloch walls. This process is restrained by the interaction of Bloch walls with lattice defects, grain boundaries, precipitates, and local residual stresses in the material. This can be compared to the movement of dislocations that is responsible for mechanical material properties. Application of a strong oscillating magnetic field to a ferromagnetic material results in cycling of its magnetic hysteresis. The hysteresis is characteristic for each material and any microstructure modification. Measurement of electromagnetic signals while cycling the magnetic hysteresis is a useful tool to characterize materials. Most common is Barkhausen noise, detected by a coil close to the material. This noise is a broadband frequency signal that is created by jumps of the Bloch wall system between different crystal obstacles that block the walls' movements. Such "Bloch jumps" create sudden changes in the magnetic flux that induce pulses in the coils.
Fig. 11 Micromagnetic parameters to characterize ferromagnetic materials and a list of applications

Fig. 12 Wavelength scale of electromagnetic radiation (from 1 m down to 1 fm) and typical NDE methods: microwaves (interference, reflection, transmission), visual inspection (reflection, transmission in transparent media), thermography (emission, transmission for semiconductors), X-ray diffraction, and radiography/radioscopy (transmission)
Several other parameters besides Barkhausen noise can be measured that depend upon the magnetization process (see Fig. 11). The combination of various parameters and the use of artificial intelligence techniques can be very helpful in correlating the measurements to materials properties or residual stresses [4, 5].
Electromagnetic Radiation
A large group of NDE methods uses the interaction of electromagnetic radiation at various wavelengths, or of elementary particles, with the material by penetration, absorption, scattering, or reflection. Figure 12 shows the wavelength scale of electromagnetic radiation and typical NDE applications. Electromagnetic (EM) radiation is characterized by oscillation of the magnetic and electric field vectors perpendicular to each other at frequency f = c/λ, where c is the velocity of light, which is equal for all EM waves in vacuum. In a material, the velocity is smaller according to the refractive index n:
n = cvacuum / cmaterial  (5)
EM radiation can appear both as a wave and as a particle (photon). This is called "wave-particle dualism." The photon carries an energy quantum (also called photon energy) E = h·f, where h is Planck's constant. Electromagnetic radiation occurs at all frequencies, but in most applications the resolution is directly related to wavelength, and hence higher frequencies are of particular interest for NDE purposes. Higher frequencies are also associated with higher energy transport, as can be clearly seen from the energy quantum. Clearly, for a wave to carry any significant energy, its frequency must be high. Nevertheless, there are microwave methods in which the frequency need not be high and may be used at the low end of the microwave spectrum.
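The photon energy relation E = h·f = h·c/λ can be made concrete with a few lines of Python; the physical constants are standard CODATA values, and the example wavelengths are merely illustrative points on the scale of Fig. 12.

```python
H_PLANCK = 6.62607015e-34  # Planck constant [J*s]
C_LIGHT = 299792458.0      # speed of light in vacuum [m/s]
EV = 1.602176634e-19       # 1 electron volt [J]

def photon_energy_ev(wavelength_m):
    """Photon energy E = h*c/lambda, converted to electron volts."""
    return H_PLANCK * C_LIGHT / wavelength_m / EV

for label, lam in [("microwave (3 cm)", 3e-2),
                   ("visible (550 nm)", 550e-9),
                   ("X-ray (10 pm)", 10e-12)]:
    print(f"{label:>18}: {photon_energy_ev(lam):.3g} eV")
```

The roughly nine orders of magnitude between microwave and X-ray photon energies illustrate why the high-frequency end of the spectrum dominates penetrating radiation methods.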
Microwave, THz, and Radar Techniques
At the lower end of the electromagnetic spectrum one finds the microwave range, commonly defined between 300 MHz and 300 GHz (wavelengths between 1 m and 1 mm, see Fig. 12). Above that lies the infrared region, stretching from 300 GHz to 400 THz (1 mm to 750 nm). Above that is visible light (approximately 400 THz to 800 THz, or 750 nm to 380 nm). The lower portion of the infrared region is often termed the Terahertz region, formally spanning the range 300 GHz to 3 THz (1 mm to 0.1 mm). In some cases, the Terahertz region is assumed to extend to 10 or even 30 THz, and sometimes frequencies lower than 100 GHz are also included. The ranges are somewhat arbitrary and are typically set by convention.

Microwave Methods of NDE
In spite of the fact that microwaves can be applied in a vast number of situations, most tests can be traced to the evaluation of changes in permittivity, permeability, or conductivity and anything that affects these, including density, porosity, material structure, moisture content, and many others. In addition, microwaves are also used in dimensional evaluation. There are a number of distinct methods in microwave NDE, as follows.

Reflection, Scattering, and Transmission Methods
The most obvious methods of NDE rely on the reflection or scattering of electromagnetic waves from conducting and dielectric surfaces and transmission through lossy or lossless dielectric media. Both methods are demonstrated in Fig. 13, although the sources, types of antennas, and arrangements may vary from application to application. Some possibilities are shown in Fig. 14. These methods are often referred to as far field methods and have been used for a variety of test applications including concrete [6–8], composites [9], and biological media [10]. Measurements may be based on amplitude, phase, time of flight, and secondary effects including heating. In this class, one can also include all radar and radar-like methods. Of particular interest is the ground penetrating radar (GPR), which has found useful applications in the evaluation of concrete structures including imaging [11]. Newer ultra wide-band radars, specifically designed for low-power surface and near surface testing [7, 12], can be found as single-board and single-chip systems at negligible costs. Synthetic aperture radar (SAR) has found uses as well [13].
Fig. 13 A generic method of evaluation of the complex permittivity ε = ε′ + jε″ of a medium: a microwave source and horn antenna illuminate the sensed medium, and a receiver and processor evaluate the amplitude and phase of the reflected and transmitted waves [14]

Fig. 14 Reflection tests on a dielectric. (a) Both antennas on one side, (b) reflection test on a conductor-backed dielectric, (c) the use of a reflector to reflect the signal, (d) transmission test
Figure 15 shows a differential reflection sensor for the detection and evaluation of subsurface inclusions (in this case for land mines) that may also be used for pavement testing. Figure 16 shows a simulation of a GPR application to the detection and imaging of rebar and air and water inclusions in thick concrete as a demonstration of applicability.

Microwave Microscopy
A uniquely sensitive method for use in the near field for surface, near surface, and material characterization is microwave microscopy [15–17]. It relies on the fact that the waves at an open waveguide do not propagate outwards but rather decay over a very small region, generating very high spatial frequency fields (evanescent fields). This creates a very small area of high sensitivity affecting the impedance of the microwave probe. By establishing the impedance of the probe with an intact sample, any variation in the material properties of the sample can be associated with flaws or material variations. Figure 17 shows the testing of a thin film CO-NETIC alloy sample for uniformity in magnetization by scanning over the sample using an open waveguide probe operating at 889.61 MHz [18]. The S11 parameter (reflection) is measured. Although the method is limited to small test areas, it has found applications in material characterization [15, 19], in thickness measurements [20], in surface cracks [21], and in detection of corrosion [22], among many others.
Fig. 15 (a) Detection of buried objects using a differential reflection method: two microwave sources around 10 GHz offset by 10.7 MHz, mixers, receiving antennas, an instrumentation amplifier, and a peak detector. (b) The signal obtained shows a peak at the center of the dielectric and two dips indicating its corners
Fig. 16 A GPR scenario with 12 simulated targets including rebars, water, and air void inclusions (diameters 10–50 mm) in a lossy concrete block of 1625 mm × 600 mm (left). GPR scan of the inclusions (right)
Resonant Methods
Some of the most sensitive and simplest microwave NDE methods use the change in resonant frequency due to changes in materials in resonant structures. A closed or partially open cavity or an open transmission line resonates at a characteristic frequency that depends on the dimensions and the content of the resonator. Any change in conditions in the resonator perturbs the fields and causes a change in the resonant frequency. A change in permittivity Δε or permeability Δμ, either throughout the cavity or in part of it, will change the resonant frequency as follows:

Δf/f = − (∫v Δε E·E0 dv + ∫v Δμ H·H0 dv) / (∫v ε E·E0 dv + ∫v μ H·H0 dv)  (6)
where the fields with index zero are those of the unperturbed cavity. Although the expression is rather involved in that the fields change, the measurement is simple and accurate and can be related to a variety of effects that change permittivity or permeability. Both permittivity and permeability are complex values; hence, the conductivity of the dielectric can also affect the resonant frequency.
Fig. 17 Numerical and experimental profiles obtained by scanning over a 2-mm sample. Comparison of simulated and experimental results [14]
Changes in frequency of less than 100 Hz can be reliably measured, resulting in very high sensitivities. The method is commonly used to evaluate pavements at highway speeds, making use of an open transmission line resonator, but has been successful in other areas including dielectric thickness measurements with a resolution of less than 1 micrometer [23], moisture control and monitoring [23], and material characterization [24].

Other Methods
There are other microwave methods and variations that come into play. Mention should be made of radiometry, in which the radiant flux is correlated with the source and the intervening medium, with an active source or using the background radiation as source. Although these methods are typically discussed in the context of remote sensing, often of environmental conditions, they still fall within the general idea of NDE. Similarly, microwave NDE is integral to medical and biomedical work, both in sensing and in the application of energy to initiate an effect such as heating. Microwave radiation can also be used as a source in thermography methods.
Terahertz Methods of NDE
As a frequency range at the high end of the microwave range and at the low end of the infrared range, one would expect Terahertz waves, and hence methods of NDE, to have properties related to both. As a simple example, the higher frequencies mean higher resolution than microwaves, whereas being at the far-infrared limit, Terahertz radiation exhibits better penetration than infrared radiation. Nevertheless, while microwave methods are well established in the lower range of the microwave band and infrared methods in the upper range of their band, the Terahertz band was, until recently, a virtually unused "gap" in the spectrum, primarily because of difficulties in the generation and
detection of Terahertz radiation [25]. The problem is simple: semiconductor as well as vacuum-tube technologies are generally limited to frequencies below 100 GHz (although devices in the 650 GHz band have been demonstrated). Some far-infrared lasers in the Terahertz region have also been devised, but all of these sources suffer from serious limitations in power capabilities and in efficiency [26]. The use of background radiation as a passive source has enabled some applications at higher frequencies. Most sensors, too, are limited by frequency for the same reasons that limit generation. In recent years, however, some of these difficulties have been partially resolved, and newer approaches to generation and detection allow for some rather interesting NDE applications. There are two basic approaches in use. One takes advantage of the harmonic-rich spectrum of very narrow pulses (of the order of a few femtoseconds), whereas the second uses a frequency sweep in a heterodyne arrangement.

Time-Domain Spectroscopy
The most common method associated with terahertz applications is time-domain spectroscopy (TDS) [25, 27]. A source beam, typically from a femtosecond laser, is split into two paths. One path travels through the test sample, whereas the second is delayed by an arrangement of mirrors in an optical delay line to produce the reference signal (Fig. 18). This produces a time delay Δt = ts − tr, where ts and tr are the time delays through the sample and the reference path. Given a sample with index of refraction n and thickness d, the delay between the two beams is:

ts − tr = (n − 1) d / c  (7)
where c is the velocity of light in vacuum. This allows measurement of the thickness d or, alternatively, of the index of refraction and, through it, of properties of the sample. By varying the delay, the test signal is measured as a function of time. The frequency-domain response can then be obtained through use of the Fourier transform.

Continuous Wave Method
In this method, a saw-tooth generator drives a VCO which, in turn, drives a frequency multiplier.
Fig. 18 Principle of terahertz time-domain spectroscopy in transmission mode
Fig. 19 Principle of the frequency modulated continuous wave method: a triangular signal generator drives a VCO; a power divider feeds the sample path and a mixer, whose output is evaluated by the detector
A diode mixer superimposes the direct generator signal and the signal through the sample in a classic heterodyne approach (Fig. 19). This produces a beat frequency as the difference between the two superimposed signals (generator and sample), allowing the system to detect the time delay between the signals, since the beat frequency is proportional to the path difference of the two signals. The saw-tooth generator has, in effect, frequency-modulated the signal, and hence this method is called FM-CW [28] (a method that is often used in radar). From this, and given the bandwidth of the modulating signal Δf (from the VCO), the path through the sample, d, is immediately available as:

d = c fb / (2 n Δf)  (8)
where n is again the index of refraction, fb the beat frequency, and c the speed of propagation in vacuum. In a radar system, d would represent the distance to the target. Both methods are capable of submicron resolution, and especially the time-domain method is suitable for testing of layered media. Although it was implicit in this discussion that the signal travels through the sample (transmission method), reflection can also be used.

Imaging
The methods described above are particularly useful in measuring thickness, such as that of paint or coatings. However, other applications exist, particularly in imaging. This takes one of two fundamental forms. One is a scanning method, whereby a single sensor or a linear array is scanned over the illuminated sample (either from passive or active sources) [25]. While this is economical in hardware, it is slow and can only be done on small samples. It is also possible to devise terahertz cameras in which a sensor array (such as microbolometers) is used for a predictable resolution [29, 30]. In terms of applications, terahertz methods have been applied to testing and monitoring coatings, most notably coating thickness and uniformity [31–34], in spectroscopy, in safety and security applications [29, 30], and in composites [35, 36], to list a small sample of possibilities.
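To make the two thickness relations concrete, the following minimal Python sketch applies Eqs. (7) and (8) exactly as given above; the layer parameters (a refractive index of 1.5 and the measured delay and beat values) are hypothetical inputs chosen only for illustration.

```python
C = 299792458.0  # speed of light in vacuum [m/s]

def thickness_from_tds(excess_delay_s, n):
    """Time-domain spectroscopy, Eq. (7): d = c * (ts - tr) / (n - 1)."""
    return C * excess_delay_s / (n - 1.0)

def thickness_from_fmcw(beat_hz, n, sweep_bw_hz):
    """FM-CW method, Eq. (8): d = c * fb / (2 * n * delta_f)."""
    return C * beat_hz / (2.0 * n * sweep_bw_hz)

# Hypothetical coating with n = 1.5:
print(f"TDS:   {thickness_from_tds(0.5e-12, 1.5)*1e6:.0f} um")      # 0.5 ps delay -> ~300 um
print(f"FM-CW: {thickness_from_fmcw(0.09, 1.5, 30e9)*1e6:.0f} um")  # same layer -> ~300 um
```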
Thermography and Thermal Techniques
The wavelength range from 780 nm to 3 μm is designated as near infrared because it directly neighbors the wavelength range of optically visible light. The wavelength range from 3 μm to 50 μm is designated medium infrared, and from 50 μm to 1000 μm far infrared.
In recent years, thermography has gained widespread acceptance as a nondestructive test method, particularly in civil engineering for assessing the thermal insulation of buildings and in power supply technology for monitoring electrical contacts and terminal connections for high current rates (Fig. 20). The atmosphere has a reduced transparency in several wavelength ranges due to the absorption bands of molecular gases. Infrared (IR) cameras that are available on the market typically use the "atmospheric windows" of 3–5 μm (short wave) and 8–12 μm (long wave). These are infrared wavelength ranges of good transmission in a clean, dry atmosphere. According to Planck's radiation law, for the absolute temperature T and wavelength λ, the spectral radiation density L(λ,T) of a "black body" (for which the emission coefficient is ε = 1) can be written as:

L(λ, T) = ε (2hc²/λ⁵) [e^(hc/(λkT)) − 1]⁻¹  (9)
where h is Planck's constant, k the Boltzmann constant, and c the velocity of light. An IR camera is capable of establishing a relationship between the detected electromagnetic radiation and the corresponding absolute temperature T of the black body. Real bodies (so-called grey radiators) do not fully absorb incident radiation and hence emit less radiation. Their emission coefficient is ε < 1, meaning that the temperature in an IR image that is calibrated for black body radiation appears lower; ε = 1 corresponds to the ideal black body. The emission coefficient ε plays an important role in actual measurements on surfaces with different properties. It means that substances at the same temperature appear at various levels of warmth. This effect is shown in Fig. 21. Imaging of a temperature distribution by an IR camera without a thermal stimulation of the test object is called "passive" thermography. NDE usually uses the temperature dynamics of an object that is stimulated by a heat source, also called active thermography. Flash lamps or laser pulses are the most common active sources. Induction heating by an eddy current pulse or internal friction by a high energy ultrasound pulse is also used. The heating or cooling process is recorded by the IR
Fig. 20 Thermography of a building showing heat leaks and overheated power electronic contacts
Fig. 21 Different materials at the same temperature appear in various levels of warmth (copper seems to be colder than epoxy), (courtesy G. Grossmann, EMPA, Switzerland)
Fig. 22 Pulsed thermography or lock-in techniques: flash lamps or modulated lasers stimulate the surface; an IR camera or IR detector records the temperature response over time (response time τ, heat radiation ΔT) or the lock-in amplitude and phase relative to the stimulation signal
camera during or after the surface stimulation by a heat pulse. If the surface is periodically stimulated (lock-in or thermal wave techniques), a temperature oscillation that varies in amplitude and phase shift is observed by the IR camera or IR sensor. Defects below the surface modify the heat diffusion into the material, and the result is a temperature contrast on the surface. Similar information can also be created by Fourier transformation of the frame sequences after stimulation with an energy pulse (pulse-phase thermography) (Figs. 22 and 23).
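Eq. (9) is easy to evaluate numerically; the short sketch below computes the spectral radiance in the two atmospheric windows for a body near room temperature. The constants are standard values, and the chosen wavelengths and temperature are illustrative only.

```python
import math

H = 6.62607015e-34  # Planck constant [J*s]
C = 299792458.0     # speed of light [m/s]
K = 1.380649e-23    # Boltzmann constant [J/K]

def spectral_radiance(wavelength_m, temp_k, emissivity=1.0):
    """Planck's law, Eq. (9): L = eps * (2hc^2/lambda^5) / (exp(hc/(lambda*k*T)) - 1)."""
    prefactor = 2.0 * H * C**2 / wavelength_m**5
    return emissivity * prefactor / math.expm1(H * C / (wavelength_m * K * temp_k))

for lam in (4e-6, 10e-6):  # short-wave and long-wave atmospheric windows
    print(f"{lam*1e6:.0f} um, 300 K: {spectral_radiance(lam, 300.0):.3e} W/(m^2 sr m)")
```

Repeating the calculation with ε < 1 shows directly why a grey radiator at the same temperature appears "colder" in an image calibrated for a black body (Fig. 21).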
Fig. 23 Typical frame sequence for a subsurface delamination in a 100 μm nickel coating on steel: before firing the flash, during flash heating, and 25 ms, 30 ms, 45 ms, 75 ms, and 150 ms afterwards (courtesy G. Walle and U. Netzelmann, Fraunhofer IZFP)
Optical Spectroscopic Techniques
In addition to the visual inspection mentioned above, several other optical techniques are applied for NDE. Examples are spectroscopic techniques like Raman scattering to measure stresses in silicon, or optical fiber methods to monitor stresses and defects or even detect acoustic waves in buildings [37, 38], wind energy systems, or polymer structures in aircraft [39, 40]. White light interference microscopy is used to measure surface topography or characterize surface roughness [41] (Fig. 24). Optical coherence tomography (OCT) extends this principle of applying the short coherence length of backscattered white light to opaque 3D structures. It creates cross sections of near-surface regions of an opaque material. This method was originally applied in medicine to image the human skin, but it also has potential applications in polymers and composites (see Fig. 25).

X-Ray Techniques
Wilhelm Conrad Röntgen discovered X-rays on November 8, 1895; they are named after him in German-speaking countries. As a result of this discovery, he was the first researcher to be awarded the Nobel Prize for Physics in 1901. Röntgen refrained from filing a patent because he wanted to make his invention and discovery available to the public. Thanks to Röntgen's generosity, this radiographic technique spread very quickly and was used early on, in particular in the field of medicine. Engineers and material scientists also quickly recognized the importance of X-ray diagnostics. As shown in Fig. 26, X-ray diagnostics was already used for coarse and fine structure analysis at the end of the 1920s [42].
Fig. 24 Surface topography in front of a crack tip of a mill annealed Ti-6-4 specimen by white light microscopy [41]
Fig. 25 (left) Examples of OCT cross-section images: delamination of plastic films, glass fiber stripes, a plastic micro weld, an Al2O3 ceramic B-image, and plastic foam with and without cover layer; (right) 3D image of an impact in glass fiber reinforced plastic (resolution: axial 6 μm, transversal 8 μm; speed: 2 frames/s with 600 lines/image) (courtesy Fraunhofer IZFP-D, 2010)
In electronics, the potential of X-ray inspection for the analysis of internal structures of, for example, electron tubes was recognized early (see Fig. 27). Further fields of knowledge in which X-ray diagnostics were already taken advantage of at that time include:
• Archaeology
• Biology
• Radiotherapy
• Authentication of paintings
X-ray methods are classified into X-ray imaging techniques (coarse structure analysis) and X-ray fine structure analysis. Fine structure analysis deals with the atomic structure of the materials, for example, the lattice arrangement in crystals. One of the fine structure analysis methods involves X-ray diffraction for characterization of residual stress in materials, texture, dislocation density, and phase analysis. X-ray imaging techniques include X-ray radiography (vertical and oblique radiation) and 3D techniques such as X-ray laminography and X-ray computer tomography (CT).
Fig. 26 Early examples of X-ray coarse structure (left) and fine structure analysis (right), [42]
Fig. 27 X-ray images of vacuum tubes, 1929, [43]
In traditional X-ray imaging and X-ray computer tomography, information concerning the internal structures of the test object is generated because the X-ray intensity decreases during transmission through the test object. The X-rays used in NDE demonstrate only minor similarities to those used in medical diagnostics. X-rays are electromagnetic waves with wavelengths within a range of 10⁻⁹ to 10⁻¹² m. They can show wave and particle behavior and are therefore also called photon radiation. X-rays are generated in X-ray tubes by causing accelerated electrons
to bombard a target. The emitted spectrum contains continuous radiation called bremsstrahlung (deceleration radiation) and characteristic radiation. Bremsstrahlung occurs when an electron penetrates the target's material and is deflected through the Coulomb field into a hyperbolic orbit in close proximity to an atomic nucleus. Discrete radiation (characteristic radiation) results from direct interaction of the accelerated electrons with the orbital electrons of the target's atoms [44]. The maximum energy or minimum wavelength of the X-ray spectrum depends directly on the target's material and acceleration voltage. The following relationship applies:

λmin = h c / (e VA)  (10)
where λmin – shortest emitted wavelength, VA – selected anode voltage / acceleration voltage, h – Planck's constant, e – elementary electric charge (e = 1.602·10⁻¹⁹ C), c – speed of light. Overall, only a very small portion of the energy used (approx. 1%) is converted into X-rays – roughly 99% is converted into thermal energy. Figure 28 shows the principle of an X-ray tube with a transmission target for best resolution and an example of the X-ray spectrum of a tungsten target. While an object is being exposed to X-rays of intensity I0, the intensity is reduced relative to the material's absorption properties (μ – attenuation coefficient of the material) and its thickness (L – material thickness). The intensity of the X-ray radiation after passing through the sample can be described for a narrow beam (ignoring scattering) as:

I = I0 e^(−μL)  (11)
Fig. 28 Sectional view of an X-ray tube (left) and example of an X-ray energy spectrum of a Tungsten target
where I – intensity of X-rays at the detector, I0 – intensity of X-rays at the X-ray tube (radiation source), μ – attenuation coefficient (material dependent), L – material or sample thickness. Thus, the wavelength of the X-rays, the atomic number (or nuclear-charge number) of the irradiated object, the density of the object, and its thickness affect the attenuation of the X-rays. Historically, the X-ray intensity distribution after passing the test object was imaged on X-ray film. This is still an important detection medium today. However, detectors that create a digital image are progressing and will be the main detection media for NDE 4.0. A detector then receives the attenuated X-ray beam. Two different processes with scintillators (systems for converting X-rays into visible light, for example) are currently used for this purpose:
• Image intensifier with camera.
• Digital flat panel detector.
The image intensifier with camera is more sensitive and is capable of recording and displaying even very weak X-rays. On the other hand, its optical and grey value resolution is lower than that of the flat panel detector, and due to the electron optics, geometric distortion may occur in the image. However, the digital detector has high pixel and grey value resolution (up to 16-bit depending on type). Its general sensitivity is still not as high as that of the image intensifier, and it is physically larger. Direct converting X-ray detectors will predominate in the future. They convert X-rays directly into electrical signals and thus have no need for a scintillator. It is expected that this will result in higher sensitivity and improved local resolution, along with a limited energy resolution [45]. Figure 29 shows the principles of these three detector types.
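Equations (10) and (11) can be evaluated with a small helper, sketched below in Python; the attenuation coefficient in the example call is a hypothetical value for illustration, not a tabulated material property.

```python
import math

H = 6.62607015e-34          # Planck constant [J*s]
C = 299792458.0             # speed of light [m/s]
E_CHARGE = 1.602176634e-19  # elementary electric charge [C]

def lambda_min_m(anode_voltage_v):
    """Eq. (10): shortest emitted wavelength of the bremsstrahlung spectrum."""
    return H * C / (E_CHARGE * anode_voltage_v)

def transmitted_fraction(mu_per_m, thickness_m):
    """Eq. (11): narrow-beam transmission I/I0 = exp(-mu * L)."""
    return math.exp(-mu_per_m * thickness_m)

print(f"lambda_min at 100 kV: {lambda_min_m(100e3)*1e12:.1f} pm")  # ~12.4 pm
# Hypothetical mu = 50 1/m for a 10 mm sample -> ~61% transmission:
print(f"I/I0 = {transmitted_fraction(50.0, 0.010):.2f}")
```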
Fig. 29 Principles of current X-ray detectors – image intensifier (left), flat panel detector (middle), direct converting line detector with 1024 pixels (right)
Fig. 30 Geometric relations in case of ideal X-ray source (mathematical point)
Image recording systems deliver a digital image that is processed and evaluated with the help of software. Special algorithms make it possible to detect structural defects inside the sample. The achievable resolution R depends on the selected primary magnification M and the dimensions of the detector's pixels. The following formula applies:

R = lp / M  (12)
where R – resolution, lp – pixel edge length, M – magnification. The magnification effect of an X-ray microscope results from the geometric relationship among the X-ray source, the sample, and the detector. The following formula applies to the geometric magnification M (quantities a and b from Fig. 30):

M = a / b  (13)
Due to electron scattering processes in the target of the X-ray tube, the "focal spot" of the source is limited to approximately 1 μm, even though the actual focal spot of the electron beam on the target is significantly smaller. This is also the reason why the resolution of an EDS system in an SEM is limited to about 1 μm. This results in an X-ray source with a finite measurable diameter dfocus. The resulting geometric blurring B can be calculated from this as follows:

B = dfocus (a − b) / b = dfocus (M − 1)  (14)
where B – blurring, dfocus – focus diameter, M – magnification. If the edge length of a pixel is considerably smaller than, or of the same order as, the blurring B, the X-ray focus diameter dfocus determines the achievable resolution.
Fig. 31 Genesis of blurring effect by use of a real X-ray source (dfocus > 0)
Fig. 32 Setup for automated x-ray inspection of welds [46]
Conversely, in the case of minimal blurring (B < lp), the pixel dimensions dictate the possible resolution. Digital images can be processed by image processing algorithms. In NDE 4.0, concepts of artificial intelligence will assist the inspector in detecting and classifying possible defects (Figs. 31 and 32). By combining multiple X-ray frames taken from different orientations of the part relative to the X-ray beam, 2D cross-sections or 3D images can be reconstructed (tomography, laminography). Most common is X-ray computer tomography, which requires a full rotation of the part in the beam or of the X-ray system around the test object [47] (Fig. 33).
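A short sketch of Eqs. (12)–(14) shows how the source-object and source-detector distances trade resolution against geometric blurring; the distances, pixel size, and focal spot diameter below are hypothetical example values.

```python
def magnification(a_source_detector, b_source_object):
    """Eq. (13): geometric magnification M = a / b."""
    return a_source_detector / b_source_object

def resolution(pixel_edge, m):
    """Eq. (12): achievable resolution R = lp / M."""
    return pixel_edge / m

def blurring(d_focus, m):
    """Eq. (14): geometric blurring B = dfocus * (M - 1)."""
    return d_focus * (m - 1.0)

m = magnification(500.0, 50.0)                    # a = 500 mm, b = 50 mm -> M = 10
print(f"M = {m:.0f}")
print(f"R = {resolution(0.100, m)*1000:.0f} um")  # 100 um pixels -> 10 um
print(f"B = {blurring(0.005, m)*1000:.0f} um")    # 5 um focal spot -> 45 um blur
```

In this example, the blurring exceeds the pixel-limited resolution, so the focal spot rather than the detector dictates the achievable resolution, as noted above.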
Fig. 33 Typical setup of X-ray computer tomography in NDE – the object rotates (left); cross section and 3D visualization of the CT of an ancient pocket watch (16th century) (middle and right) (pictures courtesy Peter Krüger – Fraunhofer IZFP-D)
Elementary Particles – Electrons, Positrons, Neutrons

Gamma Rays (Gamma Particles)
Gamma rays are similar to X-rays in that they constitute electromagnetic radiation, but they are generated by radioactive decay and not by stopping accelerated electrons at a target material as in an X-ray tube. Gamma radioscopy techniques are therefore very similar to X-ray imaging; however, the handling of gamma sources requires special care. Gamma sources are preferred for field application if electric power is not available. They are also useful to generate higher photon energies, for example, an average of 1.25 MeV for Cobalt-60, for better penetration of thick parts.

Electrons
Elementary particles like electrons or neutrons also show particle and wave properties; however, they do not travel at the speed of light. Electrons are much more readily absorbed by matter than X- or gamma radiation. Electron radiography is useful for very thin, weakly absorbing media such as thin coatings or watermarks on banknotes.

Neutrons
Thermal neutrons (called thermal because of their low kinetic energy) have the ability to penetrate materials with high atomic numbers that show high absorption for X-rays. However, elements with low atomic numbers (organic materials, wood, paraffin) are much better absorbers for neutrons than for X-ray photons. As a result, X-ray radiography and neutron radiography are complementary in their use (see Fig. 34). However, a neutron source usually means that a nuclear reactor is required to generate a sufficient neutron flux for such applications. Due to the relation between the particle's momentum, p = m·v, and the particle's de Broglie wavelength, λ = h/p, thermal neutrons also exhibit wave behavior with wavelengths comparable to the X-rays used for X-ray diffraction. Because of their better ability to penetrate materials with higher atomic numbers, for example, iron, neutron diffraction is an excellent technique for the study of residual stresses or texture in volumetric components, while X-ray diffraction can only analyze a surface layer of several micrometers.
Fig. 34 Buddha statue (left), X-ray image (middle), and neutron radiographic image (right), (courtesy E.H. Lehmann, PSI, Switzerland) [48]
Positrons
Positrons are the antimatter particles of the electron with exactly opposite properties (similar mass but positive charge). They are generated by beta+ decay, for example, of Sodium-22. In materials, positrons annihilate with electrons, typically creating two gamma quanta with an energy equal to the electron rest mass (0.51 MeV). During the short lifetime of the positron (of the order of a few hundred picoseconds, see Fig. 35), it can be trapped by lattice defects (vacancies, dislocations) but also by nano-precipitates like Guinier-Preston zones. Both the lifetime of the positron and its annihilation radiation – the angular correlation between the two annihilation photons or the broadening of the annihilation peak in the radiation spectrum – give information about the location of the annihilation and hence the type and concentration of lattice defects. Therefore, positron annihilation is an excellent tool to determine dislocation densities, but also the creation and growth of precipitates in alloys, for example, due to aging (Figs. 35 and 36).

Fig. 35 Schematics of positron lifetime measurements (22Na source, 1.28 MeV start quantum, two 0.51 MeV annihilation quanta, time delay t ≈ 10⁻⁹–10⁻¹⁰ s) and typical lifetimes for lattice defects in nickel: bulk lattice 106 ps, dislocations 160 ps, vacancies 180 ps, vacancy clusters 500 ps

Fig. 36 Aging of the aluminum alloy AlCuMg 2024 observed by positron lifetime spectroscopy: the increasing copper content in the GP zones leads to a decrease of the average positron lifetime [49]
Elastic/Acoustic Waves
Sound waves within a frequency range from 20 to approximately 16,000 Hz are perceptible to the human ear. Lower frequencies are referred to as infrasound, higher frequencies (>20 kHz to approx. 2 GHz) as ultrasound. The range above ultrasound is referred to as hypersound (10⁹ to 5·10¹² Hz). Higher frequencies in the GHz and THz ranges in crystals are referred to as acoustic phonons. Phonons have a cut-off frequency of roughly 10 THz, depending on the type of atom and lattice. The following is a list of various acoustic NDE principles:
• Sound propagation: The time of flight of a short ultrasound pulse is measured to determine the elastic moduli or the size of an object. Precise measurements can be used to evaluate residual stresses in the material, creep damage by minor changes
of the moduli or density of materials. Measurement of the attenuation of ultrasound includes scattering and absorption. Both are used for characterization of materials. Absorption is the conversion of elastic wave energy into heat. It is affected by internal damping processes like internal friction and can help characterize changes of lattice defects such as dislocation density due to fatigue. Scattering strongly depends on the grain size of crystalline structures and is therefore very useful for characterization of heat treatments.
• Natural vibration modes of components: Sound testing by tapping a glass or piece of china and listening to the sound is common in assessing their quality and integrity. It is also used to test iron casting parts that might contain hard phases (iron carbide). However, with a sensitive frequency analysis, it is also possible to identify steel parts with small cracks that are difficult to find with other techniques. This method is called resonance testing [50, 51]. The measuring of vibration modes of rotating machines, bearings, railway wheels, etc., or of noise created by engines or cars is standard in mechanical engineering. The frequencies in this case are mostly in the audio range.
• Acoustic emission analysis: Growing defects or irreversible microstructure changes can generate acoustic emissions, usually in the ultrasonic range. This can be in the form of continuous emissions or short pulses. Monitoring of such emissions is a very common tool for Structural Health Monitoring (SHM). Analyzing these signals by several criteria such as frequency, pulse length, and pulse shape is used to discriminate significant information from background noise. Multiple transducers are used to localize the signal's origin. Multiple signals from the same origin indicate growing defects that can become critical for a monitored component.
• Pulse echo technique: This is the most common ultrasonic technique used to detect obstacles in a material that reflect short pulses of acoustic waves and/or convert between different wave modes. The obstacles (defects) have a different acoustic impedance than the bulk material. Very thin material separations such as cracks or delaminations are excellent reflectors for acoustic waves and are easily detected by pulse echo techniques. Ultrasonic testing is, in fact, the only NDE method that can detect such critical defects with high probability in volumetric parts. Because of its importance, this method is described in more detail below.
For technical use, ultrasound is generated by means of piezoelectric crystals or ceramics (e.g., barium titanate), which oscillate at the resonance frequency of the piezo element (or a harmonic of that frequency) if a voltage pulse is applied across the crystal. In medical diagnostics and conventional NDE, frequencies from 0.5 to 20 MHz are common. For acoustic microscopy, which is, for example, applied to electronics packaging samples, the frequency ranges from 5 to 300 MHz, depending on the object and the expected defect. For surface inspections, frequencies up to 2 GHz are applied. Relevant physical attributes include the density and the elastic properties of the material layers of the samples. Absorption and scattering cause ultrasonic waves of higher frequencies to penetrate the material to lesser depths than low-frequency waves. However, lateral and axial resolution decreases as frequency is reduced. A compromise must always be found in this respect for actual investigations. Sound waves or elastic waves are deflections of mass elements around their equilibrium position in the interior of condensed matter, which vary over time (see [52]). As a result, and contrary to electromagnetic waves, they can only propagate in a medium – they can propagate in liquids, solids, and gases (but not in vacuum). When considering wave propagation within the test object, distinction must be made between substances that exhibit volume elasticity (liquids and gases) and those
that exhibit form elasticity (solids). Longitudinal waves (also called pressure or compression waves) occur in all materials and cause tensional and compressional stresses in the material. Transversal waves only occur in their purest form in solids, which can also transmit shear forces. Depending on the type of excitation, the shape of the body, and the dimensions in relation to the wavelength, other types of waves occur, of which surface waves (Rayleigh waves) and plate waves (Lamb waves, extensional or bending) are particularly important for investigation purposes. They can be understood as a superposition of transverse and longitudinal components. Rayleigh waves are limited to the immediate surface area and follow surface irregularities. Oscillation in a deformable medium is not limited to the excitation center. Figure 37 shows the different types of sound waves. The propagation of sound waves takes place at a material-specific phase velocity, that is, the speed of sound. The speed of sound c is specified as follows:

c = λ / T = λ f  (15)
where c – speed of sound, T – period of oscillation, f – frequency of oscillation. Above all, the longitudinal component of sound waves is of interest for ultrasonic microscopy. Depending on the state of aggregation, the correlations shown in Table 1 below apply to the speed of sound of longitudinal waves for isotropic media.
Fig. 37 Types of sound waves in (a) longitudinal wave, (b) transversal wave, (c) symmetric lamb wave (extensional wave), (d) asymmetric lamb wave (bending wave), (e) surface wave (Rayleigh wave)
Table 1 Speed of the longitudinal wave relative to the state of aggregation

Speed of sound in gases: cL = √(k p / ρ), where cL – speed of the longitudinal wave, k – adiabatic exponent (k = cp/cV), p – pressure, ρ – density
Speed of sound in liquids: cL = √(K / ρ), where K – compression modulus (K = −V dp/dV)
Speed of waves in solids: cL = √(E / ρ), where E – nonrelaxed elastic modulus (or longwave modulus) for longitudinal waves, or the shear modulus for transverse waves; cT – speed of the transversal wave
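The relations in Table 1 can be checked with two familiar media; the sketch below uses standard handbook values for air and water, which are assumptions for illustration rather than data from this chapter.

```python
import math

def c_gas(k, p_pa, rho):
    """Table 1: speed of sound in gases, cL = sqrt(k * p / rho)."""
    return math.sqrt(k * p_pa / rho)

def c_liquid(bulk_modulus_pa, rho):
    """Table 1: speed of sound in liquids, cL = sqrt(K / rho)."""
    return math.sqrt(bulk_modulus_pa / rho)

# Assumed handbook values: air and water near 20 C
print(f"air:   {c_gas(1.4, 101325.0, 1.204):.0f} m/s")  # ~343 m/s
print(f"water: {c_liquid(2.2e9, 998.0):.0f} m/s")       # ~1485 m/s
```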
The speed of sound in liquids and gases depends strongly on temperature due to its influence on pressure and density. The speed of sound is also highly dependent on the state of aggregation: the velocity is highest in solids, followed by liquids and finally gases. Acoustic waves propagate as a function of material density and elastic material parameters. Only in homogeneous materials are they able to propagate unhindered. At material interfaces with changing acoustic material properties, or at interfaces at defects such as cracks, voids, or delaminations, the reflected signal is subject to phase shift and amplitude changes. Diffraction or refraction of the incident sound waves appears at the geometric discontinuities caused by defects, due to material differences and due to diffuse scattering. Changes in amplitude of approximately 1% with respect to the surrounding areas can be detected. For defects in electronic applications, the solid-gas transition is of particular importance, since the phase shifts that occur can always be explicitly detected. If sound waves strike obstacles, the following cases should be considered, depending on the ratio between the wavelength and the dimensions of the obstacle:
• Geometric wave propagation (radial propagation, shadow for large obstacles).
• Diffraction (wave motion in the shadow area).
• Diffusion (obstacle as starting point of new elementary waves for small obstacles).
A quadratic dependence on the obstacle diameter can be observed up to λ/2. Below λ/6, a cubic dependence on the obstacle diameter can be observed (Rayleigh scattering). Diffusion is a major hindrance for many tasks in ultrasonic microscopy. With reference to inspection tasks in the field of packaging technology, it occurs interferingly at edges (e.g., IC lead frames) and bond wires and renders it difficult to interpret the obtained images. The yellow areas in Fig. 38 actually indicate a significant change in the acoustic impedance, thus indicating a delamination or a gas
Fig. 38 Packaging details in X-ray (left) and ultrasound images (right; false color image)
inclusion. In this case, the signal is most likely based on the diffraction of the sound waves at the lead frame edges. Acoustic impedance (also known as acoustic resistance) is an important parameter for the description of processes in the sound field. On the one hand, it characterizes the relationship between alternating sound pressure and sound particle velocity and, on the other hand, the behavior of sound waves at boundary surfaces (reflection, transmission). The sound particle velocity v describes the instantaneous velocity of a single oscillating particle (oscillator) around its rest position, whereas the alternating sound pressure p describes the pressure fluctuations in the propagation medium (see also [53, 54]). Sound particle velocity and alternating sound pressure vary at the same frequency. The acoustic impedance Z of a material can be calculated using the sound particle velocity and sound pressure as:

Z = p / v = ρ c  (16)
where p – sound pressure, v – sound particle velocity, ρ – density, c – velocity of sound. Sound pressure and sound particle velocity are the physical quantities that are directly evaluated by ultrasonic inspection systems. Reflection and transmission resulting from inhomogeneities in the acoustically irradiated body can be described for the simplified case of perpendicular incidence of sound waves on flat obstacles (no change of direction of the reflected and transmitted component). The software included with an ultrasonic microscope working in immersion mode uses the absolute maximum value of the echo signal for image generation and enters it at the corresponding x-y position as a color or grey-scale value. The transition from a solid medium to air (or another gaseous medium) means 100% reflection.
Table 2 Reflection factors with different directions of injection (layer sequence aluminum – copper – polystyrene – water – ceramic)

Injection from the aluminum side:
  Aluminum → Copper: R = 0.42
  Copper → Polystyrene: R = −0.89
  Polystyrene → Water: R = −0.25
  Water → Ceramic: R = 0.92

Injection from the ceramic side:
  Ceramic → Water: R = −0.92
  Water → Polystyrene: R = 0.25
  Polystyrene → Copper: R = 0.89
  Copper → Aluminum: R = −0.42

(Source: [53]). Negative signs indicate a 180-degree phase shift for reflections at an interface from higher to lower acoustic impedance
Fig. 39 Characteristic curve of the echo signal at interfaces of materials during injection from different directions (corresponds to Table 2 – according to [53])
The inspection of underlying layers is therefore no longer possible. This effect is of particular interest for the investigation of inclusions and delaminations. In the case of multilayer test objects, it is important from which side the ultrasonic signal is injected. A test object with a material sequence of aluminum-copper-polystyrene-water-ceramic is examined as an example. The values listed in Table 2 result for injection from the different sides. Figure 39 shows the resulting characteristic signal curve from a qualitative standpoint (without taking the change in signal amplitude, that is, attenuation, into consideration) with increasing penetration depth of the ultrasonic signal. Due to the extremely low acoustic impedance of air, ultrasonic inspection usually requires a coupling medium between the ultrasonic piezoelectric transducer and the tested components. Two options are most common:
• Water (immersion testing).
• Coupling medium (contact technique).
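The values in Table 2 follow from the standard normal-incidence pressure reflection factor R = (Z2 − Z1)/(Z2 + Z1); the sketch below evaluates it for two interfaces, using approximate impedance values (in MRayl) that are illustrative assumptions rather than figures from this chapter.

```python
def reflection_factor(z1, z2):
    """Normal-incidence pressure reflection factor R = (Z2 - Z1) / (Z2 + Z1).
    A negative R means a 180-degree phase shift (higher -> lower impedance)."""
    return (z2 - z1) / (z2 + z1)

# Approximate acoustic impedances in MRayl (assumed illustrative values)
Z = {"air": 0.0004, "water": 1.5, "aluminum": 17.0, "copper": 41.6}

print(f"water -> aluminum: R = {reflection_factor(Z['water'], Z['aluminum']):+.2f}")
print(f"aluminum -> air:   R = {reflection_factor(Z['aluminum'], Z['air']):+.4f}")
# The aluminum -> air value of nearly -1 is the "100% reflection" case noted above.
```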
Fig. 40 Ultrasonic inspection in contact mode. Left: with a normal beam transducer (1) creating a sound field in the test material (2), with the back wall of the tested plate (3) and a typical defect (4); TS is the transducer signal at the test surface, BWE the back wall echo, and D the defect signal. Right: with an angle beam transducer with angle α (1) on the welded plate (2) and a typical weld defect (3); during the inspection, the transducer is moved between the full and half skip distance ap
Fig. 41 X-ray image of a phased array transducer with a modeled sound field focused (time sequence) toward an assumed defect (white circle). Clearly visible are the elementary waves emitted for each of the piezo elements and the resulting wave front. (Courtesy: Frank Schubert, IKTS)
Water is used for acoustic microscopy, as mentioned above, where the images are generated by linear or meander-type scanning. ▶ Chapter 33, "NDE for Electronic Packaging," contains a more detailed description and several examples. In the contact technique, normal beam and angle beam transducers are common (Fig. 40). For phased array ultrasonics, the transducer contains a number of small transducer elements that are stimulated with a time delay (phase shift). The superposition of the elementary waves emitted by each transducer element can create sound fields with different angles or a variable focal depth (Fig. 41). By sweeping the incident angle, sector scans are created that are cross sections of the inspected object. These are most common in medical imaging. Variable focus depth, linear scanning, or a combination of all options can also be achieved by phased array systems. For conventional phased array techniques, a pulse is phase-shifted for the different piezo elements, or multiple pulses stimulate the elements in a time sequence according to the "focal laws." The recorded signals from all elements generate one A-scan that is related to the sound field generated.
Fig. 42 Principle of sampling phased array or total focusing
(SPA) [55] and, shortly later, as a similar idea under the name “Total Focusing Method” (TFM) [56]. The idea is that every element j of a transducer with N elements is stimulated separately, creating a hemispherical (elementary) sound wave in the test specimen. The signal is received by all transducer elements j = 1, 2, ..., N, creating N x N A-scans. If, for example, the transducer has 16 elements, 16 x 16 = 256 A-scans are generated (Fig. 42). Using these data, a computer can calculate virtual signals for different incident angles and different focal depths. With a conventional phased array system, each of these would require a separate shot. Figure 43 compares the results of ultrasonic inspection of carbon-fiber materials with conventional and sampling phased array [55]. Simply stimulating one transducer element and then scanning along a line allows the application of another reconstruction technique that was adapted to ultrasonic testing in the 1980s for imaging of cross sections of specimens: the Synthetic Aperture Focusing Technique (SAFT). The idea is that a transducer with a wide aperture angle is scanned over the surface and the collected A-scans are used to reconstruct the cross section [57, 58]. SAFT and TFM can be combined in one hardware unit.
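The reconstruction common to SPA and TFM is a delay-and-sum over all transmitter-receiver pairs: for every image pixel, each A-scan is sampled at the round-trip time of flight and the samples are summed coherently. The sketch below is a minimal illustration of that idea, not code from [55] or [56]; it assumes a linear array on the surface, a constant sound velocity, and illustrative parameter names throughout:

```python
import numpy as np

def tfm_image(ascans, elem_x, fs, c, xs, zs):
    """Minimal delay-and-sum TFM reconstruction (illustrative sketch).

    ascans -- full matrix capture, shape (N, N, n_samples);
              ascans[tx, rx] is the A-scan for transmitter tx, receiver rx
    elem_x -- x-positions (m) of the N elements; array surface at z = 0
    fs     -- sampling frequency (Hz); c -- sound velocity (m/s)
    xs, zs -- 1D arrays of image pixel coordinates (m)
    """
    n_elem, _, n_samp = ascans.shape
    image = np.zeros((len(zs), len(xs)))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            # one-way time of flight from every element to the pixel (x, z)
            tof = np.hypot(elem_x - x, z) / c                  # shape (N,)
            # round-trip time for every tx/rx pair -> sample index
            idx = np.rint((tof[:, None] + tof[None, :]) * fs).astype(int)
            valid = idx < n_samp
            tx, rx = np.nonzero(valid)
            # coherent sum of all contributing A-scan samples
            image[iz, ix] = ascans[tx, rx, idx[valid]].sum()
    return image
```

With a 16-element array, ascans holds the 256 A-scans mentioned above, and every pixel of the resulting image is, in effect, focused both in transmission and in reception.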
Semi-nondestructive Methods
Some so-called destructive tests, such as hardness testing, where an indenter penetrates the part, have only a minor impact and can therefore in many cases be considered nondestructive. This is true for very small indenters (e.g., micro- or nano-indentation). Standard hardness testers are laboratory instruments that require preparation of specimens; however, hardness tests using mobile hardness testers can be considered semi-nondestructive.
Fig. 43 Comparison of phased array images of carbon-fiber materials showing the obvious benefits of sampling phased array with inverse phase matching [55]
Similarly, removing small parts from a concrete structure and analyzing them in the laboratory by typical nondestructive methods (NMR, ultrasound, X-ray) is considered to be NDE for civil engineering.
NDE Tasks
NDE is a multidisciplinary approach with significant contribution to the quality of products and the safety of machines, production, and transportation systems along the value chain. Figure 44 illustrates the interdisciplinary character of NDE. The most common applications of NDE to ensure the quality of products are:
• Inspection of the raw materials.
• Inspection of raw products like forgings or castings.
• Inspection after secondary processes like machining, grinding, heat treating, or plating.
For quality and safety, and because of product warranty regulations, NDE is applied at the final product stage before delivery to the customer. The integration of this inspection into a cyber-controlled production of individual components tailored to the needs of the customer is a challenge for NDE 4.0. Once products are in service, NDE is used for regular inspections and maintenance. This is very important for safety-relevant products such as pressure vessels, pipelines, railway systems, automobiles, and aircraft.
Fig. 44 NDE is an interdisciplinary approach with multiple applications
[Fig. 44 diagram labels: Materials Engineering; Physics; Test-object; NDE Principle; Instrumentation; Facilities; NDE; Condition Monitoring: determination of changes in structure, prediction of property variations; Quality Assurance: evaluation of structure and properties of products, comparison with quality criteria; Process Control: determination of fabrication-process parameters, estimation of properties of the final product.]
The traditional NDE philosophy was based on:
• Testing numerous similar components.
• Trained inspectors.
• Following test instructions.
• Using certified instruments.
• Making decisions based on inspection rules and standards.
• Validation based on the POD concept.
This philosophy will not satisfy future challenges, where the printing of individual components or even an entire aircraft turbine is envisioned. Decisions have to be made based on knowledge about the load and strength of a component, the service history (component lifetime files), the aging processes that might modify material properties, and the NDE inspection results. This has been termed a “Machine Doctor” [59]. Under Industry 4.0 conditions, NDE for product and process development, that is, NDE for research and development, will gain significant importance. Applying NDE can replace expensive and time-consuming destructive tests such as, for example, cross sectioning. Examples are X-ray computer tomography for 3D printed parts or ultrasound scattering for the evaluation of material microstructure. The most important task for NDE applications is to detect or visualize defects. Examples are:
• Cracks.
• Pores and cavities.
• Inclusions.
• Corrosion defects.
• Surface irregularities and roughness.
and other types of defects. Volumetric methods such as computer tomography or optical shape recognition are not only used for detecting and quantifying internal defects but also for 3D metrology and for creating volumetric images or data sets of components.
The second group of tasks is materials characterization. This includes:
• Measurement of physical properties, such as electric or thermal conductivity, using eddy currents or thermal wave techniques.
• Estimation of mechanical properties, such as the elastic modulus from the sound velocity (a minimal worked example follows Fig. 45), or hardness and yield strength by empirical correlations to electric and magnetic properties, including Barkhausen noise or parameters of the magnetic hysteresis [4].
• Measurement of residual stresses or texture through X-ray or neutron diffraction. Other options are accurate sound velocity measurements or micromagnetic techniques.
• Microstructure characterization, usually by acoustic scattering or by correlation to physical properties.
The concept of materials characterization by NDE techniques is illustrated in Fig. 45.
[Fig. 45 diagram labels: Structure of Materials; Material Science; Solid State Physics; Mechanical Properties; Empirical correlation; Physical Properties; Destructive Testing; Microscopic Techniques; Nondestructive Testing.]
Fig. 45 Illustration of the concept of materials characterization for mechanical material properties by NDE methods
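Picking up the sound-velocity route listed above (the worked example promised there): for an isotropic material, the longitudinal velocity, the shear velocity, and the density together determine Poisson’s ratio and Young’s modulus. A minimal sketch; the numerical values are typical textbook values for steel, assumed here purely for illustration:

```python
def elastic_constants(v_l, v_s, rho):
    """Isotropic elastic constants from ultrasonic velocities.
    v_l, v_s in m/s, rho in kg/m^3; returns (Poisson's ratio, Young's modulus).
    Uses nu = (v_l^2 - 2 v_s^2) / (2 (v_l^2 - v_s^2)) and E = 2 G (1 + nu),
    with the shear modulus G = rho * v_s^2."""
    nu = (v_l**2 - 2 * v_s**2) / (2 * (v_l**2 - v_s**2))
    E = 2 * (rho * v_s**2) * (1 + nu)
    return nu, E

# Typical values for steel (assumed, for illustration only)
nu, E = elastic_constants(v_l=5900.0, v_s=3230.0, rho=7850.0)
print(f"Poisson's ratio: {nu:.3f}, Young's modulus: {E / 1e9:.0f} GPa")  # ~0.29, ~210 GPa
```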
Fig. 46 Comparison of techniques used for characterization of surface layers
Characterization of surface layers and surface heat treatments, such as case depth measurement, is one of the most important tasks in industry. A special chapter in this book will summarize these techniques; for a comparison of techniques, see Fig. 46. The most successful approach for the characterization of surface layers is the measurement of backscattered ultrasound. This works because the coarse grain structure of the base material scatters the ultrasound more strongly than the hardened surface layer. The backscattered signal can be detected in pulse-echo mode when the average grain size increases. This is illustrated in the last row of Fig. 46. Important tasks for materials characterization are also NDE methods that quantify aging of materials by fatigue or creep. Present approaches mostly employ acoustic or electromagnetic methods or the measurement of heat dissipation using thermography [60, 61]. Another important task is material analytics, based mostly on spectroscopic methods. The most common in NDE is X-ray fluorescence, where a broadband X-ray spectrum stimulates characteristic radiation that allows identification of the chemical elements in the specimen and a rough estimate of their concentration. Because both the stimulating radiation and the characteristic radiation are X-rays with the ability to penetrate the material, this is a method for volumetric analysis, different from energy-dispersive X-ray spectroscopy (EDS) in a SEM [62].
Quantification of NDE and Decision-Making
The previous sections discussed some of the basic NDE techniques, focusing primarily on the physics of the methods and touching on their applications. NDE, however, involves other issues that are equally important. These include, but are not limited to:
• Handling of NDE data and NDE results.
• Statistical methods and POD (a minimal sketch follows this list).
• Material properties and aging processes.
• Accreditation of NDE labs.
• Certification of personnel and instruments.
• NDE education and training.
• NDE research and development.
Over the last couple of decades, we have seen significant digitalization of NDE; the new era is called NDE 4.0, and this handbook provides substantial content relevant to this new trend. The aspects listed above are at the heart of the present handbook and are discussed elsewhere within it, in the context of NDE 4.0.
References
1. NDE Resource Center on the Web: https://www.nde-ed.org/GeneralResources/IntroToNDT/GenIntroNDT.htm
2. Wolter B, Dobmann G. Nuclear magnetic resonance as a tool for the characterization of concrete in different stages of its development. In: Schickert G, Wiggenhauser H, editors. Non-destructive testing in civil engineering 1995, vol. 1. Berlin; 1995 (DGZfP-Berichtsbände 48.1). p. 181–188.
3. Bloem P, Greubel D, Dobmann G, Lorentz OK, Wolter B. NMR for non-destructive testing of materials. In: Saarton LA, Zeedijk HB, editors. Proceedings of the 5th European conference on advanced materials and processes and applications, vol. 4: Characterization and production/design: EUROMAT 97, Maastricht, 21–23 April 1997. Zwijndrecht: Netherlands Society for Materials Science; 1997. ISBN 90-803513-4-2. p. 135–138.
4. Altpeter I, Tschuncky R, Szielasko K. Electromagnetic techniques for materials characterization. In: Hübschen G, Altpeter I, Herrmann H-G, editors. Materials characterization using nondestructive evaluation (NDE) methods. ScienceDirect; 2016. https://www.sciencedirect.com/science/article/pii/B9780081000403000080
5. Schreiber J, Meyendorf N. New sensor principles based on Barkhausen noise. In: Proceedings of SPIE 6530, Sensor systems and networks: phenomena, technology, and applications for NDE and health monitoring; 2007. 65300C (10 April 2007). https://doi.org/10.1117/12.717214
6. Arunachalam K, Melapudi VR, Udpa L, Udpa SS. Microwave NDT of cement-based materials using far-field reflection coefficients. NDT&E Int. 2006;39:585–93.
7. Park J, Nguyen C. An ultrawide-band microwave radar sensor for nondestructive evaluation of pavement subsurface. IEEE Sensors J. 2005;5:942–9.
8. Kharkovsky S, Akay MF, Hasar UC, Atis CD. Measurement and monitoring of microwave reflection and transmission properties of cement-based specimens. IEEE Trans Instrum Meas. 2002;51:1210–8.
9. Mukherjee S, Tamburrino A, Haq M, Udpa S, Udpa L. Far field microwave NDE of composite structures using time reversal mirror. NDT&E Int. 2018;93:7–17.
10. Stuchly M, Stuchly S. Coaxial line reflection methods for measuring dielectric properties of biological substances at radio and microwave frequencies – a review. IEEE Trans Instrum Meas. 1980;29:176–83.
11. Travassos XL, Vieira DAG, Ida N, Vollaire C, Nicolas A. Characterization of inclusions in a nonhomogeneous GPR problem by artificial neural networks. IEEE Trans Magn. 2008;44(6):1630–3.
12. Yang Y. Development of a real-time ultra-wideband see through wall imaging radar system. PhD dissertation, University of Tennessee, Knoxville, TN; 2008.
13. Yang X, Zheng YR, Ghasr MT, Donnell KM. Microwave imaging from sparse measurements for near-field synthetic aperture radar. IEEE Trans Instrum Meas. 2017;66:2680–92.
14. Ida N. Microwave and millimeter wave nondestructive testing and evaluation. In: Ida N, Meyendorf N, editors. Handbook of advanced NDE. Cham: Springer; 2019. p. 929–66. https://doi.org/10.1007/978-3-319-26553-7
15. Anlage SM, Talanov VV, Schwartz AR. Principles of near-field microwave microscopy. In: Kalinin SV, Gruverman A, editors. Scanning probe microscopy: electrical and electromechanical phenomena at the nanoscale, vol. 1. New York: Springer; 2007. p. 215–53.
16. Chen G, Hu B, Takeuchi I, Chang KS, Xiang XD, Wang G. Quantitative scanning evanescent microwave microscopy and its applications in characterization of functional materials libraries. Meas Sci Technol. 2005;16:248–60. https://doi.org/10.1088/0957-0233/16/1/033
17. Rosner BT, van der Weide DW. High-frequency near-field microscopy. Rev Sci Instrum. 2002;73:2505–25.
18. Razvan C, Ida N. Transmission line matrix model for detection of local changes in permeability using a microwave technique. IEEE Trans Magn. 2004;40:651–4.
19. Tabib-Azar M, Garcia-Valenzuela A, Ponchak G. Evanescent microwave microscopy for high resolution characterization of materials. Norwell: Kluwer; 2002.
20. Bakhtiari S, Ganchev S, Zoughi R. Open-ended rectangular waveguide for nondestructive thickness measurement and variation detection of lossy dielectric slabs backed by a conducting plate. IEEE Trans Instrum Meas. 1993;42:19–24.
21. Mazlumi F, Sadeghi SHH, Moini R. Interaction of an open-ended rectangular waveguide probe with an arbitrary shape surface crack in a lossy conductor. IEEE Trans Microwave Theory Tech. 2006;54:3706–11.
22. Qaddoumi NN, Saleh WM, Abou-Khousa M. Innovative near-field microwave nondestructive testing of corroded metallic structures utilizing open-ended rectangular waveguide probes. IEEE Trans Instrum Meas. 2007;56:1961–6.
23. Ida N. Open resonator microwave sensor systems for industrial gauging: a practical design approach. London: IET; 2018.
24. Li Y, Bowler N, Johnson DB. A resonant microwave patch sensor for detection of layer thickness or permittivity variations in multilayered dielectric structures. IEEE Sensors J. 2011;11:5–15.
25. Jonuscheit J. Terahertz techniques in NDE. In: Ida N, Meyendorf N, editors. Handbook of advanced NDE. Cham: Springer; 2019. p. 967–85. https://doi.org/10.1007/978-3-319-26553-7
26. Armstrong CM. The truth about terahertz. IEEE Spectr. 2012;49(9):36–41. https://doi.org/10.1109/MSPEC.2012.6281131
27. https://en.wikipedia.org/wiki/Terahertz_time-domain_spectroscopy
28. https://en.wikipedia.org/wiki/Continuous-wave_radar
29. May T, Heinz E, Peiselt K, Zieger G, Born D, Zakosarenko V, Brömel A, Anders S, Meyer H-G. Next generation of a sub-millimetre wave security camera utilising superconducting detectors. J Instrum. 2013;8:P01014. https://doi.org/10.1088/1748-0221/8/01/P01014
30. Luukanen A, Grönberg L, Helistö P, Penttilä JS, Seppä H, Sipola H, Dietlein CR, Grossman EN. Passive Euro-American terahertz camera (PEAT-CAM): passive indoors THz imaging at video rates for security applications. Proc SPIE. 2007;6548. https://doi.org/10.1117/12.719778
31. Dong J, Bianca Jackson J, Melis M, et al. Terahertz frequency wavelet domain deconvolution for stratigraphic and subsurface investigation of art painting. Opt Express. 2016;24(23):26972–85.
32. Fukuchi T, Fuse N, Okada M, et al. Topcoat thickness measurement of thermal barrier coating of gas turbine blade using terahertz wave. Electr Eng Jpn. 2014;189(1):1–8.
33. Catapano I, Soldovieri F, Mazzola L, Toscano C. THz imaging as a method to detect defects of aeronautical coating. J Infrared Millimeter Terahertz Waves. 2017;38(10):1264–77.
34. Ho L, Müller R, Gordon KC, et al. Terahertz pulsed imaging as an analytical tool for sustained-release tablet film coating. Eur J Pharm Biopharm. 2009;71(1):117–23.
35. Stoik CD, Bohn MJ, Blackshire JL. Nondestructive evaluation of aircraft composites using transmissive terahertz time domain spectroscopy. Opt Express. 2008;16:17039–51.
36. Cristofani E, Friederich F, Wohnsiedler S, Beigang R. Non-destructive testing potential evaluation of a THz frequency-modulated continuous-wave imager for composite materials inspection. Opt Eng. 2014;53(3). https://doi.org/10.1117/1.OE.53.3.031211
37. Krebber K. Fiber optic sensors for SHM – from laboratory to industrial applications. In: OSA conference “Applied Industrial Optics: Spectroscopy, Imaging and Metrology” (AIO), Seattle, WA, USA; June 2014.
38. López-Higuera JM, Rodriguez L, Quintela A, Cobo A. Fiber optics in structural health monitoring. In: Proceedings of SPIE – the international society for optical engineering, vol. 7853; November 2010.
39. Meyendorf N, Frankenstein B, Schubert L. Structural health monitoring for aircraft, ground transportation vehicles, wind turbines and pipes – prognosis. In: Paipetis AS, et al., editors. Emerging technologies in non-destructive testing V: proceedings of the fifth conference on emerging technologies in NDT, Ioannina, Greece, 19–21 September 2011. Boca Raton, FL: CRC Press; 2012. ISBN 978-0-415-62131-1. p. 15–22.
40. Minakuchi S, Takeda N. Recent advancement in optical fiber sensing for aerospace composite structures. Photon Sens. 2013;3:345–54 (Springer open access).
41. Shell EB, Khobaib M, Hoying J, Simon L, Kacmar C, Kramb V, Donley M, Eylon D. Optical detection of surface damage. In: Meyendorf NGH, Nagy PB, Rokhlin SI, editors. Nondestructive materials characterization – with applications to aerospace materials. Springer; 2003.
42. Günther H. Im Reiche Röntgens – Eine Einführung in die Röntgentechnik. Stuttgart: Kosmos – Gesellschaft der Naturfreunde, Franckh’sche Verlagshandlung; 1930.
43. von Ardenne M. Neue Widerstandsverstärker mit hohen Verstärkungsgraden. Radiotechnische Monatsschrift, year VI, issue 12/1929. Wien: Radio Amateur; 1929.
44. Oppermann M. Zerstörungsfreie Analyse- und Prüfverfahren zur Detektion von Fehlern und Ausfällen in elektronischen Baugruppen. Templin: Verlag Dr. Markus A. Detert; 2014.
45. Schumacher D. Beitrag zur ZfP von photonenzählenden und spektralauflösenden Röntgenmatrixdetektoren am Beispiel von Werkstoffverbunden. PhD thesis, TU Dresden; 2019.
46. Pohle R. Digital image processing for automated weld inspection. PhD thesis, TU Magdeburg; 1994.
47. Kastner J, Heinzl C. X-ray tomography. In: Ida N, Meyendorf N, editors. Handbook of advanced nondestructive evaluation. Springer; 2019. p. 1095.
48. Lehmann EH. Anlagen und Möglichkeiten für Neutronen-Imaging am PSI. 18. Sitzung Fachausschuss Durchstrahlungsprüfung der DGZfP, 25 November 2015, Paul Scherrer Institut; 2015.
49. Staab TEM, Zschech E, Krause-Rehberg R. Positron lifetime measurements for characterization of nano-structural changes in the age hardenable AlCuMg 2024 alloy. J Mater Sci. 2000;35:4667–72.
50. Coffey E. Acoustic resonance testing. In: Proceedings of the future of instrumentation international workshop (FIIW), 8–9 October 2012, Gatlinburg, USA; 2012.
51. Sankaran VH. Low cost inline NDT system for internal defect detection in automotive components using acoustic resonance testing. In: Proceedings of the national seminar & exhibition on non-destructive evaluation (NDE), December 8–10, 2011.
52. Kühnicke E. Elastische Wellen in geschichteten Festkörpersystemen: Modellierungen mit Hilfe von Integraltransformationsmethoden. Simulationsrechnungen für Ultraschallanwendungen. Bonn: TIMUG e.V.; 2001.
53. Wolter K, Bieberle M, Budzier H, Zerna T. Zerstörungsfreie Prüfung elektronischer Baugruppen mittels bildgebender Verfahren. Templin: Verlag Dr. Markus A. Detert; 2012.
54. Krestel E, editor. Bildgebende Systeme für die medizinische Diagnostik. 2nd ed. Berlin/München: Siemens Aktiengesellschaft; 1988.
55. von Bernus L, Bulavinov A, Jonet D, Kroning M, Dalichov M, Reddy KM. Sampling phased array – a new technique for signal processing and ultrasonic imaging. ECNDT 2006 – We.3.1.2; 2006.
56. Tweedie A, O’Leary RL, Harvey G, Gachagan A, Holmes C, Wilcox PD, Drinkwater BW. Total focussing method for volumetric imaging in immersion non destructive evaluation. In: 2007 IEEE Ultrasonics Symposium proceedings; 28–31 October 2007.
57. Corl P, Kino G. A real-time synthetic aperture imaging system. In: Acoustical imaging. Springer; 1980. p. 341–55.
58. Jensen JA, Nikolov SI, Gammelmark KL, Pedersen MH. Synthetic aperture ultrasound imaging. Ultrasonics. 2006;44:e5–e15.
59. Meyendorf NG, Heilmann P, Bond LJ. NDE 4.0 in manufacturing: challenges and opportunities for NDE in the 21st century. Mater Eval. 2020;78(7).
60. Dobmann G, Meyendorf N, Schneider E. Nondestructive characterization of materials: a growing demand for describing damage and service-life-relevant aging processes in plant components. Nucl Eng Des. 1997;171(1–3):95–112.
61. Meyendorf NGH, Rösner H, Kramb V, Sathish S. Thermo-acoustic fatigue characterization. Ultrasonics. 2002;40(1–8):427–34.
62. Beckhoff B, Kanngießer B, Langhoff N, Wedell R, Wolff H, editors. Handbook of practical X-ray fluorescence analysis. Berlin: Springer; 2006.
3
History of Communication and the Internet Nathan Ida
Contents
Introduction
From the Beginning to ARPANET
Developments in Cellular Communications
Growth of Communication Networks
The World Wide Web
Communication in NDE
Role of Communication Networks in NDE: The Road to 5G and Beyond
Summary: Today and Tomorrow
Cross-References
References
Abstract
In the context of NDE and in particular NDE 4.0, a historical perspective is needed to understand why and how we got here and, in fact, what NDE 4.0 is all about. There are different ways to describe or define NDE 4.0, but the one adopted in this work, that is, that NDE 4.0 is a “Cyber-physical Non-destructive Evaluation System,” is most revealing: that of a system that is driven by and is intimately connected with developments in connectivity and software tools enabled by the digital revolution. And this digital revolution, in the midst of which we conduct our work, is that of communication and the Internet.

Keywords
The Internet · World Wide Web · Communication · Communication Networks · Cellular Networks · Internet of Things
Introduction
Communication has a long history and one can go back as far as one wishes. However, for the purpose of this discussion, communication starts with wire and wireless communication in the nineteenth century. Of course, nothing happens in a vacuum, and the developments that led to communication as we know it were important. These will not be discussed here for the sake of brevity and because they contribute little to the understanding of the issues associated with NDE 4.0. The shift to electric methods that led to the telegraph and early wireless systems was, at the time, revolutionary. But, looking back, the developments in communication following these monumental discoveries can only be described as an evolution – it took more than 150 years from the first attempts at wire communication and over 100 years from the first wireless links to the rise of the Internet. By contrast, the Internet may be viewed as a true revolution: it not only changed our concepts of what is possible but did so in a very short period of time, to an unprecedented level of sophistication and a truly global reach. It also took communication into uncharted territories out of which grew new concepts in human interactions as well as unprecedented concerns of security and safety. Of course, there were many developments and trends that contributed along the way. It is not a stretch to say that the invention of the telegraph led to the telephone and wired voice communication or that Hertz’s discovery of electromagnetic waves led to radio communication. Similarly, cellular communication followed as a development of both. The Internet rose as a confluence of all previous technologies and new, enabling developments in digital electronics, computer science, and data handling.
From the Beginning to ARPANET
Since communication is fundamental to human existence, one cannot pinpoint a beginning to the concept. Nor is any timeline unique or complete – many concepts have evolved separately, sometimes in isolation, sometimes in competition, converging to similar outcomes. For example, it is common to attribute the “invention” of mail to the ancient Persians, some 2,500 years ago, but it is certain that other kingdoms and empires used some form of communication to administer their lands and their militaries, some perhaps much earlier than that. Similarly, the invention of the telegraph is often attributed to Joseph Henry (in 1835) but, at about the same time, Edward Davy was working on the same concepts and independently came up with the same solution. The timeline that follows is limited to those events in relatively recent history that contributed to the development of modern communication (especially wireless communication) and the Internet. Even so, this is, to an extent, a personal view. Events and developments that may seem unimportant, sometimes all but forgotten, can lead to others that are central to reaching critical milestones or forks in the road to other developments. In the present discussion, one focusing on the “electric” concept of communication, it all starts in 1820 with the discovery of the link between current and magnetism by Hans Christian Oersted. This led directly to the development of the telegraph in 1835 [1–6]. The simple possibility of operating an electromagnet over a line using rather low currents exposed the need for a way of translating human
language to that of the simple on-off operation of the relay. Thus the origin of codes and in particular digital codes. The first code was the Morse code, introduced in 1835 (and patented in 1837) by Samuel Morse [7]. It is the forerunner of digital codes and allowed practical communication, first on telegraph lines but later on wireless links as well. The on/off function of a key, combined with modulation of the width of the “on” period (short and long pulses), was sufficiently simple for the low bandwidth of early systems and sufficiently complex to translate the entire alphabet and numerals into the code. Other codes followed, an example being the Baudot code (Emile Baudot, 1870), which mapped the alphabet onto 5 bits. In itself it was only used briefly, but it eventually led to the ASCII code that is used to this day (the term Baud is in acknowledgment of this important invention and its inventor) [8]. The telephone as a voice communication device had to wait until 1876 for the invention of the microphone by Alexander Graham Bell [9, 10]. Many improvements followed but the fundamentals of wired communication have changed little since then. Wireless communication was not far behind. The publication of Maxwell’s equations (James Clerk Maxwell, 1873) [11] was at first only of interest as a theory, but with the experiments of Heinrich Hertz in 1887 [12, 13], the idea of electromagnetic waves and their utility was proven. But even before Maxwell’s theory was published, the basics of wireless communication were demonstrated and patented (Mahlon Loomis, 1865) [14, 15]. It was a short hop from there for the idea of the mobile telephone to be born (first attempts can be traced to 1880 [9], with practical implementation by 1915) [16] and for transatlantic radio transmission (Guglielmo Marconi, 1901) [17, 18]. Developments followed, enabled by other inventions such as the amplifying vacuum tube (De Forest, 1907) [19, 20], the emergence of radio stations, television, and many others but, in effect, the fundamentals were all in place by 1900. There were of course many other developments and inventions around these basic communication technologies. For example, the principles of magnetic recording were discovered in 1888 by Oberlin Smith [21] and demonstrated in 1900 by Valdemar Poulsen [22]. The development of photography (starting with the development of photographic film by George Eastman in 1885) [23, 24] and that of the iconoscope (the first electronic TV camera, Vladimir Zworykin, 1923) [25] can be seen as leading to the modern digital cameras of today. In any attempt to describe the development of technologies and inventions, one is bound to do injustice to many worthy, competing, and sometimes even preceding developments, either because of accepted “history,” poor documentation or, unfortunately, due to language barriers. Indeed, many would argue that the contributions of people like Heinrich Barkhausen to modern wired and wireless telegraphy, including radar, are equally important [26]. Similarly, the work of Alexander Stepanovich Popov, whose 1894 coherer led to work on detection of lightning strikes in 1895 but also to radio transmission as early as 1896, is barely mentioned in most histories of communication [6, 27]. In Russia he is viewed as the inventor of radio. Another example, out of very many, is the work of Manfred von Ardenne on television, raster displays, scanning microscopy, and many others [25, 28, 29].
Although it is not possible in the context of this work to do justice to the work of all those that led to where we are, it is important to at least acknowledge that the prevailing history is by no means comprehensive.
In addition to the developments above, and with the rise in technology, the need for regulation led to the establishment of the Federal Radio Commission (1928), which eventually evolved into the Federal Communications Commission (FCC), and of the Comité International Spécial des Perturbations Radioélectriques (CISPR) in France, both in 1934. These regulatory agencies and others (both national and international) had critical impact on future developments in communication and the Internet. WWII spurred a flurry of developments that can be seen, certainly in retrospect, as leading to the Internet. These include advances in communication and radar but, more importantly, the development of the first digital computers. The best known is the ENIAC (Electronic Numerical Integrator and Computer), completed in 1946 and used to calculate trajectories for artillery shells [30, 31]. But there were others, including the Atanasoff-Berry computer (1942), the EDVAC (Electronic Discrete Variable Automatic Computer, 1949), the Colossus (the first programmable electronic digital computer, 1943), and the Zuse Z3 (the first program-controlled, Turing-complete computer, 1941) [32]. As starting attempts that eventually led to today’s computers, it is only natural that these early computers explored various means of accomplishing computation. For example, the ENIAC was a decimal computer, whereas the Zuse approach was electromechanical, using telephone relays in the switching functions. Looking back, some of the developments look rather primitive but, nevertheless, these attempts were necessary steps in the development process that led to the modern digital computers. Although some of the early computing machines were programmable on the machine code level (through switches – the ENIAC, or through perforated tape – the Zuse), true programmability could only come later, through the development of the high-level programming languages that appeared in the 1950s [33–35]. Now that wire and wireless communication technologies were available and developing quickly, the availability of digital computation meant that it was only a matter of time before these would combine in a number of ways. By the 1950s, after the invention of the transistor (Bell Labs, 1947) [36, 37] and the integrated circuit (Texas Instruments, 1958) [38], the hardware tools needed were all available for the next steps in communication. But there were other components that needed to fall in place before the Internet could become reality. Two of these are the concept of communication networks and that of packet switching. Another important contribution was the evolution and development of the infrastructure necessary to support networks and to do so at ever-increasing speeds and bandwidths.
Developments in Cellular Communications
In the classical communication domain, that of telephone communication, there were three important developments that had profound effects on the Internet. These are cellular communication, the replacement of copper with optical fibers, and satellite communication. One can add to these the cable TV network, although that network has largely transitioned from its original purpose of delivering TV programming on coaxial cables to a more general optical fiber network for general communication. It should also be recognized that some of the developments in
communication were influenced by the rise of computers and the early attempts at internetworking. Early attempts at cellular communication can be traced back to the period after WWII, and specific proposed systems came as early as 1947 [39–44]. These early attempts were hampered by a number of issues including equipment and regulations. By 1979, the first commercial network was introduced in Tokyo as an analog network and is known as the 1G network [45, 46]. Other limited 1G networks existed before that in Chicago and Dallas and also expanded to wider area networks. The first digital cell network, known as 2G, was introduced in 1991, first in Finland, followed by other Nordic countries. By then the main concepts, including roaming, had become common, and the mobile units evolved primarily through reduction in size. The first smartphones were available starting in 1992, but it was not until 2001 that the smartphone, which by then resembled today’s cell phones in size and some of its functionality, was capable of connecting to the Internet through the then available 3G network (in Tokyo). As the cellular network evolved and improved, primarily through higher speeds, coverage, and subscribers, its share of Internet traffic increased considerably through the availability of new services such as streaming and the cloud. The development of mobile services can best be seen through the evolution of the cellular networks from the analog 1G network to the current 5G network. These are summarized in Table 1. The telephone network has also undergone considerable changes in attempts to adapt to other modes of communication. This required increases in speed and in bandwidth, both of which were afforded by the introduction of optical fibers to replace copper cables. Although the idea dates back to 1963 [47], it only became practical once optical fibers with acceptable losses were developed in 1970 [48]. Initial acceptance was slow, primarily because of cost and technical problems, including the relatively high loss of early fibers and the difficulty of splicing fibers. But eventually these were resolved with the development of the single mode fiber in 1981 and improved performance of ancillary systems. The gradual reduction in losses with each generation of optical fibers increased the bandwidth and allowed higher speeds. For example, third-generation fibers were capable of 2.5 Gb/s whereas second-generation fibers were limited to 1.7 Gb/s [49]. Improvements in methods of amplification and data transmission raised that to some 10 Tb/s by 2001. The remaining difficulty was the physical laying of cables. That came first through transatlantic optical cables to carry voice and data, followed by the gradual replacement of copper wires with optical links to keep up with the data speeds and bandwidths required by newer paid services such as video streaming and general Internet connection. The main bottleneck in this process was and remains the end-connection, that is, the connection to the consumer, especially households. Initially, the consumers had to contend with a variety of modems that could connect to existing copper wiring, including PC modems and, later, digital subscriber loop (DSL) modems. With the transition from land-based telephones to Voice over Internet Protocol (VoIP), telephone calls and data transfer shifted initially to cable, since Cable TV was available in many homes, some as early as the 1960s.
This resulted in a hybrid broadband fiber-cable system capable of speeds compatible with optical networks.
Table 1 The evolution of cellular networks

Network                   Rollout  Main standards (3)      Services introduced (4)                             Speed (5)
1G (1)                    1979     Analog                  –                                                   –
2G (2), 2.5G, 2.75G       1991     GSM, D-AMPS, cdmaOne    SMS                                                 900 Kbit/s
3G, 3.5G, 3.75G, 3.95G    1998     UMTS, CDMA2000          Smart phones, mobile modems, wireless telephony,    42 Mbit/s
                                                           mobile Internet access, video calls, TV streaming
4G, 4.5G                  2009     LTE, WiMAX              VoIP, embedded mobile Internet access, gaming,      975 Mbit/s
                                                           video streaming, HDTV, video conferencing,
                                                           3D television, cloud computing, IoT
5G                        2019     5G NR                   IoT, sensor networks, reduced latency,              10 Gbit/s
                                                           improved coverage
6G (6)                    2021*    6G hyperconnectivity    XR, digital twins, connected machines*              1000 Gbit/s*

* Initial steps toward specifications, ideas, expectations, and speculations as of the time of this writing.
Notes:
1. The analog 1G network is now obsolete.
2. The interim terms 2.5G, 2.75G, 3.5G, etc., indicate major changes in the corresponding networks through the introduction of standards and increases in speed.
3. The standards listed are as follows: Global System for Mobile Communications (GSM), Digital Advanced Mobile Phone System (D-AMPS), Code Division Multiple Access – IS-95 (cdmaOne), Universal Mobile Telecommunications System (UMTS), CDMA2000 – second-generation cdmaOne, Long-Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), and 5G New Radio (5G NR).
4. The list shows noteworthy services introduced in addition to those available in earlier generations.
5. Maximum download speeds. Actual speeds are about 10–20% of maximum speeds.
6. 6G is a nascent technology at the time of this writing, but it is a natural progression from previous generations. It is expected to roll out commercially by 2028.
Starting in 2005, many providers started offering optical fiber connections either to the node at which the consumer was connected or, in some cases, directly to the consumer, allowing ISPs to provide new or improved services such as higher-speed Internet and better streaming. With the changes in the network also came changes in regulations, both in terms of content and in terms of equipment. Communication satellites were originally conceived as transponders, that is, they reflect back uplink communication either directly (passive satellites) or after amplification (active satellites). As such, they were mere nodes in a communication network. The first satellite (Sputnik 1) was launched in 1957, but the first true communication satellite had to wait until 1962 with the launch of Telstar in a low orbit, followed in 1963 by Syncom 2 in geosynchronous orbit. Today there are some 2000 communication satellites in various orbits, from low earth orbits (LEO satellites),
medium earth orbits (MEO satellites), and geosynchronous orbit (GEO satellites). Many of these are specifically designed to carry services such as TV programming, phone communication, or weather data. Perhaps more relevant to the discussion here are the satellite constellations, designed as networks for global communication. These operate in many ways similar to the cell systems in that the communication between two points on earth may pass through a number of satellites. Intended for true global communication, they are still the only way to communicate with remote areas that are not covered by the land-based cellular network. These are satellites in low orbit and, because they move fast and are only visible over a smaller area, they are launched and positioned in clusters. Examples are Iridium (66 satellites, deployed between 1997 and 2002) [50] and Globalstar (52 satellites, launched between 1998 and 2000) [51]. These constellations provide mobile voice and data and are particularly important for communication over the oceans and other remote areas such as the poles and Antarctica. Another, newer constellation of satellites is the Starlink constellation [52]. At the time of this writing the constellation consisted of 995 small satellites in low orbit (LEO), but it is intended to comprise almost 12,000 satellites and perhaps many more. The constellation is built specifically for satellite Internet access with envisioned global coverage.
Growth of Communication Networks
The introduction of commercial telephone exchanges in 1877 created the first communication network. The first exchanges were manual but were soon followed by automated exchanges [53–55]. But, even with later developments, it was a rather simple point-to-point network, that is, a user connected to any other user through a node (the exchange). Nodes in the system connected to other nodes but overall it was a linear network, that is, the path between users was unique in that any particular user was only connected to one exchange and, more importantly, once a link was established, the link was dedicated for the duration of the session. Interruption at the exchange meant the user could not connect at all. Although an exchange might be connected to more than one other exchange, this was a limited interconnected grid. As well, the telephone network was tightly controlled and regulated by commercial and regulatory entities. Some of these regulations were, later on, detrimental to its use for data communication. For example, it was illegal to connect anything to the lines. This led to the need for acoustic modems in early use of the network for data transfer. A number of developments occurred starting in the early 1950s that led to new concepts in communication, leading to what we typically associate with the Internet but also to cellular communication, Wi-Fi, and personal communication devices (PCD). Given that the telephone network relied on circuit switching, it also meant that users were sharing the circuit for the duration of the connection regardless of how much data or time they actually used for communication. Even though this approach largely sufficed for telephone communication, it was entirely insufficient for data
transfer, primarily because of its very limited bandwidth. The introduction of computers and the need to connect them together exposed the need for new ideas in networks and connectivity. The first attempts at data transfer between computers in 1965 using telephone modems pointed to a number of issues [56], including the bandwidth of the network, in addition to the basic inefficiency of the network’s structure. Even earlier than that, ideas of large interconnected networks, research into their performance, time-sharing of networks, and a wholly new concept of packet switching [57–60] emerged. The introduction of packet switching in particular [61–63] made the idea of computer networks a reality. Initial experiments used dial-up modems in attempts to use the existing telephone network. The conclusion from these experiments pointed to the inadequacy of the existing network in terms of speed and bandwidth and to the need for packet switching. Early networks were proposed with various goals [53], but it was not until 1967 that a large-scale, decentralized computer network emerged, whose starting point was the ARPANET [64–66]. Looking back, it seems a modest undertaking; the network was slow (e.g., the initial speed was 2.4 kbps) and limited in extent (the first test implementation in 1969 counted four nodes), but it was the beginning and it evolved quickly. ARPANET and subsequent developments were funded by ARPA (Advanced Research Projects Agency), which would later become DARPA (Defense Advanced Research Projects Agency). The primary goal of funding the network was to connect and share the few available large research computers among many researchers at diverse geographic locations. Shortly after its introduction, control of the network passed to the Defense Communication Agency (DCA) but continued to connect both civilian (primarily universities) and military nodes until 1984, when the nonmilitary part of the network was handed over to the National Science Foundation (NSF) to become the NSFNET, whereas the military nodes formed the MILNET [64]. The NSFNET was restricted to research and education and precluded any commercial use. As a result, many competing networks evolved for commercial use. The original NSFNET was built as a backbone with links or gateways to which networks connected. Initially there were only six gateways or nodes on the backbone, at a nominal link speed of 56 kb/s. By 1994, the NSFNET backbone had grown to 21 nodes at 45 Mb/s that interconnected over 50,000 networks across the globe [66, 67]. The need to expand to broader civilian use and include commercial entities led to the defunding of the NSFNET backbone. The NSF’s role was taken over by commercial Internet Service Providers (ISPs), which support the Internet to this day. This transition was inevitable simply because the ever-growing information load had to be carried on physical networks, and connection of local networks to the backbone required equipment that had to be developed and, necessarily, paid for. This required an infrastructure that could only be built and maintained by commercial enterprises. For these early networks, including ARPANET and NSFNET, to evolve into the Internet, a few other issues had to be resolved. First, it was understood fairly early that multiple networks (local or wide-area, privately or government funded) would inevitably exist and that these networks must be interconnected [68].
These disparate networks may be entirely different from each other: some may be wired, some may be wireless, and they may operate with diverse protocols and speeds. Initially, these
were connected through traditional circuit switching but, eventually, the need to interconnect individual diverse networks (the “internetworking”) became an all-important problem, leading to the idea of an open architecture of networks interconnected through gateways and routers – the Internet. This open structure imposes no restrictions on the internal structure and operation of local networks and has no central control. Packets are passed through routers and gateways, and transfer is negotiated at the network-router interface based on established protocols such as TCP/IP [68]. Various pressures on the Internet started before it actually existed the way it is known today. Some of these were commercial whereas others were geared toward universal access. Since the roots of the Internet were in the research establishment and the initial backbone was funded and controlled by the NSF, commercial activity had to exist outside the network. To do so, a number of private enterprises evolved, creating private and competing networks and network protocols. These networks were often paid services and geared toward more universal and open access. Similarly, because NSFNET was a US-based system, and before it could be accessed internationally, other organizations and governments attempted to build networks with various degrees of success. Some of these networks were closer to today’s local area networks whereas others were of much larger extent and can qualify in the same category as NSFNET. It is impossible and unnecessary to discuss all of these, but a few samples can be useful, especially since many of the concepts and terms in widespread use originated in these efforts. It should be mentioned at the outset that, in time, many of these competing and parallel networks either ceased to exist, were absorbed into other services, or became integral parts of the Internet. Even before ARPANET, the National Physical Laboratory (NPL) in the United Kingdom developed a local area network based on packet switching, started in 1965 and operational in 1969 (dubbed simply NPL). Although a limited effort, it was the first such network and introduced the predecessor of today’s router. In many ways, the credit for the “invention” of the Internet should go to this endeavor, in spite of its limited extent [66, 69, 70]. Another example of early attempts at networking based on packet switching is the CYCLADES network, built in France as a research network dating back to 1973, whose intent was to explore alternatives to ARPANET for research purposes. It was specifically designed for internetworking and introduced the concepts of protocols that led to the Internet Protocol (IP) in use to this day [61, 71, 72]. Following the adoption of the X.25 packet switching protocol by the ITU, a number of public networks evolved in Europe. An early example is SERCnet, introduced in 1974 in the United Kingdom to serve academia and research facilities, replaced later by the Joint Academic Network (JANET) [73], a network that exists to this day. Other examples are the CERNET, developed by CERN starting in 1984, the AARNet in Australia, formed in 1989, the Japanese JUNET, started in 1984, and TECHNET in Singapore. These operated with a variety of protocols although, as a rule, they eventually gravitated toward the TCP/IP protocol.
Governments and government agencies developed a number of networks for specific uses. Examples are the NASA Science Network (NSN) and the Space Physics Analysis Network (SPAN), which merged in 1989 into the NASA Science Internet (NSI), an international network. Another example is the Energy Science Network (ESNet) developed by the US Department of Energy. Some of the early networks were developed for educational purposes or with local interests in mind and should properly be viewed as local area networks. Examples are the MERIT (Michigan Educational Research Information Triad), 1971 through 1980, NYSERNet (New York State Education and Research Network), serving New York State, and the PEN (Public Electronic Network) in Santa Monica in 1989, whose purpose was to link citizens to local government resources. A unique place in the development of networks and their connection to the Internet is held by the Minitel project [74]. It was an online service on telephone lines, started in 1980 in France, that eventually spread to more than a dozen countries around the world. It used dedicated text computer terminals and modems. It was a unique endeavor in that it preceded the World Wide Web but already allowed many of the functions later afforded by the introduction of the World Wide Web. It also allowed users to make online purchases, provided mailboxes, and even allowed chats. Given that most individual users in the United States and elsewhere outside educational and research institutions did not have access to the Internet until 1989, and full commercial access was not available until 1995, Minitel was at least 10 years ahead of what we now call the Internet. The service was discontinued in 2012. Corporations and the private sector had specific commercial interests in the development of networks. Being excluded from the early networks, they had to develop their own means of connectivity that were more open and that could, eventually, generate revenue streams. Looking back from today’s perspective, it is hard to even imagine that commercial interests were not part of the Internet from the very beginning. Yet, until 1995 there were various restrictions on the use of the Internet backbone for commercial and even for personal purposes. It is not then surprising that various networks evolved to serve these interests. Some early examples are USENET and UUNET but also TELENET. USENET (User Network) was developed starting in 1979, based on the Unix-to-Unix Copy (UUCP) protocol, to resemble the then available ARPANET. It offered mail, file transfer, discussions, and messaging, requiring only a local telephone connection. It was also one of the first networks to be used commercially. UUNET, started in 1987 to provide e-mail, access to software, and feeds to USENET, began as a nonprofit but quickly turned into an ISP providing access to the backbone. TELENET was a commercial enterprise from the very beginning, starting in 1975. Its main role was connectivity to the backbone network, services for which it charged subscription fees. It was eventually absorbed into the Sprint network. There were many others, and they allowed the development of early services that today are integral to the Internet. A good example is e-mail, which, in its very early existence, was a hybrid between electronic transfer and physical hard-copy delivery to users [75]. These services, which eventually led to many others, were first limited by protocols, availability of software, and, in particular, by the absence of any form of
uniform access method. This had to wait until the development and introduction of the World Wide Web.
The World Wide Web
The interconnecting of disparate networks to a backbone created the Internet. It is useful to look at the definition of the Internet as articulated by the Federal Networking Council (FNC) in 1995 [76–78]: According to that definition, “Internet” refers to the global information system that (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses, or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein. This is in fact the idea of a “Network of Networks” and deals with the infrastructure. It does not address the issue of content simply because content is an entirely separate issue. To get where we are today, the content component had to be addressed, that is, given the infrastructure, what can be communicated over it and how. This was done early in the development of the Internet as the World Wide Web. The World Wide Web (WWW) originated at CERN in 1989 and went public in 1993 [79, 80]. It is an information system in which documents, images, and other resources are identified by URLs (Uniform Resource Locators). These are accessible over the Internet through HTTP (HyperText Transfer Protocol). The information may be accessed through software applications (web browsers) and is published by other software applications (web servers). Although in the minds of many the Internet and the World Wide Web are synonymous, the World Wide Web is the information circulating on the Internet and is an entirely different component. Although the Internet can function (and did in its infancy) without the World Wide Web, it would be a totally different entity without it. The World Wide Web had a profound impact on the Internet. It allowed universal access by anyone with a computer and, later, with other devices such as smart phones, and gave rise to a vast array of services and commercial entities. Individual access to the Internet was now available to the global population, dot-com companies emerged to take advantage of this access, and an array of web browsers and search engines catered to all. Internet Service Providers (ISPs), some of which existed before the World Wide Web, now provided access to individuals and corporations and packaged various services including telephone access, television streaming, music, and almost any imaginable form of information, from text to the Internet of Things (IoT), whereby anything can be accessed on the Internet provided it has the proper interface and software [81]. The Internet of Things is of particular importance not only because it is ubiquitous but also because it is likely to reshape the Internet. The IoT is a network of connected smart devices capable of communicating and providing data at various
The Internet of Things is of particular importance, not only because it is ubiquitous but also because it is likely to reshape the Internet. The IoT is a network of connected smart devices capable of communicating and providing data at various levels of richness [82–85]. It is, however, understood to refer to devices that are not computing devices in the traditional sense. Whereas traditional access over the Internet was (and still is, to a large extent) dominated by interaction through keyboards or touchscreens, the IoT refers to devices that communicate in other ways: through voice, or initiated by sensors, schedules, and the like. That means that any device, provided it has the proper interface, can be brought onto the IoT. A coffee maker or a car, a building or an airplane is equally likely to be an IoT node; once connected, they become "smart devices" through the incorporation of sensors, software protocols, and communication interfaces. The ideas of home automation, connected vehicles, self-driving cars, remote monitoring, warehousing, asset management, self-sailing cargo ships, and military applications are only some of those that have been addressed. The number of devices that may be connected is practically limitless, estimated to be in the range of 30–50 billion in 2020 [86] and likely to increase substantially in the foreseeable future. Many of these devices are connected through existing Wi-Fi local networks, and others make use of short- and medium-range communication technologies including Bluetooth, ZigBee, and LoRaWAN (Long Range Wide Area Network). Others may be connected through the cellular network if conditions and cost justify its use. As with all aspects of the Internet, the Internet of Things offers new ways of interaction between humans and devices and between devices themselves, and it also introduces all the issues associated with this convenience, including safety and security, data integrity, cost, and ethics.
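As a sketch of what such a device-initiated report might look like in software (not from the chapter; the broker address and topic are invented, and the call pattern follows the widely used paho-mqtt 1.x client API, a common choice for IoT messaging):

```python
import json
import time

import paho.mqtt.client as mqtt  # third-party package: pip install "paho-mqtt<2"

client = mqtt.Client()
client.connect("broker.example.local", 1883)   # hypothetical on-site MQTT broker

# A sensor-initiated report: no keyboard or touchscreen involved.
reading = {"device": "coffee-maker-01", "water_temp_c": 91.4, "ts": time.time()}
client.publish("home/kitchen/coffee-maker", json.dumps(reading))
client.disconnect()
```

The device publishes when its own sensors or schedule trigger it, which is exactly the shift away from keyboard-driven interaction described above.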
Communication in NDE

The advancements in communication and the emergence of the Internet have led to fundamental changes in all aspects of life, in business, in industry, and, quite naturally, in NDE. While NDE cannot be seen as having initiated any new form of communication or as having had specific effects on developments in connectivity, it has closely followed, adapted, and adopted the various forms of communication and connectivity. In that respect, it tracks the so-called industrial revolutions in, as of now, four distinct periods designated NDE 1.0 through NDE 4.0. These periods have seen revolutionary changes in NDE just as they have seen revolutionary changes in industry. Whereas the history of the industrial revolutions and the parallel developments in NDE are described elsewhere in this handbook, it is useful to juxtapose the developments in NDE with those in communication. The four distinct periods are as follows [87, 88]:
NDE 1.0 is commonly designated as the period before 1900. It is noteworthy in that it saw the emergence of NDE, and it is characterized by the use of human senses and the traditional forms of communication typical of the period. It also saw the first attempts to improve on the basic senses through the introduction of simple tools and methods, again compatible with the technology of the period. In parallel, industry required new and improved tools for safety and quality monitoring, feeding the developments in NDE.
NDE 2.0 covers the period 1900 through 1950 (some extend it to 1960). This period saw the availability of tools beyond human perception, and for the first time NDE could probe into materials. Electrical, physical, and chemical knowledge led to the introduction of electromagnetic methods, including ET, MT, microwave, and infrared methods, as well as X-ray, gamma-ray, and, later, UT methods. Although communication was still in its classical wire and radio modes, the first signs of things to come started emerging, including computers and early methods of connectivity.
NDE 3.0 is typically viewed as covering the late 1960s through the early 2000s. It saw the introduction of digital tools, the use of digital computers, digital data storage, and computer networks, as well as automation and imaging methods. Later, toward the close of the century, newer ways of performing NDE, including robots and drones, as well as improved modeling and connectivity tools, became available. The Internet became the tool to adopt, and connectivity allowed remote and autonomous testing. It is generally accepted that many of the physical methods of testing matured during the NDE 3.0 period.
NDE 4.0: Although one cannot place a starting date on events while in their midst, it is generally accepted that it all started in 2011 with the introduction of the term Industry 4.0. Technologies developed within the context of Industry 4.0 are expected to enhance existing NDE tools and, in particular, to concentrate on data and data processing. Statistical analysis of NDE data is expected to provide clearer and deeper insights into reliability and to help with inspection performance, training, value proposition, safety, and economy. The concepts of interconnected cyber-physical systems, true networking, and the emphasis on data as a valuable product in itself are some of the innovations associated with NDE 4.0. These became possible through key enabling tools, including information digitalization, new interfaces, artificial intelligence, and machine learning, supported by the newly available 5G networks. Perhaps the more important aspect of NDE 4.0 is the promise of providing valuable feedback to industry for the purpose of improving product design. Many of the capabilities of NDE 4.0 were enabled by improvements in communication and connectivity, particularly the introduction of the 5G networks. There are, however, challenges that remain to be addressed, including those associated with additive manufacturing and the need for testing and evaluation of custom or small-batch manufacturing. Other challenges have to do with big-data handling and digital concepts such as digital twins and modeling.
Role of Communication Networks in NDE: The Road to 5G and Beyond

Communication networks, and in particular the evolution of the cellular networks, are the enablers of advancement in industry and in NDE. A cursory inspection of Table 1 shows the gradual introduction of services that enabled specific capabilities. For example, one had to wait for the introduction of 3G before mobile internet access was possible, or for 4G before IoT connectivity became a reality. Remote testing, drones, and distributed devices could not exist before the introduction of 4G. Similar
considerations apply to wired and optical networks, many of them tied to the speed of the networks and the available protocols. To emphasize the importance of networks, it is useful to look at the 5G network. In simplistic terms, one can view it as simply a faster mobile data exchange network that naturally succeeded the 4G network. It is, however, much more than that. It introduces the concepts of enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (URLLC), and extended mobility. These enable robust data communication over long distances with very low latencies. In addition, massive machine-type communication (mMTC) allows, for the first time, the connection of high-density distributed IoT devices. The 5G network also has the bandwidth necessary for high-speed, real-time remote inspection through a multitude of inexpensive devices. 5G is also a pointer to the future: it is expected that subsequent networks will improve on any shortcomings of 5G. Good examples are the speed of the network and the introduction of new standards and protocols. Specifically, the 6G network is expected to handle mostly machine-to-machine connectivity through the 6G Hyper-Connectivity protocol.
Summary: Today and Tomorrow

All aspects of communication and the Internet are evolving, and any reference to them can only be an instantaneous picture. Possibilities, equipment, and services that may seem difficult or unattainable at any given point in time may well be common shortly thereafter. Traffic on the Internet, in all its forms, is constantly increasing, and accessibility keeps improving. With the advent of 5G communication, many of the issues at the heart of Industry 4.0 and NDE 4.0 have become a reality. Many more are likely to be resolved in the future and, in fact, may become central points in the upcoming 6G technology and beyond. At the time of writing of this summary, many aspects of communication, education, commerce, and governance are conducted either entirely on the Internet or use the Internet as an aid. It is not far-fetched to imagine that in the near future some of these activities and others will only be accessible through the Internet, and much of that through the cellular networks. There are, however, difficult issues that have to be addressed. These include the problem of net neutrality, universal access, and costs, as well as privacy issues and user rights. Data security and data ownership are also of primary importance. In the fast development of networks and services, many of these important issues were either ignored or paid scant attention. They will have to be resolved, some through regulations, others through software and methodologies and, in extremis, by law. It is, however, likely that for the foreseeable future this will be a catch-up process in which technology comes first and its consequences are dealt with later.
Cross-References
▶ Basic Concepts of NDE
▶ Introduction to NDE 4.0
References
1. Fahie JJ. A history of electric telegraphy, to the year 1837. London: E. & F.N. Spon; 1884.
2. Marland EA. Early electrical communication. London: Abelard-Schuman Ltd; 1964.
3. Oslin GP. The story of telecommunications. Mercer University Press; 1992.
4. Wenzlhuemer R. The development of telegraphy, 1870–1900: a European perspective on a world history challenge. Hist Compass. 2007;5(5):1720–42. https://doi.org/10.1111/j.1478-0542.2007.00461.x.
5. Beauchamp KG. History of telegraphy: its technology and application. Institution of Engineering and Technology; 2001.
6. Huurdeman AA. The worldwide history of telecommunications. Wiley-Blackwell; 2003.
7. Lewis C. The telegraph: a history of Morse's invention and its predecessors in the United States. McFarland; 2003.
8. Ralston A, Reilly ED, editors. Baudot code. In: Encyclopedia of computer science. 3rd ed. New York: IEEE Press/Van Nostrand Reinhold; 1993.
9. Bruce RV. Bell: Alexander Bell and the conquest of solitude. Ithaca/New York: Cornell University Press; 1990.
10. Micklos J Jr. Alexander Graham Bell: inventor of the telephone. New York: Harper and Collins; 2006.
11. Maxwell JC. A treatise on electricity and magnetism. Oxford: Clarendon Press; 1873.
12. Baird D, Hughes RIG, Nordmann A, editors. Heinrich Hertz: classical physicist, modern philosopher. New York: Springer; 1998.
13. Hertz H. Electric waves: being researches on the propagation of electric action with finite velocity through space. Dover Publications; 1893.
14. Winters SR. The story of Mahlon Loomis – pioneer of radio. Radio News; 1922.
15. Appleby T. Mahlon Loomis, inventor of radio. 1967.
16. Bell Telephone System. The magic of communication. 1954.
17. Bondyopadhyay PK. Guglielmo Marconi – the father of long distance radio communication – an engineer's tribute. In: 25th European Microwave Conference; 1995. https://doi.org/10.1109/EUMA.1995.337090.
18. Marconi G. Wireless telegraphic communication: Nobel Lecture, 11 December 1909. In: Nobel Lectures, Physics 1901–1921. Amsterdam: Elsevier Publishing Company; 1967. p. 196–222.
19. Guarnieri M. The age of vacuum tubes: early devices and the rise of radio communications. IEEE Ind Electron Mag. 2012;6(1):41–3. https://doi.org/10.1109/MIE.2012.2182822.
20. De Forest L. The Audion: a new receiver for wireless telegraphy. Trans Am Inst Electr Eng. 1906;25:735–63. https://doi.org/10.1109/t-aiee.1906.4764762.
21. Smith O. Some possible forms of phonograph. Electr World. 1888;12(10):116–7.
22. Thompson JMT, editor. Visions of the future: physics and electronics. Cambridge: Cambridge University Press; 2001.
23. Rogers D. The chemistry of photography: from classical to digital technologies. Cambridge, UK: The Royal Society of Chemistry; 2007.
24. https://web.archive.org/web/20150823030506/http://www.kodak.com/ek/US/en/Our_Company/History_of_Kodak/Milestones_-_chronology/1878-1929.htm. Accessed 20 Oct 2020.
25. Abramson A. Zworykin, pioneer of television. University of Illinois Press; 1995.
26. https://en.wikipedia.org/wiki/Heinrich_Barkhausen
27. https://en.wikipedia.org/wiki/Alexander_Stepanovich_Popov
28. Television at the Berlin Radio Exhibition. Television, October 1931.
29. https://en.wikipedia.org/wiki/Manfred_von_Ardenne
30. Burks AW, Burks AR. The ENIAC: the first general-purpose electronic computer. Ann Hist Comput. 1981;3(4):310–89. https://doi.org/10.1109/mahc.1981.10043.
31. Moye WT. The Army-sponsored revolution. ARL Historian; 1996. https://web.archive.org/web/20170521072638/http://ftp.arl.mil/~mike/comphist/96summary/index.html. Accessed 21 Oct 2020.
32. Zuse K. The computer – my life. Berlin: Springer; 1993.
33. Rojas R, et al. Plankalkül: the first high-level programming language and its implementation. Technical report B-3/2000. Institut für Informatik, Freie Universität Berlin; 2000.
34. Sebesta RW. Concepts of programming languages; 2006.
35. Knuth DE, Pardo LT. Early development of programming languages. Encyclopedia of Computer Science and Technology. 1976;7:419–93.
36. Gertner J. The idea factory: Bell Labs and the great age of American innovation. New York: Penguin; 2012.
37. Riordan M, Hoddeson L. Crystal fire: the invention of the transistor and the birth of the information age. W.W. Norton & Company Limited; 1998.
38. Kilby J. Invention of the integrated circuit. IEEE Trans Electron Devices. 1976;ED-23(7):648–54. https://doi.org/10.1109/t-ed.1976.18467.
39. Calhoun G. Digital cellular radio. Artech House; 1988.
40. Rappaport TS. The wireless revolution. IEEE Commun Mag. 1991;29(11):52–71. https://doi.org/10.1109/35.109666.
41. Flood JE. Telecommunication networks. London, UK: Institution of Electrical Engineers; 1997.
42. Asif S. 5G mobile communications: concepts and technologies. CRC Press.
43. Golio M, Golio J. RF and microwave passive and active technologies. CRC Press; 2018.
44. Paetsch M. The evolution of mobile communications in the US and Europe: regulation, technology, and markets. Boston: Artech House.
45. Lee WCY. Mobile cellular telecommunications systems. McGraw-Hill; 1989.
46. Divya AK, Liu Y, Sengupta J. Evolution of mobile wireless communication networks: 1G to 4G. Int J Electr Commun Technol. 2010;1(1):68–72.
47. Nishizawa JI, Suto K. Terahertz wave generation and light amplification using Raman effect. In: Bhat KN, DasGupta A, editors. Physics of semiconductor devices. New Delhi: Narosa Publishing House; 2004.
48. Alwayn V. Fiber-optic technologies. Cisco Press/Pearson Education; 2020.
49. Rigby P. Three decades of innovation. Lightwave. 2014;31(1):6–10.
50. Mellow C. The rise and fall and rise of Iridium. Air & Space Magazine, September 2004.
51. Dietrich FJ, Metzen P, Monte P. The Globalstar cellular satellite system. IEEE Trans Antennas Propag. 1998;46(6):935–42. https://doi.org/10.1109/8.686783.
52. https://en.wikipedia.org/wiki/Starlink
53. Holzmann GJ, Pehrson B. The early history of data networks. Wiley; 1995. p. 90–1. ISBN 0818667826.
54. Kempe HR, Garcke E. Telephone. In: Chisholm H, editor. Encyclopædia Britannica. Vol. 26. 11th ed. Cambridge University Press; 1911. p. 547–57.
55. Brooks J. Telephone: the first hundred years. HarperCollins; 1976.
56. Leiner BM, Cerf VG, Clark DD, Kahn RE, Kleinrock L, Lynch DC, Postel J, Roberts LG, Wolff S. Brief history of the Internet. Internet Society. https://www.internetsociety.org/internet/history-internet/brief-history-internet/. Accessed 17 Oct 2020.
57. Redmond KC, Smith TM. From Whirlwind to MITRE: the R&D story of the SAGE air defense computer. The MIT Press; 2000.
58. Corbató FJ, Daggett MM, Daley RC. An experimental time-sharing system. AFIPS Conf Proc. 1962;21:335–44 (SJCC).
59. Jackson JR. Networks of waiting lines. Oper Res. 1957;5(4):518–21.
60. Kleinrock L. An early history of the internet. IEEE Commun Mag. 2010;48(8):26–36. https://doi.org/10.1109/MCOM.2010.5534584.
61. Abbate J. Inventing the internet. MIT Press; 2000.
62. Hui J, Arthurs E. A broadband packet switch for integrated transport. IEEE J Sel Areas Commun. 1987;5(8):1264–73. https://doi.org/10.1109/JSAC.1987.1146650.
63. Cerf V, Kahn R. A protocol for packet network intercommunication. IEEE Trans Commun. 1974;22(5):637–48.
64. https://en.wikipedia.org/wiki/ARPANET. Accessed 24 Oct 2020.
65. Paul Baran and the origins of the Internet. https://www.rand.org/about/history/baran.list.html. Accessed 20 Oct 2020.
66. Gillies J, Cailliau R. How the web was born: the story of the world wide web. Oxford University Press; 2000.
67. A brief history of NSF and the Internet. August 2003. https://www.nsf.gov/news/news_summ.jsp?cntn_id=103050. Accessed 22 Oct 2020.
68. Cerf VG, Kahn RE. A protocol for packet network interconnection. IEEE Trans Commun Tech. 1974;COM-22(5):627–41.
69. Hey A, Pápay G. The computing universe: a journey through a revolution. Cambridge University Press; 2014.
70. Scantlebury RA, Wilkinson PT. The National Physical Laboratory data communications network. Proceedings of the 2nd ICCC. 1974;74:223–8.
71. Kim BK. Internationalising the internet: the co-evolution of influence and technology. Edward Elgar; 2005.
72. Pelkey J. 6.3 CYCLADES network and Louis Pouzin 1971–1972. In: Entrepreneurial capitalism and innovation: a history of computer communications, 1968–1988.
73. Wells M. JANET – the United Kingdom Joint Academic Network. Serials. 1988;1(3):28–36. https://doi.org/10.1629/010328. ISSN 1475-3308.
74. Stoner M. French connections with Minitel: the future has arrived in France. Online. 1988;12(2).
75. Manes S. The complete MCI handbook. New York: Bantam Books; 1988.
76. https://en.wikipedia.org/wiki/Federal_Networking_Council. Accessed 20 Oct 2020.
77. https://www.nitrd.gov/historical/fnc-material.aspx. Accessed 20 Oct 2020.
78. The Federal Networking Council. https://web.archive.org/web/19981202194330/http://www.fnc.gov. Accessed 20 Oct 2020.
79. McPherson SS. Tim Berners-Lee: inventor of the World Wide Web. Twenty-First Century Books; 2009.
80. The birth of the web. CERN. https://home.cern/science/computing/birth-web. Accessed 10 Oct 2020.
81. People and technology. https://psu.pb.unizin.org/ist110/chapter/1-4-history-of-the-internet/. Accessed 10 Oct 2020.
82. Mattern F, Floerkemeier C. From the internet of computers to the internet of things. Informatik-Spektrum. 2010;33(2):107–21. https://doi.org/10.1007/s00287-010-0417-7.
83. Weiser M. The computer for the 21st century. Sci Am. 1991;265(3):94–104. https://doi.org/10.1038/scientificamerican0991-94.
84. Raji RS. Smart networks for control. IEEE Spectr. 1994;31(6):49–55. https://doi.org/10.1109/6.284793.
85. Perera C, Liu CH, Jayawardena S. The emerging internet of things marketplace from an industrial perspective: a survey. IEEE Trans Emerg Top Comput. 2015;3(4):585–98.
86. Nordrum A. Popular Internet of Things forecast of 50 billion devices by 2020 is outdated. IEEE Spectr. 2016.
87. Vrana J. NDE perception and emerging reality: NDE 4.0 value extraction. Mater Eval. 2020;78(7):835–51. https://doi.org/10.32548/2020.me-04131.
88. Vrana J, Singh R. NDE 4.0 – a design thinking perspective. J Nondestruct Eval. 2020. https://doi.org/10.1007/s10921-020-00735-9.
4 Creating a Digital Foundation for NDE 4.0

Nasrin Azari
Contents
Introduction
Digitalizing Your Existing Business
Digitalizing Your Data
Digitalizing Your Processes
Foundation for NDE 4.0
Summary
Cross-References
Abstract
NDT businesses are currently investigating NDE 4.0 but are often confused about how to incorporate complex emerging technologies into their own businesses, which are frequently a combination of siloed software programs, manual processes, and paper-based or Excel forms. The first step in moving toward NDE 4.0 is to create a digital foundation for your business that cleans up your operations with two components:
1. A well-formed digital database of important business information
2. Digitalized processes that eliminate paper, create consistency, connect and support digital toolsets, and enable automation
Creating a successful digital foundation requires a good understanding of which types of data elements are important for your individual business, and it requires a disciplined, step-by-step approach to analyzing and changing processes. Although digitalization work takes some time, the benefits are great and continue to build upon themselves over time.
Keywords
Digitalization · Software for NDE 4.0 · Digital transformation · Data management
Introduction

An "NDE 4.0 enabled" NDT inspection company will use technologies such as Big Data, ML/AI (machine learning and artificial intelligence), IA (intelligence augmentation), IIoT (Industrial Internet of Things), and digital twins. These technologies help an NDT organization become more proactive and effective at discovering defects, raising its POD (probability of detection) for existing defects on assets, and increasing the amount and quality of the inspection results it can provide. By taking advantage of these tools, the organization provides more value to the industry(ies) it serves, with less effort and expense. This chapter discusses how to begin the journey of moving from a manual, paper-based inspection business to an inspection business capable of incorporating and implementing Big Data, ML/AI, IA, IIoT, digital twins, and other advanced digital systems.
NDE 4.0 technologies inherently involve software components that import (take in) or export (report out) digital data. Some NDE 4.0 tools and systems also import control instructions (telling the system what to do) and/or export specific results that indicate what should happen next. In order to incorporate and support NDE 4.0 technologies, you need to create and use a software-based business system that supports highly structured digital data and enables digital IO (input/output) and communications. Creating a digital foundation for NDE 4.0 begins with digitalizing your data and your existing business processes. Once you have done that, you iterate through a cycle of identifying new digital processes and tools that improve a part of your business and then incorporating them into your system, as illustrated by the cycle below:
[Figure: the digitalization cycle. Digitalize your existing business, then iterate: identify a new capability to add to your business → investigate appropriate toolsets that support your goals → update your business processes/model with new capabilities → integrate new tools and processes into your system → operate your business digitally.]
In this chapter, we explore the considerations for building a digital foundation that will enable you to integrate emerging NDE 4.0 technologies and capabilities into your organization.
Digitalizing Your Existing Business

The process of digitalizing your business involves replacing and/or modifying manual, paper-based, or analog data and processes with digital and/or automated data, data collection, and business processes:

FROM: manual processes; paper-based data storage or processes; analog data; disconnected data or processes.
TO: digital and/or automated data collection; digitized data; digital business processes; a well-formed digital data repository.
• "Manual" refers to processes that are performed by hand, by humans. In some cases, a manual process is genuinely best performed manually, such as providing notes describing why a particular test failed. In other cases, it might be better to replace a task with automation. For example, if you need to track the date and time that a delivery is made, instead of requiring an individual to record the date and time manually, the individual could interface with a software application that automatically:
(a) captures the date and time,
(b) captures the geo-location, and
(c) prepares itself to capture a photograph of the completed delivery.
Not only do these actions make it easier and faster for the individual to complete the delivery task, but they also provide a visual record and eliminate the possibility of accidentally entering the wrong date and/or time. (A minimal code sketch of this delivery example appears after this list.)
• "Paper-based" refers to collecting data onto a physical paper form or using a paper form to guide an individual through a process or task. In most cases, digitalization efforts should eliminate all paper data records and process forms.
• "Analog" means data that is not digitized. This includes images, graphs, or written text. You should aim to replace all your analog data elements with digital representations wherever possible.
• "Disconnected" refers to data that is collected into a spreadsheet or electronic format but is not connected with a system/database that gives you the ability to search through it alongside other related data.
• "Automated" data collection refers to data that is collected without requiring input from a human.
• "Digital business processes" refers to process-driven software applications that incorporate automation, intelligent integrations, and digital data management.
• "Well-formed digital data" means data that is stored, maintained, and used in a highly structured format.

Digitalization refers to the creation of a well-formed digital data repository and the implementation of digital business processes that mirror and support your actual business model, automating repetitive tasks, improving productivity, and providing more business value to your organization.
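To make the delivery example above concrete, here is a minimal sketch (an illustration, not the chapter's implementation). The helper get_gps_fix() is hypothetical; a real application would call the device's positioning API:

```python
from datetime import datetime, timezone

def get_gps_fix():
    # Hypothetical placeholder: a real app would query the device's GPS/positioning API.
    return {"lat": 35.79, "lon": -78.78}

def capture_delivery_record(order_id, photo_path):
    """Build a delivery record with no manual date/time entry."""
    return {
        "order_id": order_id,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),  # captured automatically
        "location": get_gps_fix(),                                # captured automatically
        "photo": photo_path,                                      # visual record of the delivery
    }

print(capture_delivery_record("WO-1042", "deliveries/WO-1042.jpg"))
```

Because the timestamp and location are captured by the software rather than typed in, the wrong-date class of errors disappears entirely.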
Digitalizing Your Data

There is a lot of highly valuable data in your NDT business; a single inspection project can easily involve thousands of data points. NDE 4.0 requires digitally stored data that can be easily manipulated and used for a variety of analytical purposes. The three main areas of focus when digitizing your data are:
(a) What data elements to create, i.e., what data you want to store
(b) What format to use for each data element
(c) Where to store your data
Data Elements

Identifying data elements can be a lot more involved than it first appears. Think carefully about your data elements: what information you care about not only today but also what you may care about in the future. Thinking more broadly usually produces a better long-term solution, even if it is more work to create in the short term. As a simple example, consider a requirement that you might have today to store people's addresses in the United States. An easy way to do this is to create a single unformatted text string called "Address":

Address: 123 Main Street, Inspectionville, NC 27606, U.S.A.
Although you may not have this requirement today, in the future it might become important to be able to query which addresses are associated with a particular city or state. Anticipating this, you might decide on a slightly more complex structure that breaks the address into different components:

Street | Number | City | State | County | Code | Country
Main Street | 123 | Inspectionville | NC | Wake | 27606 | USA
With the more detailed format, you are able to query addresses within a particular city with ease. In the first structure, you have a single data element, "Address," related to each individual in your database. In the second structure, you have seven different data elements related to each individual in your database. Extracting an
individual's address is slightly more complicated in the second structure, but this complexity affords you the flexibility to query your database for a variety of different questions:
• Who lives in Wake County?
• Sort all of my counties from largest to smallest. Show me the 10 largest; the 10 smallest.
• The same queries for cities, states, and zip codes.
• Show me a color-coded map based on population.

Oftentimes, at this stage of your digitalization journey, you will not realize how many different ways your data can and will be queried in the future. So, the best approach on data elements is to err on the side of flexibility wherever possible. Create specific data elements that will enable a variety of analytics down the road, even at the expense of more complicated data management today. Even with the most prescient of thought, you will probably not think of everything relevant on your first attempt. Let us say that you have been operating with this database of addresses for some time and then get asked some additional questions:
• What county has grown the most over the last 12 months?
• What county has shrunk the most over the last 12 months?
• Show me a population map of Massachusetts and the changes over the last 12 months.

With your database as is, you cannot answer these questions. You have a current snapshot of data but no historical information to show trends or changes. In order to answer these additional queries, you need to add information about when addresses are added, when they change, what the change was, and when they are "deleted" (and you cannot actually delete the record from your database if you want to track this). You cannot answer those questions today, but you want to enable the tracking of historical information so that you can analyze population changes in the future. You add several fields to your data structure to enable this and update your software programs to manage those fields. For all your existing addresses, however, it is unlikely that you will have historical information to populate into the new fields. This simply becomes a limitation in your dataset that you must work around. Most datasets have limitations like this, and it is very important to employ good data management to ensure that analysis functions can take those limitations into consideration.
Extrapolating these concepts to NDT businesses, you will want to digitalize data elements for a variety of different types of information: your customers; your jobs, work orders, and projects; dates; equipment; results; costs and profits; technicians; skills; schedules; and so on. You may also want to keep track of what methods you perform, when and how you change your processes, how long it takes to perform various tasks, etc. That is, track both concrete datapoints and meta-data, which refers to information about your datapoints.
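The address example can be made concrete with a few lines of SQL. The sketch below (illustrative only; the table and field names are invented for this example) uses Python's built-in sqlite3 module to show how the structured format answers the "Who lives in Wake County?" style of question directly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database for the demo
conn.execute("""
    CREATE TABLE addresses (
        street TEXT, number TEXT, city TEXT,
        state TEXT, county TEXT, code TEXT, country TEXT
    )
""")
conn.executemany(
    "INSERT INTO addresses VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        ("Main Street", "123", "Inspectionville", "NC", "Wake", "27606", "USA"),
        ("Oak Avenue", "7", "Raleigh", "NC", "Wake", "27601", "USA"),
        ("Elm Road", "42", "Boston", "MA", "Suffolk", "02108", "USA"),
    ],
)

# Who lives in Wake County? Trivial with structured fields,
# nearly impossible with one free-form "Address" string.
wake = conn.execute(
    "SELECT number, street, city FROM addresses WHERE county = 'Wake'"
).fetchall()
print(wake)

# Sort counties from largest to smallest by number of records.
by_county = conn.execute(
    "SELECT county, COUNT(*) AS n FROM addresses GROUP BY county ORDER BY n DESC"
).fetchall()
print(by_county)
```

Answering the historical questions would require the extra change-tracking fields discussed above (for example, created-at and superseded-at timestamps on each row), which is exactly why it pays to anticipate them early.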
The best situation is always to have defined your data fields “correctly” in the first place, but it is highly unlikely that you will get it all right initially. Even so, the fewer updates you need to make down the road, the more valuable your database will be. So, think as broadly as possible when creating data elements that describe your data.
Data Format

Data format represents how each data element will be recorded and stored. Some of your data might already be digitized; this is generally true when you are collecting data through a digitized tool or entering it into a software application. Other data may currently be collected in a solely analog format. Either way, you should evaluate whether you are satisfied with the existing format or would be better off re-formatting the data. Consider that you will want to query and use your data history in the future for some known purposes, and some yet unknown. When thinking about data format, you want to make sure that it accurately represents the actual data, that your data fields can accommodate variations in potential data input, and that your format is as specific as possible, to minimize data entry errors and to make searching and using the data as easy as possible.
Some readers will remember the big "Y2K" scare of the year 2000. The fear was that old software programs would fail when the year changed from 1999 to 2000 on January 1, 2000, because most software programs at the time stored year fields as two digits, assuming that the full year would always be 19xx. This is a good example of a short-sighted data format problem: the two-digit "year" field could not accommodate the breadth of data inputs and was not specific enough to avoid ambiguity once the clocks changed to the new millennium.
Revisiting our address example, consider how you might format the "State" in an individual's address. If you choose a variable-length text string, you could end up with addresses in your database located within North Carolina that describe "North Carolina" in various ways, such as: NC, nc, Nc, North Carolina, north Carolina, NORTH CAROLINA, North Carlina, North Caralina, and so on. This makes it almost impossible to query your database and get a definitive list of everyone in North Carolina. However, if you define your State field to be "the two-letter upper-case standard abbreviation" for each state, the only thing you need to search for is "NC." The more highly structured your data fields are, the less variance in field values, and the more useful your data will be for future queries. Structure your data as strictly as possible, while still enabling yourself to capture all the information you require.
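One common way to enforce such a constraint is to normalize free-form input into the canonical code before it is stored. A minimal sketch (an illustration, not the chapter's; the lookup table would be extended to all states in practice):

```python
# Map accepted spellings to the canonical two-letter code (extend as needed).
STATE_CODES = {"north carolina": "NC", "nc": "NC", "massachusetts": "MA", "ma": "MA"}

def normalize_state(raw: str) -> str:
    """Return the canonical two-letter state code, or raise on unknown input."""
    key = raw.strip().lower()
    try:
        return STATE_CODES[key]
    except KeyError:
        raise ValueError(f"Unrecognized state: {raw!r}")  # reject instead of storing junk

for raw in ["NC", "nc", "North Carolina", "NORTH CAROLINA"]:
    print(raw, "->", normalize_state(raw))  # all normalize to "NC"
```

Misspellings such as "North Carlina" are rejected at entry time, prompting a correction rather than silently polluting the database.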
Storing Your Data

Your primary decision on where to store your data is whether to store it "on premises," within your own environment, or "in the cloud," where it is virtually accessible through a network connection. A cloud-based data repository has two main advantages:
1. Your data is accessible from just about anywhere in the world, depending on network connectivity and security controls. Inspectors performing their work in the field will achieve much greater efficiency and convenience with the ability to read directly from and write directly to your data repository.
2. You do not need to procure the resources (hardware, software, labor, knowledge) required to store, back up, manage, and protect your data. There are many organizations that provide cloud-based data management, and they have a high level of expertise in keeping your data safe, clean, regularly backed up, and protected.
Although choosing to manage and store your data in your own environment, on premises, does require a lot of overhead (hardware, software, labor, knowledge), you will have the benefit of being completely in control of all your data and access to it. In certain circumstances, the benefit of control outweighs the convenience and ease of cloud data management.
The good news about data storage is that it is fairly easy to export/import well-structured data between data repositories if you decide to change strategies in the future. Given that, make sure to choose data storage tools that give you the ability and flexibility to export your data at any point in the future.
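As a tiny illustration of that portability point, well-structured records can be dumped to a neutral interchange format such as JSON in a couple of lines (a self-contained sketch with an invented one-row table):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows behave like dicts
conn.execute("CREATE TABLE addresses (street TEXT, city TEXT, state TEXT, county TEXT)")
conn.execute("INSERT INTO addresses VALUES ('Main Street', 'Inspectionville', 'NC', 'Wake')")

# Dump every row to a format that any other repository can import.
rows = [dict(r) for r in conn.execute("SELECT * FROM addresses")]
with open("addresses_export.json", "w") as f:
    json.dump(rows, f, indent=2)
```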
Digitalizing Your Processes

While your database represents the things that are important for your business, your processes represent the activities that are important for your business. Digitalizing your processes enables greater business efficiency and serves two purposes:
1. Automating tasks to minimize human involvement, resulting in faster, more consistent performance and more reliable results. Automation is one of the key aspects of NDE 4.0: making use of digital technologies to assist human workers and replace manual tasks, particularly those that are repetitive and/or highly structured.
2. Creating a historical audit trail that enables a future "recreation" of previous events. An ongoing historical audit trail is the part of your digital foundation that enables you to build and maintain a "learning system" that can feed future ML/AI programs, which can then model your business and provide intelligent analysis and assistance.
These two digitalization drivers work together to set the foundation for your NDE 4.0 enabled business. Let us delve into how to digitalize your processes to achieve these goals.
The 7-Step Process Digitalization

Processes! Your processes define what and how you perform work in your organization. In order to digitalize your work, you need to understand the components of what you do and how those tasks interface with each other.
Step 1: Identify Your Baseline

Digitalization is, effectively, a business process improvement. As such, you should always document your baseline so that you can accurately evaluate the success of your improvements. Your baseline includes metrics around the Key Performance Indicators (KPIs) that are most important for your particular business. These KPIs might include profitability, productivity, utilization, customer satisfaction/renewals, or any number of other measurable business metrics. Document your baseline by measuring your KPIs over a period of time, taking note of both average values and variance. The averages give you an idea of where you are on the scale from good to bad, and the variance shows you how much difference there is between your datapoints: the higher the variance, the less consistent and reliable the related processes.
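A baseline of this sort is simple arithmetic. For example (an illustration with made-up numbers):

```python
from statistics import mean, pstdev

# Hypothetical KPI samples: hours per MT inspection over recent jobs.
hours_per_inspection = [3.1, 2.8, 4.6, 3.0, 5.2, 2.9]

avg = mean(hours_per_inspection)       # where you sit on the good-to-bad scale
spread = pstdev(hours_per_inspection)  # high spread = inconsistent process
print(f"baseline: {avg:.2f} h average, {spread:.2f} h standard deviation")
```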
Step 2: Map Out Existing Processes

This process analysis starts with documenting how you perform work today. You might start with a swim lane flowchart. In this type of chart, you identify the "actors" (individuals, roles, or departments, depending on the level of your process) who perform work along the left side of the diagram. Each role gets a separate lane, the tasks that each role performs are placed inside its lane, and arrows between tasks represent dependencies and/or interactions between actors or tasks.
You will typically start at a high level and then drill down deeper into each component, creating more detailed flowcharts as you go. There are several different types of flowcharts, including the basic flowchart or an event-based flowchart, that you can use to document your processes. You can use whatever combination makes the most sense based on how your business operates and what seems most natural to you.
Step 3: Identify Problem Areas

Once you have your processes documented, the next step is to identify the areas that need improvement. These are the parts of your process that are unreliable, error-prone, inconsistent, too manual, too slow, or otherwise suboptimal. Having your existing processes mapped out in front of you makes it much easier to see where your biggest issues are. For a digitalization effort, any part of your process that involves manually capturing data that must subsequently be input into a software database or application is a prime candidate for your first pass. Your eventual end goal is an end-to-end system with as much automation and as little human interaction as possible, particularly for repetitive tasks. However, process changes generally happen iteratively, over time, and you want to fix the easiest things first. Oftentimes, fixing some processes, especially through digitalization, exposes other problem areas that can then be addressed more easily. Focus on minimizing errors, easing bottlenecks, eliminating ambiguity, and removing manual or human input.

Step 4: Determine How to Fix Problem Areas

Once you have identified your biggest problem areas, figure out the best way to fix them. Sometimes the best solution is a relatively simple process change involving tighter control or restrictions on data entry. Sometimes opening up communication channels or creating a collaborative environment can release a process bottleneck. In situations where you want to replace a manual process with a digital one, so that all of your data is entered and managed digitally, you will be building or bringing in a software program. In these situations, having all your processes mapped out will allow you to evaluate options most effectively.

Step 5: Evaluate and Choose Solutions

When looking for a product (software and/or other technology) to improve your processes, start with a very good understanding of what you want the product to do for you. Create a list of requirements and paint a high-level picture of how your ideal system would behave. Do research to find options and then dig into each one to find the best fit. The most important factor in your decision-making should be how well the product meets your requirements today and how well you believe it will meet your future requirements. Regarding cost, which is often a high priority in decision-making, keep three things in mind:
1. It does not make sense to choose the lowest-priced product if it is unlikely to meet your most critical needs.
2. A high price tag does not necessarily mean the best fit. Many products have nuances that favor certain types of problems and certain kinds of workflows. The best fit for you is the product that naturally meets your workflow requirements without excessive customization.
3. Your process analysis gives you a lot of knowledge about the costs of your current processes. From here, you can determine an estimated ROI (return on investment) that you would expect to achieve with a new solution. If you can purchase a
product that meets your needs for less than the ROI you expect to achieve after implementation, then you should move forward. One very important part of this step is making sure to include all relevant roles when evaluating new tools; a diverse team can help uncover blind spots and stay objective.
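The ROI logic in point 3 is simple arithmetic; the sketch below (invented figures, purely illustrative) compares expected savings against a product's annualized cost:

```python
# Estimated from your process analysis (hypothetical figures).
annual_cost_of_current_process = 120_000.0   # labor, rework, paper handling, ...
annual_cost_after_digitalization = 80_000.0
product_cost_per_year = 25_000.0             # license + implementation, annualized

annual_savings = annual_cost_of_current_process - annual_cost_after_digitalization
net_benefit = annual_savings - product_cost_per_year
roi = net_benefit / product_cost_per_year

print(f"savings: ${annual_savings:,.0f}/yr, net: ${net_benefit:,.0f}/yr, ROI: {roi:.0%}")
# If the annual savings exceed the product's annualized cost, the purchase pays for itself.
```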
Step 6: Build Out New Process Maps

Once you have chosen your solution(s), create or update your process maps to utilize them. Estimate the expected improvements in your KPIs, and make sure to map out all possible scenarios, which may look different from your use cases of today.
Step 7: Implement and Test

Finally, implement your solution according to the new process maps created in the previous step. Run through your use cases, keeping track of your KPIs. Use the solution in as "real" an environment as possible to make sure that it works according to plan, and ensure that its performance and effectiveness meet or exceed your expectations before deployment to your entire team. The most important thing to keep in mind as you build a digital foundation for your business is that digitalization is an ongoing process. You should expect to keep moving forward, adding improvements to your business, but the changes will take some time.
Audit Trail

As you digitalize your business processes, you develop the ability to create a footprint representing a set of tasks or actions that are currently being, or have previously been, performed. Think of this digital footprint as a memory map that will allow a future individual to "recreate" or "model" something that happened previously. To do this, consider all information that will help a future individual answer the "five Ws (plus one H)" about each task. Mapping out the digital tracking of your processes might start with something like this:

Tracking process data for an NDT organization
Who? Who performed the task?
What? What actions were taken? What data was collected? What conclusions were made?
Where? Where were the actions performed? What were the conditions at the time and place where the task was performed?
When? What date/time were the actions performed? Was the task completed? How long did it take to complete the task?
How? What specific procedures were followed to perform the task? What machinery was used? What parameters were used? Etc.
Why? What is the (customer) request that we are fulfilling? What is being asked or requested of us (the NDT organization)?
This footprint allows you to "remember" not only the results of each inspection you performed but also the conditions the inspection was performed under, how long it took, the exact equipment that was used, the personnel who participated, and so on. Here is a very simple, high-level example of a daily journal:

Date | Time | Event | Procedure | Site | Tech | Details
2021-03-08 | 7:02:41 | Site Check-in | Performing MT inspection for Customer A | Site A | Larry | Auto-location-stamp, Retina ID capture, ...
2021-03-08 | 7:07:30 | JSA Start | Daily requirement | Site A | Larry | JSA process
2021-03-08 | 7:21:25 | JSA Complete | Daily requirement | Site A | Larry | JSA data/results
2021-03-08 | 7:25:50 | Calibration | Equipment check | Site A | Larry | Equipment calibration test details, equipment type and SN, ...
2021-03-08 | 7:32:22 | MT | MT 75398 | Weld A1 | Larry | Test data and results
2021-03-08 | 7:51:45 | MT | MT 75398 | Weld A2 | Larry | Test data and results
... | ... | ... | ... | ... | ... | ...
Imagine that you have such a journal for every working day of the year, for every inspector you employ. Think about how you might be able to gain some intelligence by querying the data. You might be able to calculate the average time it takes to perform certain inspections, and how that varies from inspector to inspector, or based on weather conditions, or based on asset type, or based on customer. You can use this information to help drive business policies or improvements. You might implement a process change, and then watch the effects on your business performance over the following 3 to 4 months to see whether you achieved your goals. Having knowledge about how your business operates is very powerful and gives you an unbiased perspective of your operations. It allows you to make confident decisions based on trustworthy information. It allows you to prioritize the implementation of new (NDE 4.0) systems and evaluate their effectiveness. It puts you in a position to be proactive and forward-looking about your future potential.
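As an illustration of the kind of query described here, the sketch below computes the average duration of MT inspections per technician. The field names follow the journal example above, but the records themselves (and the duration field) are invented for the illustration:

```python
from collections import defaultdict

# Invented journal records in the spirit of the daily-journal example above.
journal = [
    {"event": "MT", "tech": "Larry", "site": "Weld A1", "duration_min": 19.4},
    {"event": "MT", "tech": "Larry", "site": "Weld A2", "duration_min": 22.1},
    {"event": "MT", "tech": "Maria", "site": "Weld B1", "duration_min": 15.8},
    {"event": "Calibration", "tech": "Larry", "site": "Site A", "duration_min": 6.5},
]

durations_by_tech = defaultdict(list)
for rec in journal:
    if rec["event"] == "MT":  # restrict to one inspection type
        durations_by_tech[rec["tech"]].append(rec["duration_min"])

for tech, durations in durations_by_tech.items():
    print(f"{tech}: {sum(durations) / len(durations):.1f} min average MT inspection")
```

The same grouping idea extends to weather conditions, asset type, or customer: add the field to the journal record and group by it.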
Foundation for NDE 4.0

With a well-formed digital data repository, digitalized business processes, and a growing digital audit trail, you have a strong foundation for an NDE 4.0 business. Your audit trail and historical data repository work together to create a model of your business that can be used for analytics and for intelligent connections with emerging software programs, including relevant NDE 4.0 technologies that support Big Data, ML/AI, IA, IIoT, and digital twins. As you create an integrated digital workflow that mirrors your business processes, you gain the ability to insert new sources of (inspection) data, analyze historical information to gain intelligence into your
operations, and develop new ways to serve your customers and provide value to your industry. The result of NDE 4.0 is that we become a forward-looking, proactive industry, anticipating asset failures and fixing weaknesses before an expensive disaster occurs. For an individual organization, NDE 4.0 may seem overwhelmingly difficult and out of reach. But digitalization is something you can accomplish today, without an exceptional amount of effort or expense, and it positions your company to take advantage of the future opportunity that NDE 4.0 brings.
Summary

Although digitalization takes some time to plan and implement, its benefits are immediate and will continue to grow over time. Digitalization sets a foundation for enabling NDE 4.0 and is a necessary step toward staying relevant and competitive in the NDT market, as well as enabling your business to react more easily to market pressures such as those experienced during the Covid-19 pandemic. Along the journey toward NDE 4.0, you will see improvements in productivity, efficiency, and customer satisfaction, and you will gain insights into your business operations that allow you to make further improvements. Being disciplined and intentional about defining your data elements and changing your processes will ensure better results and long-term success. It all starts with a commitment to the process, and taking that first step!
Cross-References
▶ Best Practices for NDE 4.0 Adoption
▶ Digitization, Digitalization, and Digital Transformation
▶ Introduction to NDE 4.0
▶ Registration of NDE Data to CAD
▶ Robotic NDE for Industrial Field Inspections
▶ Value Creation in NDE 4.0: What and How
5 Digitization, Digitalization, and Digital Transformation

Johannes Vrana and Ripudaman Singh
Contents
Introduction
Basics
Digitization
  Numbers and Text
  Signals and Waveforms
  Images and Videos
  Volumetrics and Games
Digitalization
Digital Transformation
Stages of Informatization in NDE
  Digitization
  Digitalization
  Digital Transformation
  Informatization of Training and Certification
  Example: Informatization of Radiography
Summary
Outlook
Cross-References
References
Abstract
The world of non-destructive evaluation (NDE) has seen digitization since the third revolution. Over the last decade or so, digitalization has also been observed, to a point where it is now ready for digital transformation, in sync with the fourth
industrial revolution. The intermediary step of digitalization overlaps the third and fourth revolutions and can sometimes be confusing. This chapter is aimed at demystifying the stages of informatization, starting with some general life examples and an understanding of the evolution at the fundamental data-set level, which is then used to understand the relevant elements of nondestructive evaluation.
Keywords
NDE 4.0 · Industry 4.0 · Digital Revolution · Industrial Revolutions · Digital · Digitize · Digitalize · Digital Transformation · Informatization
Introduction

In the summer of 2016, one of the authors was trusted to organize a virtual in-company NDE conference – back then, it was called "NDE Council 2.0." It was designed to run 4 h/day for 4 days, with participants from 15 locations on four continents and in seven different time zones (from Asia to the USA). At that time, teleconference tools were not that mature. Luckily, so-called "collaboration rooms" equipped with state-of-the-art audio-visual equipment were available at all locations. But those rooms needed to be booked in advance, sometimes almost a year ahead. Moreover, local hosts were needed at every location, in addition to a technical support team. This was digitization of a conference. As digitization is the core of the third revolution [1–2], the name "NDE Council 3.0" would have been more appropriate, in hindsight.
But now it has become natural for everybody to have video meetings from anywhere in the house, or even on mobile devices while in motion. In fact, social events from birthday parties to funeral ceremonies are happening via online-meeting platforms, in compliance with social distancing restrictions in times of a worldwide pandemic. A fall 2020 meeting of the ICNDT group on NDE 4.0, with more than 20 participants from 15 countries using regular computers, provided a better user experience than the conference mentioned above. It took just a couple of mouse clicks to plan the meeting and a couple of mouse clicks to dial in. And there is even more today: you can share your computer screen, run a chat on the side, use a whiteboard to sketch ideas, and save the entire proceedings in the cloud. All this with (mostly) no need for technical support and at costs affordable for everybody. Soon this will be combined with virtual or augmented reality platforms. So, what changed in these 5 years: commercial suppliers who not only provide a digital communication platform but also the tools to support our digital process for planning, booking, and execution. This is the change from digitization to digitalization of conference calls.
However, there is still one major issue. There are several commercial platforms, and they are not compatible with each other. We cannot call from Skype into Zoom the way we are able to call from any phone to any other phone (iPhone, Android, landline, ...). Organizations have choices and preferences. Each one uses a certain communication platform. Some of them allow their employees to use any
platform, while others restrict use to certain approved modes. Sometimes, computers refuse to cooperate, and we hear things like "technology is great when it works." This means there is another level of consumer experience that can be achieved through digital transformation. Imagine a conference where everyone is free to pick a tool of their choice (Skype, Zoom, Teams, or WebEx) and dial into the same conversation. This requires that the system providers put the customer first and change from proprietary systems to open interfaces. Such a step would not only solve the issue mentioned above but would at the same time create a new ecosystem which permits all kinds of new tools for the benefit of the customer. When this happens, we can say that conferences have gone through digital transformation, and call the next conference "NDE Council 4.0."
Basics

Understanding the conceptual difference between digitization, digitalization, and digital transformation is key to the promise of the fourth revolution and to cyber-physical integration [1, 2]. Unfortunately, most languages, like German, Spanish, and Japanese, do not differentiate between digitization and digitalization, even though the two activities have little in common. The only commonality between the two terms (besides the similarity in notation) is that digitalization requires digitization [3]. In simple terms, digitization is the transition from analog to digital, and digitalization is the process of using digitized information to simplify specific operations [2, 4]. Digital transformation uses digital infrastructure and applications to exploit new business models and value-added chains (automated communication between different apps of different companies) and therefore requires a change of thought process. Digital transformation requires collaboration to improve the customer's digital experience.

There is one more term here – informatization, which is the process by which information technologies, such as the World Wide Web and other communication technologies, have transformed economic and social relations to such an extent that cultural and economic barriers are minimized [5]. Informatization is the path from analog via digital and digitalized to digital transformation.

Table 1 provides the outcome of a brainstorming session on the evolution from analog to digital to digitalized to digital transformation – on the informatization of multiple general life examples. These examples are specifically chosen to represent a class of fundamental data set, as defined in the second column.
Table 1 Result of a brainstorming session on the evolution from analog to digital transformation

| Example | Data set | 2.0 Analog | 3.0 Digital (technology, universality of storage) | Digitalize (application for convenience; specific, isolated, or standalone process simplification) | 4.0 Digital transformation (eco-system, business models; connected & collaborative) |
|---|---|---|---|---|---|
| Number, letter ("42") | Value, scalar ("0D") | Paper based: handwritten, typed, printed matter, … | Digital number (string of 0s & 1s) | | |
| Book | Text document (1D) | Paper based: handwritten, typed, printed matter, … | Text document (basic text editor), scanned static image | Word processor (MS Word), hypertext (HTML), Twitter, collaborative content development (Google Docs); apps: spell checker, search/edit, document compare, translator | IoT, cloud based, ML-driven grammar checker (ProWritingAid), augmented collaborative content development |
| Phone book, tables | Structured data (2D+) | Paper based: handwritten, typed, printed matter, … | Spreadsheets, databases (SQL) | Spreadsheet data processor (MS Excel); apps: graphs, statistical analysis, … | IoT, digital twin, interoperability |
| Music | Audio signal ("0D" vs. time) | Analog wave form (vinyl record) | Digital waveform (CD music player, MP3, podcast) | Music streaming (Spotify), searchability, personalization, automated playlists, DJ remix | IoT, virtual assistant (Alexa) |
| Photography | Image (2D) | Film and print | Digital image (pdf, tiff, jpg, bmp, …) | Integration with location, image manipulation (Photoshop) | IoT-enabled accessories and connectivity with services |
| Movie | Video (2D vs. time) | Film (Super 8, VHS) | String of digital images (DVD player) | Video streaming (Netflix, YouTube), searchability, personalization, automated playlists, video conferencing (Zoom, Skype) | IoT, connectivity between video conferencing systems |
| CAD/CAE | Volumetric data (3D) | Blueprints (2D representation of 3D objects) | Digital drawings (lines on screen) | 3D models, constraints/dependencies, CAE-FEM/CFD | Integration into digital twin, IIoT, connected world |
| Games | Volumetric vs. time (3D vs. time) | Imagination, role play, board games | Video console games (Pac-Man) | Networks, gamification, virtual reality, AI-based chess | IoT, mixed reality (Pokémon Go), tactile feedback, connection of gaming worlds |
| General Life Activities | | | | | |
| Workflow | | Paper traveler, hand delivery, phone | Email, PDF, SMS, pagers, fax | ERP, MES, workflow systems, notifications | Integration into digital twin, IIoT, connected world |
| Phone | | Analog phone | Digital phone, videocall | Call automation, robo-calls, virtual & augmented reality, holograms | IoT, virtual assistant (Alexa), motion capturing, remote operation |
| Connectivity | | Conversation, face-to-face meeting, letters, pneumatic tube, archives, phone | FTP, shared drives, networks, screenshare, videocall | Social media (Facebook, Twitter, WhatsApp), Internet, video conferencing, collaborative content development (Google Docs), smartphone | IoT, IIoT, BIoT, smartphone with accessories |
| Conference, meeting | | In person | Digital collaboration rooms | Video conferencing (Zoom, Teams, WebEx), virtual & augmented reality | Connectivity between video conferencing systems, interactivity, interconnectivity, digital twin |
| Home appliances | | Switches, motors, stop watch, CRT, locks | LCD TVs, digital displays, keypads | Self-driving vacuum cleaners, home automation, guided cooking (Thermomix), video monitoring, interactive video security (Ring) | IoT, virtual assistant (Alexa), connected cooking, connectivity between (interactive) video security systems and law enforcement |
| Shopping | | Walk in to retail store or wholesale orders | Sales platform, website | E-commerce, suggestions, reviews | Marketplace (Amazon) |
Most of the table is self-explanatory. During this exercise, we realized a few things:

(a) In some cases, the fundamental data set changed with the evolution from analog to digital transformation. The "General Life Activities" rows in Table 1 reflect this through practical application. Just think of the development from an analog phone (signal vs. time) to a video phone (2D + signal vs. time), to an augmented or virtual reality call soon (3D + signal vs. time), leading eventually to a future where it will be possible to operate the devices on the other end of the line (during a VR call) through motion capturing and IoT connectivity of all devices. Such a digital transformation will allow the participants to experience direct interactivity with the other person while being on opposite sides of the planet.
(b) The tools or apps used for digitalization see growing commonalities across different data sets (like searchability, personalization, and the use of AI). Digital transformation is slowly becoming a common fundamental, providing a desirable experience from a holistic integration of discrete digital entities through IIoT and digital twins.
(c) It is easy to distinguish between analog and digital. It is also possible to visualize digital transformation in all its glory when you think of connected self-driving cars or shopping through Alexa. But such a transformation happens over a series of steps involving digitalization, which may or may not be sequential. It sometimes entails iterating or skipping steps.
(d) The very first applications or software tools allowed the simplification of specific operations (for example, searching or sorting) – meaning digitalization started with those first applications. Over the years, advancements in algorithms (including AI) allowed more sophisticated digitalization. The border between digitalization and digital transformation is thus somewhat foggy. Digital transformation starts with collaboration, with connectivity between multiple players, leading to the creation of new eco-systems and new business models.
(e) The evolution will not end at digital transformation. Sociology, ethics, and psychology will merge into the digital-physical arena. What comes next is a subject of research, and only history will tell when and how this leads to the next step on the informatization path. However, for this chapter we will stop at digital transformation.

Let us discuss the table in the context of the underlying fundamental data sets and their applications before getting into NDE.
Digitization

Digitization is the core of the third revolution. It encompasses various methods to convert analog information into binary numbers. We are all used to decimal numbers, meaning numbers based on a system consisting of 10 distinct symbols (0, 1, 2, 3, …, 9). For a basic understanding of binary numbers, it helps to first take a deeper look into the working principles of decimal numbers.
Numbers and Text

Decimal numbers allow incrementing from 0 to 9 using one digit. Once the number 9 is reached, a second digit is needed for further incrementation. This second digit is
placed in front of the "units" position, in the "tens" position, and the digit in the "units" position is reset from 9 to 0, leading to the number 10. A further increment leads to an increase of the digit in the "units" position. Once 19 is reached, the digit in the "tens" position is incremented and the digit in the "units" position is reset to 0. This continues until 99 is reached; then a third digit is needed in the "hundreds" position, and the digits in both the "tens" and the "units" positions are reset.

Binary numbers work the same way, but they are based on a system consisting of only two symbols (0, 1). This comes from the basic concept of on/off, yes/no, left/right, etc. This means that, after incrementing from 0 to 1, a second digit is needed in the "tens" position; after continuing to increment (10 and 11), a third digit is needed in the "hundreds" position; and after incrementing from 100, over 101 and 110, to 111, a fourth digit is needed in the "thousands" position. To differentiate them from decimal digits, binary digits are called "bits," making the number "1100101" a 7-bit number. Table 2 shows how both decimal and binary numbers are generated; it can also be used for the most basic digitization process: the conversion of a decimal number (integer) into a binary number. At this point it should be noted that binary numbers are usually grouped into blocks of 8 bits, and those groups are called bytes. If a binary number is shorter than 8 bits, it is usually extended to 8 bits by filling the leading "empty" spaces with zeros. An 8-bit number can cover the decimal numbers from 0 to 255, and a 16-bit binary number the decimal numbers from 0 to 65,535. Let us identify this as "0D" data (single values). A sequence of "0D" data can be used to create text and sentences (1D) and tables or other forms of structured data sets (2D).
Table 2 Conversion from decimal to binary numbers

| Decimal | Binary | Decimal | Binary |
|---|---|---|---|
| 0 | 0 | 11 | 1011 |
| 1 | 1 | … | … |
| 2 | 10 | 19 | 10011 |
| 3 | 11 | 20 | 10100 |
| 4 | 100 | 21 | 10101 |
| 5 | 101 | … | … |
| 6 | 110 | 99 | 1100011 |
| 7 | 111 | 100 | 1100100 |
| 8 | 1000 | 101 | 1100101 |
| 9 | 1001 | … | … |
| 10 | 1010 | 1000 | 1111101000 |
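The procedure behind Table 2 – repeated division by two, collecting the remainders – is easy to express in code. A minimal Python sketch, added purely for illustration:

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to its binary representation."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder of division by 2 is the next bit
        n //= 2                  # integer-divide to move to the next position
    return "".join(reversed(bits))  # bits were collected least-significant first

print(to_binary(101))    # 1100101, as in Table 2
print(format(101, "b"))  # the same result with Python's built-in formatter
```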
For the encoding of characters, strings, words, sentences, and texts, the UTF-8 (Unicode Transformation Format) or ASCII (American Standard Code for Information Interchange) standard is used. ASCII covers the encoding of the Latin alphabet, including some special characters like parentheses; UTF-8 is backwards compatible with ASCII and allows the encoding of almost all alphabets. In both ASCII and UTF-8 encoding, the string "NDE 4.0" converts to "01001110 01000100 01000101 00100000 00110100 00101110 00110000." Each of those seven bytes (blocks of 8 binary digits, or bits) represents one character. For example, the letter "N" is represented by "01001110" and the character " " (space) by "00100000." All digitization methods require encoding, and the result of all digitization methods is a string of bytes, of 8-bit numbers. Just by looking at a string of bytes, it is not obvious whether it contains a waveform or a text. Therefore, the encoding method which was used for digitization needs to be stored together with the actual data. Most files thus have some header information identifying the file type (and the associated encoding).
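The byte string above can be reproduced with a minimal Python sketch, using only the standard library:

```python
# Encode the string "NDE 4.0" as UTF-8 and print each byte as an 8-bit block.
text = "NDE 4.0"
print(" ".join(f"{byte:08b}" for byte in text.encode("utf-8")))
# Output: 01001110 01000100 01000101 00100000 00110100 00101110 00110000
```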
Signals and Waveforms

Analog sensors usually convert a physical quantity into an output voltage. For example, a brightness sensor might provide 0 V in a completely dark environment and 5 V in a bright environment. To digitize this data, an A/D converter is used. In the case of an 8-bit A/D converter, the voltage span from 0 to 5 V is divided into 256 values and stored in a single byte. For the conversion of a waveform, the signal needs to be sampled and digitized at a certain rate. Figure 1 shows the sampling and digitization of a sensor waveform; in this example, a sampling rate of 2 Hz and a 3-bit digitization were chosen. This can be conceived of as a "0D" data set that varies with time, or a 1D string of values in some sense.
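The quantization step can be sketched in a few lines of Python; the sample amplitudes below are invented, chosen only so that the output reproduces the six 3-bit codes visible in Fig. 1 (010, 110, 100, 101, 011, 010):

```python
def quantize(samples, bits=3, v_min=0.0, v_max=2.5):
    """Map analog sample values onto n-bit binary codes (uniform quantization)."""
    levels = 2 ** bits                  # 8 levels for a 3-bit converter
    step = (v_max - v_min) / levels
    codes = []
    for v in samples:
        level = min(int((v - v_min) / step), levels - 1)  # clamp to the top code
        codes.append(format(level, f"0{bits}b"))
    return codes

# Six samples taken at 2 Hz (one every 0.5 s); amplitudes are illustrative:
print(quantize([0.7, 1.9, 1.3, 1.6, 1.0, 0.7]))
# ['010', '110', '100', '101', '011', '010']
```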
Fig. 1 3-bit digitization of a waveform with a sampling rate of 2 Hz

Images and Videos

For the conversion of a photograph into its digital representation, the picture is divided horizontally and vertically into sections. Those sections are usually quadratic and form the individual picture elements (called pixels). For each pixel, 3 values (red, green, blue) are measured and digitized (for a grayscale picture, it is 1 value per pixel). A scanner performs such a measurement using a line unit with a one-dimensional array of sensors, scanning the image in the other dimension in a certain grid. A digital camera directly uses a two-dimensional array. TIFF, JPG, or BMP are different ways to encode images; some of those formats also allow compression to reduce file size. This is a 2D data structure.

Movies for the cinema are usually sampled at 25 Hz, resulting in 25 still images within one second. For a digital video signal, each individual still image needs to be digitized and stored. Due to the vast amount of data created in video recording, compression of the video signal is required.
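A back-of-the-envelope calculation shows why compression is unavoidable; the frame size below is an assumed full-HD example, not a value from this chapter:

```python
# Raw data rate of uncompressed digital video.
width, height = 1920, 1080   # pixels per frame (full HD, as an example)
bytes_per_pixel = 3          # one byte each for red, green, and blue
fps = 25                     # frames per second, as for cinema
rate = width * height * bytes_per_pixel * fps
print(f"{rate / 1e6:.1f} MB/s")  # ~155.5 MB/s -- hence the need for compression
```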
Volumetrics and Games

In 3D, two fundamentally different digital representations can be found. One follows the digitization of images, but instead of using picture elements (pixels), volumetric elements (voxels) are used. Such data sets allow depicting the complete 3D volume. The other possibility is to describe only the surface of an object in 3D space and not its inner structure. This form of digital representation of a 3D object is usually based on polygons (the object is rendered using a wire frame model) and is often used by 3D scanners.

To display 3D videos, virtual or augmented reality glasses or holographic monitors are used. All 3D displays make use of how the human brain creates 3D images. Human eyes can only see 2D images; however, by having two different viewing angles at the same time (two eyes) and by moving the head, our brain can reconstruct the 3D information out of the 2D images. This leads to the basic idea of all 3D displays: displaying two 2D images from slightly different viewing angles, whether precalculated (3D video) or rendered in real time (interactive 3D video). There is no real 3D display: 3D glasses work with two 2D images, and holographic monitors use multiple 2D images from different viewing angles. The latter allows walking around the "object," and not only for one but even for multiple persons.

3D video games create 3D environments (mostly surface data) and use them either to render one 2D image for traditional video gaming or to render two or more 2D images for virtual or augmented gaming. The volumetric data changes over time to provide an immersive experience.
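The two representations can be contrasted in a few lines; a minimal sketch assuming NumPy, with made-up geometry:

```python
import numpy as np

# Representation 1: a voxel grid describes the complete volume, inner structure included.
volume = np.zeros((64, 64, 64), dtype=np.uint8)  # 0 = empty space
volume[16:48, 16:48, 16:48] = 255                # a solid cube of material

# Representation 2: a polygon mesh describes only the surface; here a single
# triangle as three (x, y, z) vertices, as a 3D scanner or game engine stores them.
triangle = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])

print(volume.nbytes, "bytes of voxels;", triangle.nbytes, "bytes for one triangle")
```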
Digitalization

While digitization describes the pure conversion from analog to digital, digitalization is the process of using the digitized representation to simplify specific operations. Digitization forms the basis for digitalization.

While basic text is digital, the first tools for word processing simplified operations like search and replace. This was the start of the digitalization of text or word
processing. Digitalization continued with spell checkers, cross references, as well as the ability to track changes and compare documents, just to name some examples. The AI-based tools currently being integrated, like translation or grammar-checking tools, are still digitalization – even though they are far more advanced, they are still implemented to simplify specific operations.

Music: while MP3 files and CDs are both just containers for digital music, streaming tools or simple MP3 players enable digitalization by creating playlists, searching, remixing music, etc.

Photography: taking an image with a digital camera is digitization of photography. Digitalization starts with tools similar to those for text and music: searching, creating collections, and so on. Another idea for digitalization – for simplifying specific operations – of digital photography could be to embed a GPS sensor into the camera and store the location information in the metadata of the image file. This would later simplify the search for images taken at a certain location. Other ideas for the digitalization of photography: upload to a cloud or to an image printing service, or even automatic object and person identification.

Another example of digitalization regarding images and documents is the various ways to create them. For the creation of digital images, vector graphics are used in many cases. Vector graphics are not based on pixels; a vector graphic is an instruction for the computer to paint objects at certain locations. For example: paint a rectangle of 12 × 13 mm at location 143,265, rotated by 12°, with a black border color; or: paint the text "NDE 4.0" at location 245,765 in black with font type Times. The computer takes all the instructions stored in the vector graphic file and uses them to render the image in real time, depending on the desired zoom factor. This format usually allows clearly smaller data files than pixel-based files.

One important factor for most digitalization solutions is a good human-machine interface (HMI). All the examples here and in Table 1 show that while digitization is a change to a universal recording format (from some form of analog recording to digital structures, no matter whether they are stored on a CD, a USB stick, or in the cloud), digitalization starts with applications – software which helps to simplify the processes. Digitalization is about creating value-add apps for convenience and marks the transition from Industry 3.0 to 4.0.
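The rectangle-and-text example above maps almost directly onto the SVG vector format; the following Python sketch writes such a file (coordinates are in SVG user units rather than millimetres, and the file name is arbitrary):

```python
# A vector graphic is a list of drawing instructions, not pixels.
svg = """<svg xmlns="http://www.w3.org/2000/svg" width="400" height="800">
  <rect x="143" y="265" width="12" height="13" transform="rotate(12 143 265)"
        fill="none" stroke="black"/>
  <text x="245" y="765" fill="black" font-family="Times">NDE 4.0</text>
</svg>"""
with open("nde40.svg", "w") as f:
    f.write(svg)  # any viewer re-renders these instructions at any zoom factor
```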
Digital Transformation

Both digitization and digitalization refer to products which can be purchased from a single company or which can be built by a single company. Digital transformation takes the idea of digitalization to a completely new level, making collaboration and connectivity agnostic, opening new business models, and enabling eco-system growth.

To date, there are only a limited number of real-life examples of digital transformation, but there is one device which enabled a digital transformation and which everybody knows: the smartphone. It needs to be mentioned: a smartphone
is not needed for a digital transformation, but the smartphone enabled one: due to its connectivity both to the internet and between all the apps, it supports the creation of a completely new eco-system and lifestyle changes.

The digital transformation of word and image processing is starting with collaborative content development. However, most of those tools are still isolated solutions, which is why the authors see the current state still as digitalization. But the change to digital transformation is quite straightforward – by enabling open connectivity to allow both the integration of and in other tools, by enabling connectivity between the collaborative content development systems of different manufacturers, and by augmentation.

To paint a more detailed image of digital transformation, consider this: the digital transformation of photography could mean using a digital camera from one manufacturer, combining it with a digital flash from a different manufacturer and a GPS and tilt sensor from a third manufacturer, all those devices communicating wirelessly so that the flash can be positioned freely. The data from all the devices is collected in the cloud (provided, for example, by one of the big players in the IT market). A specialized AI solution within the cloud, produced by another company, identifies objects within the pictures, performs automatic image enhancements, and sorts out bad pictures. The customer of the images can directly access the images using a webpage maintained by someone else, directly order a print-on-demand photo album from yet another source, and share the images on the social media platform of their choice. For taking some of the images, the photographer used a drone camera, where both the drone and the camera are from other manufacturers. Now imagine all those pictures are integrated into the same post-processing, which can also access the GPS and flash devices. And if the photographer is not satisfied with one of the solutions in use, it can simply be exchanged for a similar product from a different solution provider.

Solutions like this are slowly becoming real in the consumer and home appliance market. Such solutions give the customer choice and flexibility. They maintain a competitive marketplace, continuously driving value for consumers. However, they require the implementation of open interfaces so that the devices and software solutions of different companies can interact with each other. The common key components for digital transformation are connectivity through open IoT or IIoT solutions and digital twins for enhanced data processing. Digital transformation will lead to a situation in which all the different aspects grow together into a single eco-system. The descriptions in Table 1 explain various other examples and should make the concept a little clearer, although certain pieces are yet to fully mature.
Stages of Informatization in NDE

Now that we understand the journey from analog to digital transformation at the fundamental data set level, and how it translates into some real-life everyday applications, it should be possible to see this transformation for any industrial application, including NDE.
Table 3 provides the result of a brainstorming session, analogous to Table 1 for the general life examples, on the changes from analog to digital transformation for various perspectives in NDE. Once again, it shows that the more sophisticated digitalization becomes, the more similarities we find between the various methods. Also, more and more of the steps required for an inspection are getting integrated (starting with NDE engineering). This follows what was already discussed by the authors regarding NDE and industry growing closer together with every revolution [1]. Eventually, even the individual digitalization tools of the players in the NDE eco-system will become one.
Digitization

Various NDE methods have their characteristic physical responses to material variations, which can be recorded in the form of a digital signal or an image using the digitization methods discussed above:

• Ultrasonic and eddy current: digital signal from the analog response ("0D" vs. time)
• Radiography: digital detectors or digitization of film using scanners, resulting in digital images (2D)
• Visual, dye penetrant, magnetic particle: digital cameras for image capture, resulting in digital images (2D) or video (2D vs. time)
• Thermography: digital images from IR cameras, resulting in a video (2D vs. time) or images (2D)
• Computed tomography (CT), synthetic aperture focusing technique (SAFT)/total focusing method (TFM): multiple radiographic or ultrasonic files used for the reconstruction of volumetric data (3D)

For the NDE process steps, for the different players within the NDE eco-system, and for NDE training and certification, digitization was mostly the step from paper to digital documents (in many cases text (1D)), for example by:

• Scanning hand-written or printed reports
• Creating reports using standard software packages like Office
• Submitting the digital documents and data using email or FTP
Digitalization

Unlike digitization, which depends on the fundamental data set, digitalization is quite independent of the data set. This leads to the situation that, while NDE digitization mainly differs between the different methods, NDE digitalization is largely method-independent. Conversely, the digitization of the various NDE processes is similar, but their digitalization shows some significant differences. Consider the different steps necessary for an NDE inspection and the different players or stakeholders in the NDE eco-system.
Signal ("0D" vs. time) Image (2D) Video (2D vs. time) Volumetric data (3D)
Data set
Paper-based, classroom
Paper-based, in person
Paper-based
Examination
Certification
Paper-based
Training
NDE Training & Certification
Asset OEM Operator-Owner NDE OEM Inspector
PDF
Computer-based
Computer-based training, video, PDF
Computer-based, easier accessibility, support with calculations
Paper based: handwritten, typed, Computer based, easier accessibility printed matter, blue prints marked up
NDE Eco-System: Aggregation of Steps
Text document (1 D)
Manual for QC (Pass/Fail)
Value extraction
Reports
Computer based, easier accessibility
Paper based: handwritten, typed, printed matter, …
Specifications, Text document work instructions (1 D) Manual on digital media for QC (Pass/Fail), quantification
Calculators, Spreadsheets,
3D Volumetric data
String of digital images (DVD player, )
Digital image
3.0
Arithmetic, calculus, log tables, slide rules
Film (VHS)
Film, photo, and print
Analog signal displayed on oscilloscope Signal displayed on digital device
Digital Technology Universality of Storage
NDE engineering (Apps, POD, ..)
NDE Process Steps
CT, SAFT/TFM
Visual, Thermography
X-Ray, PT, MT
Sensor signal UT, ET
NDE Methods
What's New Key Feature
2.0 Analog
Automated for all engineering disciplines and QA: digital twin, automated process improvements, connected world Data fusion, integration into digital twin, IIoT, connected world
Mainly for QA, isolated process improvement: decision assistance, reconstruction, aggregation, trending, prediction, maintenance program optimization Interactive, networked, mixed media, searchable, integrated with workflows, augmented reality
Blockchain, database records
Immersive, interactive, mixed media, remote, collaborative, simulation based, gamification
Mixed reality, tactile feedback Monitoring via cyber-physical loop, IoT, AI-based examinations Blockchain integrated in IIoT, connected world
Single eco-system, connected world, IIoT, digital twin
Data fusion, integration into digital twin, IIoT, connected world
Interactive, networked, mixed media, searchable, integrated with workflows, augmented reality
Probabilistic lifing, FEM, simulation, initial inspection planning Predictive maintenance, maintenance optimization Simulation, electronic planning NDE workflow systems, machine assistance
Digital twin, IIoT, cloud computing, edge computing
Data fusion, integration into digital twin, IIoT, connected world
Digital Transformation Eco-System, Business Models Connected & Collaborative
4.0
Computers, simulation, intelligence augmentation, CAD
Remote NDE, automated inspections, automated device settings, intelligence augmentation,
Digitalize Application for Convenience Specific, Isolated, or Standalone Process Simplification
Table 3 Result of a brainstorming session on the development from analog to digital transform
118 J. Vrana and R. Singh
The total inspection process is more than an inspector using NDE equipment. It begins with NDE engineering and continues with creating work instructions, running inspection tasks, gathering data and extracting data value, and documenting everything in reports. Those are processes, and all of them can benefit from digitalization solutions. This could mean the use of a digital workflow system for resource scheduling (both manpower and equipment), sending jobs to the individual inspectors, giving the inspectors the possibility to create structured reports using tablets or computers directly in the workflow system, tracking changes in the reports, and providing the reports to the customer without the need for email, phone, or text. Such a system could be extended, for example, by certification and vision-test tracking or by automatic quote and invoice creation. Digitalization of NDE could also mean the use of enhanced algorithms, like AI, for defect characterization and sizing. This is on top of digitized NDE.

Besides engineers and inspectors, various other stakeholders within the NDE eco-system [6], such as asset OEMs, NDE OEMs, and owner-operators, gain value from digitalization. For an asset OEM, tools like probabilistic lifing, FEM, or simulations are critical to determine and optimize the lifetime of their products; those are the asset OEM's main digitalization tools. What this means for NDE (both classical NDE and NDE sensors) is that it can be a good data source for design optimization tools. An owner-operator strives for cost-optimized maintenance planning – meaning their digitalization path leads them to predictive maintenance. Once again, both classical NDE and NDE sensors are a valuable data source for maintenance planning. The NDE OEMs' path of digitalization leads them to tools which simplify and optimize equipment or system development and design. This ultimately helps the consumer of NDE systems and devices.
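A structured report of the kind described above is, at its core, just a typed data record. A minimal Python sketch with hypothetical field names – no real workflow system's schema is implied:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InspectionReport:
    """One structured record as an inspector might file it from a tablet."""
    job_id: str
    inspector: str
    method: str                     # e.g. "UT", "RT", "PT"
    result: str                     # e.g. "accept" or "reject"
    indications: list = field(default_factory=list)
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

report = InspectionReport("J-2021-0042", "J. Doe", "UT", "reject",
                          indications=["indication at 35 mm depth, 4 dB over DAC"])
print(report.job_id, report.result, report.indications)
```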
Digital Transformation

While each stakeholder sees economic value in digitalization through the efficiency and effectiveness of systems, the real value of digital transformation comes when they are all part of the same ecosystem, where they breathe from the same data environment.

A vision for the digital transformation of NDE would be to use a robot from one company with an NDE source from a second company, an NDE detector from a third company, a data acquisition system from a fourth company, and a data post-processing system from a fifth company; to upload all the data to a cloud (from a sixth company); to integrate the inspection workflow within a software solution from a seventh company; to combine the NDE results with the results of some pre-existing destructive testing; and to perform a statistical evaluation of the data to enhance the design and lifing calculations – from yet another set of organizations. All in all, there could be dozens of organizations involved in creating value through the digital transformation of NDE.

As discussed before, key to such a digital transformation are the IIoT, to establish open communication, and digital twins, as the basis for enhanced data processing. The
tools used by the stakeholders of the NDE eco-system (like predictive maintenance or FEM) will become integral to the digital twins and will communicate using the same IIoT. Therefore, the NDE eco-system in particular, and the industry eco-system in general, will grow together into one big eco-system. This digital transformation also has huge potential for manufacturers of automated systems, as it will become easier to integrate all kinds of hardware and software without the custom implementation effort required today. Also, the integration into database systems will be possible using standard interfaces.

Digital transformation can also be applied to the hand-held inspection of a component: the job information is submitted using an NDE workflow system, which automatically shows the inspection instructions, sets the NDE instrument appropriately, and, if indications are detected, stores the image visualizing the indication together with tracking/location information – in a workflow system assembled from half a dozen different providers. All those results can be used by design experts to enhance the product or the production, or to more accurately calculate the remaining lifetime. And the certification status of the inspector is automatically checked by accessing the certification blockchain.

Digital transformation is at the heart of the fourth revolution and should be taken stepwise, with expert engagement. It is not a "do it yourself" (DIY) scenario yet. The number of technologies is large, and their integration creates so many opportunities that it can soon get overwhelming. Eventually, digital transformation will lead to a situation where all the pieces of the great puzzle come together to create the portrait of a better eco-system, and they all grow together.
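What such open interfaces could look like at the data level can be sketched as a machine-readable job message published over the IIoT; all field names below are illustrative assumptions, not an existing standard:

```python
import json

job = {
    "job_id": "2021-0042",
    "component": {"serial": "SN-1234", "type": "weld seam"},
    "instruction": "UT-A-17",                   # work instruction to display
    "instrument_settings": {"gain_db": 42.0, "range_mm": 150},
    "inspector_certificate": "PCN-UT2-998877",  # to be checked in the registry
}
# Any subscriber -- instrument, workflow system, digital twin -- can parse this:
print(json.dumps(job, indent=2))
```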
Informatization of Training and Certification

The informatization of training and certification started with the digitization of all the documents used; it continued with digitalization tools enabling immersive, interactive, simulation-based training and examinations, including gamification to maintain attention. With digital transformation, all those tools will get integrated to create a cost-effective and reliable training and certification system. The opportunities include realistic virtual training environments using the IoT, AI-based examinations, monitoring of on-the-job performance to evaluate the real training needs of a particular inspector, a certificate stored tamper-proof in a blockchain accessible through the IIoT, etc. All these will ensure that training is always current and complete, and that only certified inspectors get assigned to the tasks.
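The tamper-proof certificate store can be illustrated with a toy hash chain – a drastic simplification of a real blockchain, with invented record fields:

```python
import hashlib
import json

def add_record(chain, record):
    """Append a record whose hash covers the record and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True) + prev_hash
    chain.append({"record": record,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

chain = []
add_record(chain, {"inspector": "J. Doe", "method": "UT", "level": 2,
                   "valid_until": "2026-01-31"})
add_record(chain, {"inspector": "J. Doe", "vision_test": "passed", "date": "2022-03-01"})
# Tampering with an earlier record changes every later hash, exposing the change.
print(chain[-1]["hash"])
```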
Example: Informatization of Radiography

The sections above provided an in-depth perspective on the various elements in the digital transformation of NDE. Let us examine radiography as an example through this lens and complete the understanding.
Digitization of Radiography

The digitization of an X-ray can either be achieved by scanning the analog X-ray film or by directly using a flat panel detector. The digital signal is a monochrome image: for each pixel, one value is stored, in many cases a 16-bit integer. Those images can be stored and displayed on any computer and transported using the known methods for file transfer.

Digitalization of Radiography

While digitization is the pure conversion of the analog signal into a digital image, digitalization is the improvement of single processes. Some ideas to improve the processes of X-ray inspection by digitalization:
• Digital identification of the object to be tested
• Digital inspection job description
• Automated setting of the source and detector settings
• Automatic storage of the results
• Support for analyzing the results, for example:
  • Image recognition of the anomaly
  • Data trending
• Digitalization of the workflow
• Automated submission of the results to the customer
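To make the data concrete: a digitized radiograph is simply an array of 16-bit pixel values, and the "support for analyzing the results" item above can start with something as crude as flagging outlier pixels. A toy sketch assuming NumPy and synthetic data (real indication detection uses far more sophisticated, often AI-based, algorithms):

```python
import numpy as np

# A digitized radiograph: one 16-bit integer per pixel (0..65535).
rng = np.random.default_rng(seed=0)
radiograph = rng.normal(25000, 800, size=(512, 512)).astype(np.uint16)
radiograph[100:104, 200:204] = 32000  # synthetic "indication"

# Toy analysis support: flag pixels that deviate strongly from the image mean.
deviation = np.abs(radiograph.astype(np.int32) - radiograph.mean())
suspect = deviation > 5 * radiograph.std()
print(f"{suspect.sum()} suspect pixels out of {radiograph.size}")
```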
Digital Transformation of Radiography

For digitalization, isolated processes were improved, mainly in-house. Digital transformation continues this journey, but on a different level. Imagine an X-ray inspection using a digital RT machine, or film and a scanner for digitization. The customer issues a PO from their ERP system. The workflow system of the inspection company and the various ERP systems of the customers communicate with each other using open interfaces. Meaning: no need for emailing a PO – the PO directly appears in the workflow system – and no more changing the workflow software to match the various ERP systems at the customers. The communication just works, and it submits not only the PO but also all the connected documents (like the inspection requirements). Within the workflow system, an inspector is assigned and gets the details, including all documentation. Another software component takes the inspection requirements from the customer and uses them to set source and detector to the appropriate settings. The inspector is guided through all the components/positions requiring inspection by an augmented reality platform. The system automatically stores the results appropriately, and an additional software component automatically starts processing them, so that once the inspector is finished, the system automatically provides suggestions. At some point, if the inspector requires engineering support, a TeleNDE session is started; the engineer automatically sees all the relevant information and can provide support within minutes. All of this helps the inspector to focus on the main task: finding whether there is anything undesirable in the inspected artifact.
Finally, the results are automatically provided back to the customer so that they can use them internally for process improvements. Similarly, the results are used to identify potential training needs of the inspector, or design improvements for the asset. All of this happens with data transferred via the IIoT, data stored in the cloud, and computing happening in cloud- or edge-based digital twins. The digital twin may be offered and maintained by another provider as a service, and data becomes a tradable commodity. Over time, the net worth of the data continuously increases.
Summary

Digitization is the core of the third revolution. It is all about converting analog things into series of 0s and 1s, making the fundamental element universal. Digitalization marks the way to the fourth revolution, with means to process the 0s and 1s into meaningful value for convenient consumption; digitalization improves specific processes. Digital transformation crosses the borders between companies and systems, permitting the combination of all sorts of diverse digital and digitalization solutions. It helps build or select the best solutions for specific needs and even conveniently and seamlessly combine them with each other for enhanced value, because the fundamental element is so universal that it covers digital content and context. This unification of things improves processes and enables functionality that we may not have conceived before.

From an NDE perspective, digitization translates almost every method into the capture of text, signals, images, video, or volumetric data in digital form; workflow, analysis, and reporting happen in a digitalized manner; and the outcomes are leveraged as digital feedback loops to optimize life cycle cost or product design in the spirit of digital transformation. This digital transformation will eventually lead to a complete digital twin incorporating all digitalization solutions into one virtual eco-system. All for the benefit of the customer.
Outlook

Digital transformation means that every company can and should focus on its core competency and give everybody in the value stream the choice to digitally transform their business to their needs, because now they can engage using a universally acceptable string of digits. This mindset will allow the improvement of processes across company borders by establishing open interfaces and data transparency. In the short rather than the midterm, customers will require products which they can embed into their digitally transformed environment. Companies which want to keep selling must think about new models for customer retention. Proprietary data formats and interfaces will prevent customer retention; good products and devices that are easy to use and to connect will bring it.
At some point in the future, we will be adding another column to our understanding, representing the next step in informatization. But before that, we must work together to embrace digital transformation in a manner that is good for the NDE eco-system, the society, and the planet.
Cross-References

▶ Industrial Internet of Things, Digital Twins, and Cyber-physical Loops for NDE 4.0
▶ Introduction to NDE 4.0
▶ Value Creation in NDE 4.0: What and How
References

1. Vrana J, Singh R. The NDE 4.0 – from design thinking to strategy. J Nondestruct Eval. 2021;40:8. https://doi.org/10.1007/s10921-020-00735-9.
2. Vrana J. NDE perception and emerging reality: NDE 4.0 value extraction. Mater Eval. 2020;78(7):835–51. https://doi.org/10.32548/2020.me-04131.
3. Vrana J. Digitization, digitalization, digital transformation, and informatization. YouTube. 2020. https://youtu.be/8Som-Y37V4w. Published 21 July 2020.
4. Bloomberg J. Digitization, digitalization, and digital transformation: confuse them at your peril. Forbes. 2018. https://www.forbes.com/sites/jasonbloomberg/2018/04/29/digitization-digitalization-and-digital-transformation-confuse-them-at-your-peril. Accessed 27 Sept 2020.
5. Kluver R. Globalization, informatization, and intercultural communication. Am Commun J. 2000;3(3).
6. Singh R, Vrana J. NDE 4.0 – why should 'I' get on this bus now? CINDE J. 2020;41(4):6–13.
6
Improving NDE 4.0 by Networking, Advanced Sensors, Smartphones, and Tablets
Chris Udell, Marco Maggioni, Gerhard Mook, and Norbert Meyendorf
Contents

Introduction 126
Purpose and History of Portable NDE 127
Current Status of NDE 128
  Skills Shortage and an Aging Workforce 128
  Generation Shift 129
  Complicated Data Interpretation 130
  Incomplete Traceability 130
  Complex User Interfaces 130
  Obstructed Data Sharing 130
Current State of Mobile Technology 131
  User Interfaces 131
  Processing Power 131
  5G and Wireless Data Streaming 132
  Local and Cloud Storage 132
Overcoming NDE Shortcomings with Mobile Technology 133
  Training 133
  Ergonomics 134
  Screen Sharing 135
  Traceability 136
  Advanced Visualization: Augmented Reality 137
  Reporting 137
  Big Data 138
New Business Models with Connected Devices (Adapted from [11]) 139
  Hardware as a Service 140
Blockers to Adoption 140
The New Ecosystem for NDE and How Smart Devices Can Trigger New Business Cases 141
  Cellphone for NDE Teaching in Schools 142
  Student Ideas for New NDE Business Cases for "NDE at Home" 145
Summary 148
Cross-References 148
References 149
Abstract
Smartphones, tablets, and wearables play a major role in the development of consumer and industrial technology. This chapter will show the potential of using connected mobile devices, with the increase in versatility, the enhancement of productivity levels, and the ability to make the latest technology for NDE affordable. It begins with a critical review of the NDE industry and the risks and dangers of standing still while technology advances. The chapter then analyzes current innovations in the consumer market and shows how, with simple adaptations, these innovations can be brought in to improve the NDE industry throughout the value chain, to increase the acceptance and worth of NDE overall, and to open the door for new business cases.

Keywords
NDE 4.0 · Nondestructive testing (NDT) · Nondestructive evaluation (NDE) · Ultrasonic testing · Eddy current testing · Wireless · Wi-Fi · Future of NDE · Automation · Industry 4.0 · Smartphone · Industrial tablet · XAAS · Cloud · IIoT · Big data · E-learning · 5G
Introduction

Since the launch of touchscreen-based smartphones in the late 2000s, mobile devices have taken the world by storm (adapted from [1]). As the hub for modern communication, smartphones and tablets have created a new standard for communication, whether it is person to person or person to application. Mobile devices have enabled a new era where everyone expects a highly personal experience that grants access to instantaneous information. In order to keep up with these consumer expectations, mobile has become the primary starting point for new and evolving industrial technologies and the main operating system to manage all aspects of communication.

As the power of mobile continues to bring emerging technology to consumers, its ability to integrate the Internet of Things (IoT), cloud connection, augmented and virtual reality (AR and
VR), blockchain, and artificial intelligence (AI) will push society into the next generation of connectivity and communication. The adaptation of this consumer technology to NDE sensors and devices has several advantages related to postponed obsolescence, high computing power, higher flexibility, portability, and improved productivity levels, especially when the mobile phones or tablets are connected to the Internet and can be accessed from anywhere in the world. At its core, NDE 4.0 is powered by connectivity. Without connected devices, NDE 4.0 would be impossible.
Purpose and History of Portable NDE

Commercial instruments and transducers to perform nondestructive testing on materials were introduced in the 1960s and 1970s. Their purpose was to confirm that there were no defects above a target size, to identify and monitor significant defects, and to measure the size of these defects. Ultrasonic, eddy current, and radiographic testing are three examples that are still frequently used across many industries, including the steel, manufacturing, power, aerospace, oil and gas (O&G), and transportation sectors. The gathered information about the component being inspected can serve as an input to predictions of remaining life based on fracture mechanics, to ensure the material is manufactured to an acceptable quality, and/or to monitor the integrity of the structure through its useful life.

Since the introduction of these instruments, inspection has always involved the interpretation of a data plot by an expert operator, whether that is a live A-scan display, a Lissajous plot, or a radiograph image. To this day, this remains mostly unchanged. NDE is a highly standardized, proven technique that has been working as expected (and advertised) for decades. Of the three techniques, digital radiography and tomography are leading the change to digitization; however, they still have not fully replaced traditional film radiography.

The growth of NDE closely tracks technological progress in electronics, displays, and computer processing power. For example, with manual conventional ultrasonic testing, the last big change in the history of products was the move from analogue systems to digital ultrasonic flaw detectors. This shift made it possible for users to operate truly portable, handheld equipment. It also enabled operators to reduce errors and save time by providing them with useful features, such as gates or the automatic calculation of indication locations – even accounting for skip distances. Performance was affected by this switch, with early digital products having slower response times and poorer signal-to-noise ratios.

If we take an approach to inspection focused on the "jobs-to-be-done" [2], the customer of the inspection service needs a reliable result: for example, to confirm that their asset is suitable for use for a defined period, or to understand the extent of the defects found and so decide on the repairs needed – or to scrap the part. Typically, this result is in the form of a paper-based report that needs to be archived for a long period of time.
If the sensitivity for the inspection is set according to a common procedure, then the result should be the same with any equipment used, analogue or digital. If not specified in the procedure, the choice of instrument is down to the largely subjective personal preference of the inspector. The customer of an inspection does not typically care which instruments are used, or even which personnel performed the inspection. The customer needs a result, typically to the minimum standard of quality, and as cheaply and quickly as possible.
Current Status of NDE

We have identified and addressed six major areas of concern associated with end-to-end NDE inspection processes as described above. We have done so by reviewing relevant literature and by observing and interviewing NDE equipment users in manufacturing and service environments [3, 4].
Skills Shortage and an Aging Workforce

The first issue is industry-wide and relates to the shortage of experienced inspectors. This situation is expected to worsen, as a large percentage of skilled inspectors with field experience and difficult-to-replicate tacit knowledge accumulated over decades of work approach retirement age. However, the "outgoing rate" of expertise from the global pool of inspectors, while unavoidable, is not the only challenge for the NDE services industry. Far more acute, yet also entirely addressable, is the enduring issue of a low "incoming rate" of expertise. Namely, the rate of replenishment of the global pool of inspectors is subject to the entry barriers imposed by the long and costly training and experience required for an inspector to be qualified. For example, for an ultrasonic level 2 inspector, the minimum lead time to become certified in the PCN program is 9 months of experience [4]. Furthermore, the likelihood of people pursuing such certifications is also subject to the attractiveness of the inspector job, which itself is impacted by macroeconomics and industry dynamics. These factors are entirely outside an inspector candidate's control, yet they have a severe impact on their employment outlook and earning potential. And, while the mostly unabated ongoing rise in global GDP has been an advantage for the NDE industry, the recent downturn in the O&G industry has been a blight on it, resulting in lower demand for NDE jobs, which nowadays seem less and less appealing to newcomers in advanced economies. Figure 1 shows age profile statistics of qualified NDT operators compiled by BINDT (British Institute of Nondestructive Testing) [5].

With a large shortage of qualified inspectors, the industry needs to change: either by fully streamlining workflows so that experts spend their time more efficiently, or by de-skilling certain aspects of inspection jobs so that the entry level for a new stream of inspectors is lower, or both.
Fig. 1 Age profile of NDE operators taken from BINDT; the situation in the next 10 years is particularly concerning in the UK and Germany, with many inspectors approaching retirement age and too few new entrants replenishing the expertise and experience pool
Generation Shift

We are now witnessing a historical demographic shift in the workforce. Millennials are now the majority generation of workers, and they are beginning to occupy important positions. The millennial generation, born between 1981 and 1996, is a notably large and diverse group, so it is important to note that they cannot be homogenized. However, a common characteristic of this group is that millennials will disrupt how the world reads, writes, and relates. Millennials are disrupting retail, hospitality, real estate and housing, transportation, entertainment, and travel, and they will soon radically change NDE. We must adapt education, communication, and attitudes to work to meet their demands.

The millennial generation is unique compared to previous generations, for example, in terms of their expertise in technology, especially smartphones. The way millennials communicate is now real time and continuous. This dramatically affects the workplace, because millennials are accustomed to constant communication and feedback, so much so that the sharing and harmonization of data is becoming a key requirement. Millennials have attention spans that are shorter than those of the previous generation. Of course, this makes their communication style and way of life different, preferring bite-sized information to lengthy traditional day-long PowerPoint training sessions.
Complicated Data Interpretation

In analyzing measurement results of NDE technologies, competence in analysis, and therefore the quality of the findings, can vary greatly between operators. While experts often rely on unprocessed data to reach experienced conclusions, inexperienced users prefer processed and graphical images to interpret the results. An example is the use case of contemporary tube testing equipment with eddy current technology: these devices struggle with the influence of steel support plates within the field of detection, leaving considerable room for interpretation errors on the user's side.
Incomplete Traceability

To date, NDE equipment operators have had to manually document measuring procedures in such a way that they can prove that the required guidelines have been followed and that the instruments and probes used have been properly calibrated and verified. So far, this type of activity has been entirely manual. Furthermore, it has suffered from fragmented sources of information and a lack of overview of the operator's step-by-step workflow and of any changes or deviations from the original procedure. These issues lead to a lack of traceability along the process. The emergence of online forums is helping and supporting new operators in their development, thanks to the sharing of experience and data that can be used to improve their performance.
Complex User Interfaces

Most of the NDE equipment available looks complex and is also complex to use, as it has typically been designed by experts for experts, with the mentality that simplification equates to "dumbing down" the product. Not only the measuring process itself but also the presets intended to ensure the correct setup prior to the measurement exceed, in some cases, the understanding of the user. Harmonizing NDE interfaces with the consumer applications people use daily (for example, YouTube, Google Maps, and the smartphone's camera application) would be a good approach to simplifying them and reducing the learning curve.
Obstructed Data Sharing

One of the most time-consuming tasks today is processing the results after a day of collecting data in the field. Observations have shown that the time spent on this task can be one to two times the time spent on the measuring task itself. Communication of results has become an important element when interacting with colleagues on a large investigation site, the back office, suppliers, or
customers. In addition, the amount of data collected, especially when using technologies such as radar, ultrasonic, or eddy current array solutions, has increased significantly. Thus far, data has typically been stored on paper, in the NDE device itself, or on removable storage. Some devices have relied on manual export of data using such storage, followed by post-inspection analysis after importing the data into tools on a PC.
Current State of Mobile Technology

The smartphone, web-based email, wireless connectivity, and on-demand streaming are consumer products most of us take for granted. Mobile devices of the latest generation have shown how operations can be simplified to a level where even children or elderly persons can make use of them. We will show how NDE can use today's technology to remove the issues the industry faces, listed above, and to let the inspector's role focus on the value-adding operations that ultimately support the goals of the customer. The side effect is less effort expended on secondary, low-value-adding aspects of the role, such as reporting, data storage, and data management. The following sections go into depth on how this is or could be realized.
User Interfaces

The user interfaces of many of today's commonly used applications, social media networks, and websites, like Facebook, Dropbox, Zoom, and Amazon, have many layers of options for customization and optimization to cater for a wide variety of user needs. Even so, their development teams have designed the user interfaces to allow simple, intuitive usage without any training required, despite the multitude of applications and use cases enabled. And while a more detailed setup of advanced settings can be done in dialogs tucked away in the interface, new users can start using these tools immediately, without any prior familiarity with the product itself and without any need to read printed documents, such as leaflets with operating instructions or thick user manuals.
Processing Power

In terms of processing power, modern smartphones reach far beyond the first portable digital ultrasonic equipment of the mid-1980s, and such devices keep getting smaller and more powerful every year. As of today (2020), a typical smartphone or tablet has the same processing power as the latest TFM-enabled portable phased array ultrasonic system and is equivalent to a laptop PC from 2015 [6]. This performance increases every 8–12 months with the release of a new smartphone or tablet.
Where can this extra processing power take us in the future? Equipped with artificial intelligence techniques, today's systems can teach themselves to perform tasks almost as well as humans can. As trends evolve, new device use cases and experiences require more complex compute workloads, higher-quality low-latency video streaming, augmented and virtual reality, and improved productivity, which can be achieved through 5G-ready, always-connected smartphones and tablets. Demands for higher performance on captured NDE data will increase each year, improving applications and use cases such as automatic defect classification, sensor fusion, and integration of 3D data into BIM (building information modeling) tools.
5G and Wireless Data Streaming

Roughly every 10 years since 1979, a new generation of mobile communications has changed how we communicate with one another, further improving our way of life. As smartphones began to become popular around 2008, the demand for faster data and increased network capability was met by 3G technology. What made 3G revolutionary was the ability to surf the Internet, send emails, and stream music on mobile. 4G provided high-quality video streaming/chat, fast mobile web access, and HD video, and changed the way we communicate; because of this, the smartphone market started to really boom, with devices like the Samsung Galaxy S4 selling 80 million units worldwide. Current 4G wireless services already provide sufficient performance to support most types of video content commonly streamed today, but 4G has just about reached its capacity in terms of data transfer speeds. The vast majority of future NDE devices utilizing a smartphone or a tablet will rely on either GSM, Bluetooth, or Wi-Fi to communicate, depending on the use case. As described in Ref. [4], 5G will bring ultra-reliable low-latency communications (URLLC), allowing robust real-time data connections (latencies down to about 1 ms) even for devices moving at high speed (up to 500 km/h). Massive machine-type communications (mMTC) allow the connection of devices at high density (1 million devices/km2) and cheap, low-complexity mobile implementations. 5G's largest impact will be the promise of gigabit data transfer and video streaming. It will soon be possible to stream 4K ultra-high-definition (UHD) video, allowing, for example, detailed visual surface examinations from state-of-the-art RVI videoscopes to be carried out remotely.
Local and Cloud Storage

Mobile devices can make use of secure cloud storage solutions to share data and on-site results globally in real time and to create a permanent, secure, and traceable record of the inspection. Together with wireless communication through either a cellular network or Wi-Fi, this has become a powerful tool to immediately sync the data with collaboration partners or to distribute reports to external parties. Additionally, a browser-based software product allows access to the data
independent of location, time, and hardware platform. Within the secure network consisting of transducers, mobile devices, and cloud storage, raw data is exchanged. Predefined templates for common export file formats such as PDF or CSV are used to share results outside the secure ecosystem. Direct report generation and immediate access to the investigation data have been proven not only to enhance collaboration but also to result in significant time savings. The cloud will be disruptive to the traditional way of performing inspections at regular intervals of a component's lifetime. Alternative methods, such as condition monitoring, will allow embedded sensors to take measurements continuously. An embedded system does not need to be trained to the extent of a level 2 operator, but in some cases the end result for the customer will be the same. Some ultrasonic products and services will become obsolete, and some reduction in demand is expected from the adoption of new technologies. When the total running costs of a structure are included, NDE and condition monitoring together are key to maximizing value and extending useful asset life.
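As a concrete illustration of this workflow, the following minimal Python sketch shows how a mobile acquisition app might serialize readings into a CSV export template and push the raw records to a cloud endpoint. It is only a sketch: the field names, endpoint URL, and token are invented for the example and do not correspond to any vendor's actual API.

```python
import csv
import io
import json
import urllib.request

def export_csv(readings):
    """Serialize measurement records into CSV text for sharing outside the ecosystem."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["timestamp", "position_mm", "thickness_mm"])
    writer.writeheader()
    writer.writerows(readings)
    return buf.getvalue()

def sync_to_cloud(readings, url, token):
    """Upload the raw readings as JSON to a (hypothetical) REST endpoint."""
    request = urllib.request.Request(
        url,
        data=json.dumps({"readings": readings}).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status  # e.g., 200 on success

readings = [
    {"timestamp": "2020-06-01T09:14:05Z", "position_mm": 120, "thickness_mm": 7.9},
    {"timestamp": "2020-06-01T09:14:12Z", "position_mm": 140, "thickness_mm": 7.6},
]
print(export_csv(readings))
# sync_to_cloud(readings, "https://cloud.example.com/api/inspections", "ACCESS_TOKEN")
```

In a real product the upload would be queued and retried while the device is offline, so field work is not blocked by connectivity.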
Overcoming NDE Shortcomings with Mobile Technology

In the following sections, we go into the details of how NDE can use today's technology to address the issues mentioned. We also demonstrate how addressing these issues allows the inspector to concentrate on value-adding operations that ultimately support the end customer's goals. One side effect is less effort spent on secondary, low-value-adding aspects of their role, such as reporting, data storage, and data management. Other, farther-reaching side effects are the beneficial impacts of these ideas, concepts, and countermeasures on the role of the inspector itself, on its attractiveness and future orientation, and by extension on the trajectory of the inspector pool and the wider NDE industry.
Training

Any change of equipment or process subjects its target adopters to a learning curve. Many instruments used in NDE are complex to use and require days, weeks, or even months of training. If users do not work with such complex bespoke software often, they have to go through the training notes and the learning curve again to refamiliarize themselves and understand how to create a setup to start an inspection. Nowadays, online videos represent an instant source of knowledge for millions of people. If one video is not enough, there will surely be another in the sidebar (of, e.g., YouTube) that can fill any gaps in knowledge. Quick-and-dirty how-to videos have greatly accelerated the process of skill acquisition. Video classes are far more engaging and interactive than print media, and numerous subjects that are difficult to understand are made easy through videos. It is easier to learn quickly by watching, and it has been shown that
Fig. 2 An ultrasonic simulator from EXTENDE. The operator can simulate inspecting a large database of welds. The exact A-scan from this simulation is displayed on a tablet-based ultrasonic flaw detector
viewers retain 95% of a message when they watch it in a video, compared to 10% when reading it in text [7]. Such videos can therefore live on the student-inspector's smartphone or tablet. Even in the field of NDE education, mobile devices are replacing bulky textbooks and inconvenient classroom training. The newest-generation mobile devices feature video, audio, and on-screen drawing capabilities that enable efficient collaborative analysis between inspection team members. Modern smartphones and tablets allow users to practice and familiarize themselves with upcoming inspections based on real data and real defect signals (not machined notches or perfectly circular drilled holes) through the use of simulators, as shown in Fig. 2, and these systems could form part of a "trade test" to qualify operators before they are mobilized on-site.
Ergonomics

As with all jobs in which an operator uses equipment for hours at a time, operator comfort is linked to improved inspection performance and productivity in NDE, too. Thinking about how the device is handled must dictate design to a significant extent. Manual ultrasonic and eddy current operators hold the sensor in their dominant hand; in the other hand, they hold and control their display device. This two-handed operation means that only part of the screen is easy to reach with the thumb when the device is held in one hand. This area is called the thumb zone. It varies between devices based on screen size and is shown in Fig. 3.
Fig. 3 Thumb zone of a tablet-based ultrasonic flaw detector for a right-handed operator, who is holding the iPad in his left hand
An inspector sometimes works with the NDE device for 12 h per day and is thus exposed to physical strain. In the design shown, the electronics have been split away from the screen, so the operator can stow the heavy parts, like the battery and base unit, on a belt or in a backpack and carry only the iPad (which weighs about as much as a can of soft drink), taking strain off the wrist. Existing devices have typically forced a trade-off: either a big screen on a heavy device, or a low-weight device with a tiny screen that causes eyestrain. The modularity of using a mobile device allows a big screen with reduced weight.
Screen Sharing

The COVID-19 pandemic starting in late 2019 has accelerated digitization initiatives and has put increased emphasis on remote working. A growing number of NDE organizations are using digital technologies to bring level 3 inspectors to an inspection virtually in order to witness and verify the quality and integrity of data against company procedures or industry standards. Remote technology, using proven and readily available smartphones and tablets, allows inspections to be streamed live online from distant locations using freely available apps like Zoom, Skype, WebEx, and TeamViewer. Remote technology can quickly provide a second opinion on a spurious indication and reduce the overall chance of a false call. Live streaming of inspections and surveys from distant locations also lowers travel costs, risk, and damage to the environment. The application of remote inspection has increased efficiency by connecting the client's contact person with an operator located at the asset and an NDE specialist who can remotely walk through all the steps of the inspection and the received data. The mobile device simply needs an Internet connection (Wi-Fi, 4/5G) to connect all stakeholders in real time, wherever in the world they may be.
Traceability

Mobile devices connect most of the human population and can be tracked for location and usage, making the fleet management of an inspection service provider more insightful. Recent commercial activity trackers have transformed the analytics available to runners; similar advantages can be gained by transferring this technology and these ideas to the NDE market, logging all activities and relevant setting changes performed by the user. Figure 4 shows how this "Logbook" works in the Proceq Live products. Information stored by the logbook includes user identification, settings, all measuring data, and changes, and can be complemented by geolocation, pictures, and audio comments. This functionality allows a supervisor to retrace the complete measuring process, if necessary, and ultimately to check data consistency and prevent data manipulation. As NDE reports are legal documents, they typically require approval, usually from a highly trained and qualified level 3 inspector. This stakeholder cannot oversee all activities when an inspection is undertaken, especially if he/she is responsible for a large team; the approval signature therefore requires a lot of good faith. With activity tracking, the level 3 inspector can have extra confidence that the inspection was carried out to procedure and that the customer receives an inspection of the required quality. We may also observe a "Hawthorne effect" [8], where performance improves because the inspector knows they are being tracked and observed. Despite the right tools and procedures existing, recurring certification and traceability scandals still haunt the NDE industry. Kobe Steel lost over $300 million of its
Fig. 4 Logbook example of a Proceq Live product, which tracks the measuring process. The comprehensive logbook describes the device and names the user; it logs settings and measurement parameters and records readings and exclusions of readings. Moreover, it permits adding photos and notes
value [9] and is still losing brand value due to the falsification of inspection data uncovered in 2017, and SpaceX, one of the most well-funded start-ups of our generation, had an issue with a supplier falsifying inspection reports [10]. In combination with mobile devices and activity tracking, blockchain can play an important role in identifying inspections carried out unethically, which potentially compromise the safety of the public. As an emerging technology, blockchain is being explored in various industries, including health care, manufacturing, insurance, banking, and education. Recently, discussions on using the technology in the inspection industry to solve the industry's problems associated with trust have also gained momentum. Blockchain would allow a digital report and signature to be identified, validated, stored, and shared, ultimately providing inspection data authentication. Blockchain may make some asset owners adopt NDE 4.0 earlier, due to the increased trust gained and to limit financial damages in the event of a failure.
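To illustrate the underlying idea, the sketch below chains logbook entries with SHA-256 hashes so that any later edit to a stored entry invalidates every subsequent hash; this is the core mechanism a blockchain-backed inspection record would build on. It is a conceptual sketch only, not any product's implementation, and the entry fields are invented.

```python
import hashlib
import json

def add_entry(chain, entry):
    """Append a logbook entry, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256(
        json.dumps({"entry": entry, "prev_hash": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    chain.append({"entry": entry, "prev_hash": prev_hash, "hash": digest})

def verify(chain):
    """Recompute every hash in order; any edited entry breaks the chain."""
    prev_hash = "0" * 64
    for record in chain:
        expected = hashlib.sha256(
            json.dumps({"entry": record["entry"], "prev_hash": prev_hash},
                       sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected or record["prev_hash"] != prev_hash:
            return False
        prev_hash = record["hash"]
    return True

chain = []
add_entry(chain, {"user": "insp-042", "action": "gain set to 48 dB",
                  "time": "2020-06-01T10:02:00Z"})
add_entry(chain, {"user": "insp-042", "action": "reading recorded: 7.6 mm",
                  "time": "2020-06-01T10:02:30Z"})
print(verify(chain))  # True; altering any stored value makes this False
```

A distributed blockchain additionally replicates such a chain across parties so that no single stakeholder can rewrite history unnoticed.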
Advanced Visualization: Augmented Reality

One of the most time-consuming tasks today is communicating a result in the field. Communication of results has become an important element when interacting with colleagues on a large investigation site, the back office, suppliers, or customers. A Lissajous plot or a weld sketch annotated with a 3.5 mm equivalent reflector size (ERS) is often insufficient and confusing to the welder who must carry out the follow-up repair activity. This confusion can lead to mistrust of the NDE activity performed. Good visualization gives an intuitive, easy-to-understand view of measurements and radically boosts on-site data interpretation and communication. Colour-coded C-scans representing inspection data have existed on high-end equipment for some time and have helped the communication of results radically. Recent advances in tablet technology take this a step further by allowing an augmented volume to represent the C-scan via an iPad screen, as shown in Fig. 5. In this particular GPR application, the results show the rebar under the surface. This projected image can be given to an individual untrained in analyzing complex data and clearly shows them where they should and should not drill a hole to avoid hitting a rebar.
Reporting

Direct report generation and immediate access to the investigation data have been proven not only to enhance collaboration but also to result in significant time savings. The time to create a report varies significantly depending on the number of defects to be reported; users with an automated reporting template were significantly quicker. With connected devices, we can also change the approach to reporting. Typically, reports are written after the inspector returns to the office, meaning the asset owner
Fig. 5 Under surface rebar displayed as augmented reality through a tablet screen. Here the position of the rebar under the surface can be clearly and unambiguously analyzed
waits until then to learn whether the asset needs repair work or can be returned to service. A connected device could report any findings directly at the point of inspection, and even at the detection of an indication. While the preparation time for the measurement is very similar, the time for verification and the actual measurements is greatly reduced using the latest onboard technology. Reporting via IoT significantly reduces the time to report and eases data sharing. With a connected device, the reporting of indications can be instantaneous, so that the asset owner is informed of any issues sooner and can start any rework activities much earlier. The time the inspector saves on reporting can help increase the working capacity of existing level 2 and level 3 inspectors.
Big Data

Storage space on mobile devices is huge compared to just a few years ago. The latest generation of iPad has up to 1 TB of storage; earlier generations came with as little as 8 GB. This means we can store more data, either locally on the device or in the cloud. As a lot of inspection data is currently lost, in some cases filtered down to a single-page report for a full day's work, smartphones and tablets can be considered the interfaces that enable us to acquire a considerable mass of inspection data on assets. This leads to a field known as big data, which refers to large, diverse sets of information from a variety of sources that grow at ever-increasing rates. With big data, synergies of NDE inspection services can be found with online monitoring solutions like condition monitoring and predictive maintenance. Smart sensor-based monitoring systems provide permanent, real-time data, continuously informing the asset owner of part integrity and fitness for service.
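As a toy example of the continuous screening such monitoring enables, the sketch below flags wall-thickness readings from an embedded sensor stream that drop below a minimum allowable value. The threshold and the readings are invented for illustration; a real fitness-for-service assessment would follow the applicable code.

```python
MIN_WALL_MM = 6.0  # assumed minimum allowable wall thickness for the example

def screen(stream, threshold=MIN_WALL_MM):
    """Yield an alert for every reading below the assumed threshold."""
    for timestamp, thickness_mm in stream:
        if thickness_mm < threshold:
            yield f"ALERT {timestamp}: wall {thickness_mm:.1f} mm < {threshold:.1f} mm"

sensor_stream = [
    ("2020-06-01T00:00Z", 6.8),
    ("2020-09-01T00:00Z", 6.3),
    ("2020-12-01T00:00Z", 5.9),  # corrosion has thinned the wall below the limit
]
for alert in screen(sensor_stream):
    print(alert)
```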
Fig. 6 The continuous improvement model for big data from NDE devices. Components will be made stronger, safer, and easier to inspect. Components can be designed for NDE
Gathering far more data will lead to powerful cases for extending life limits on parts or for lengthening the intervals at which parts must be tested. Smart condition-based maintenance programs, specifically tailored to an individual part's condition, history, and predicted mechanism of failure, will allow a move away from inefficient and often costly scheduled maintenance programs. Furthermore, as shown in Fig. 6, with the big data generated by modern equipment, suppliers can develop new components that avoid specific failures and eliminate unused features, so that service inspections are required less frequently. Components can be designed for NDE, making them simpler to inspect. The capture and storage of large amounts of data does bring new issues that need to be solved, namely ethics and security. Ethical issues come to the forefront because sensitive information, like the part geometry of nuclear power plants, photos of military equipment, and acceptable existing defects in aircraft, must be stored somewhere. The asset owner needs to trust companies to use their data ethically and to pay close attention to how they manage and protect their data, especially if it resides in a remote cloud.
New Business Models with Connected Devices (Adapted from [11])

Connected devices that enable a subscription-based pricing model have great advantages for the user. It becomes possible to manage the stock and to activate or disable subscriptions depending on the annual demand forecast over the coming years. The maintenance schedules and annual testing of the connected devices are viewed as a significant added-value task, leading to reduced product downtime. The companies analyzed had limited cash flow, so a large upfront cost to upgrade equipment required several approval signatures from senior management, with long delays and high handling costs. If the procurement teams of large inspection providers have a more comprehensive understanding of the fleet's actual use and utilization,
better buying decisions can be made with less time and cost for approvals. Subscription-based NDE solutions incorporate measures to help users maximize the use of the equipment and achieve maximum cost savings. Purchasing NDE equipment has traditionally meant a one-time, upfront capital expenditure (CapEx). The purchase process can be costly, time consuming, and rife with approval bottlenecks focusing on the one-time expense instead of on the long-term value the equipment's use will generate for the customer. In recent years, new business models have been introduced into the NDE equipment market. These make it possible for the user to purchase equipment (hardware) for a price significantly lower than in the traditional one-off approach and at the same time subscribe to the usage of the equipment on a recurring basis (e.g., annually). The result is that capital expenditures are minimized, and the value-providing uses of the equipment are treated as an operating expense (OpEx). Subscriptions are nothing new; they have existed for decades in the consumer market, e.g., for newspapers, premium TV, mobile telephones, and gyms. Most of us are no strangers to subscribing to services. In fact, the average amount consumers spend on such subscriptions is increasing significantly [12].
Hardware as a Service

Hardware-as-a-Service (HaaS) is a subscription-based business model that has been transforming the IT industry in recent years. Under HaaS, customers pay for services, not for things as in a typical one-time purchase model. It has been shown that the way individuals and businesses consume hardware products is changing. A study by McKinsey & Company [13] revealed that business owners increasingly prefer subscriptions over traditional methods, thanks to a subscription's flexibility and reduced costs. For NDE equipment purchases, the opportunity to change to a subscription model is much more complicated than in the consumer market: legacy equipment can be specified in maintenance manuals, and users can become accustomed to the specific way such equipment works, raising barriers against change.
Blockers to Adoption

While the step to today's technology could be taken rather easily everywhere, the speed of adoption in the industries requiring NDE services has been too slow to keep the industry healthy and competitive, and established workflows require updating. Across hundreds of interactions, the typical roadblocks to the uptake of such powerful and versatile digital platforms in NDE were found to be:
• Regulations; e.g., usage of wireless data transmission in oil & gas and nuclear environments
• Restrictive company rules; e.g., companies blocking smartphones or cameras
• Long-standing habits of users and stakeholders; e.g., maintaining inefficient outdated workflows
• Security concerns regarding external data storage; e.g., blocking cloud-based or even USB backup solutions
• A general mistrust of technology-driven systems and services
• The increasing role that productivity and speed play in the customer's needs: early adopters not only need to see their immediate benefits but also to understand the advantages of adopting the associated new business models
The New Ecosystem for NDE and How Smart Devices Can Trigger New Business Cases

The new affordable hardware will not only help to improve the present NDE business but also opens the opportunity for completely new business cases. NDE science is still trying to understand the physical mechanisms of nondestructive inspection methods using isolated systems that are usually not capable of communicating and exchanging data with other production devices. However, everyone is familiar with cellphones and tablet computers making the world's knowledge and a large amount of data available to anyone at any time and place. These highly powerful but widely available electronic devices incorporate various sensors in the form of cameras, microphones, vibration sensors, and accelerometers. Other smartphone-attachable tools are available for purchase, like IR cameras [14], ultrasound pulser/receiver units [15], terahertz arrays [16], and eddy current transducers [17]. Smartphones and tablets can thus themselves become affordable and easy-to-use measurement and NDE devices for everybody. Mostly, this is being applied, or even reinvented, without users being aware that it is NDE. This means that the new hardware must be not only the NDE frontend but the whole NDE system: easy to handle, affordable for everybody, and usable for household applications not considered in NDE so far. Using these tools is as simple as downloading an app from the App Store and attaching the removable device to the phone; that is literally everything necessary to start taking measurements. For the younger generation (a generation, unfortunately, not much involved in NDE jobs today), this technology is self-evident, and they possess a natural flair for it. Merging the highly specialized knowledge of NDE techniques with today's technology will open new markets for NDE 4.0. These new handheld devices will be applied to perform NDE. As a benefit, product inspection at home can become an additional component of monitoring the life cycle of a product. This might
significantly increase the acceptance of NDE 4.0 by solving new inspection problems in everyday service. In the following, an example of such a "cellphone NDE device" is presented. It has been successfully used to introduce NDE to undergraduate students and to inspire them to create new ideas on how NDE can gain broader acceptance.
Cellphone for NDE Teaching in Schools

Using affordable and almost self-explanatory apps can help make more students aware of the potential of NDE and improve NDE training in lab classes. The following is an example of eddy current demonstration and training. Eddy current inspection may be learned directly on a tablet or smartphone. The students work with the beloved devices they know best, which makes first contact with a complicated inspection method not only less painful but even creates a certain enthusiasm [18, 19].
Hardware

An Android smartphone running Lollipop (API 21) or higher is required. The sound system and the processor capability significantly influence the results; the best results were obtained with the Samsung Galaxy and Note and the Moto G series. The eddy current transducer plugs directly into the audio jack of the phone. For that, the transducer is equipped with electronics simulating a headset. The pin-out of the audio plug should match the selected phone. The app for Android smartphones uses the audio jack as the probe interface. The eddy current probe is fed by the audio output, ranging in frequency up to 20 kHz, and the field strength can be controlled via the phone's volume. The received signal is connected to the microphone pin of the audio interface. All signal processing is done digitally and exploits the high capabilities of current devices. Figure 7 shows the kit.

Fig. 7 Smartphone as eddy current instrument. The kit contains a probe and reference pieces for many eddy current inspection tasks
The cellphone hardware allows one to generate and receive frequencies up to 20 kHz; therefore, the frequency range between 1 kHz and 20 kHz was chosen for this application, and the provided probe is tuned to this range. Third-party low-frequency probes of the transformer type can be attached using a special adapter cable containing some electronics. The field strength of the probe is adjustable via the volume control of the phone, and overdrive is indicated.
The Eddy Current App

Only the transducer has to be inserted into the earphone jack of the smartphone; the eddy current app then turns the phone into an NDE device with the display as the user interface. The software accomplishes signal generation, demodulation, amplification, phase rotation, and filtering. The demodulated signal is shown as a flying dot in the xy-plane. The user can adjust all essential settings directly from the touch screen. The dot movement may be recorded and afterwards modified in gain, phase, and filter settings. Additionally, a yt-indication is possible (Fig. 8).
Fig. 8 Eddy current signals of different materials in the xy-plane on the smartphone screen
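To make the signal chain concrete, the sketch below shows a software lock-in demodulation of the kind such an app performs: the headphone output drives the coil with an audio-band sine tone, and the microphone-channel return is mixed with quadrature references to produce the x/y position of the "flying dot." All parameters and the simulated probe response are invented for illustration; this is not the actual app code.

```python
import numpy as np

FS = 48_000   # assumed audio sampling rate (Hz)
F_EXC = 8_000  # assumed excitation frequency within the audio band (Hz)

def excitation(duration_s):
    """Sine tone sent to the probe via the headphone output."""
    t = np.arange(int(FS * duration_s)) / FS
    return np.sin(2 * np.pi * F_EXC * t), t

def demodulate(rx, t, gain=1.0, phase_deg=0.0):
    """Software lock-in: mix with quadrature references, average as a low-pass."""
    i = 2 * np.mean(rx * np.cos(2 * np.pi * F_EXC * t))  # in-phase component
    q = 2 * np.mean(rx * np.sin(2 * np.pi * F_EXC * t))  # quadrature component
    dot = gain * (i + 1j * q) * np.exp(1j * np.deg2rad(phase_deg))
    return dot.real, dot.imag  # x/y position of the "flying dot"

tx, t = excitation(0.02)
# Simulated probe response: attenuated and phase-shifted by the test material
rx = 0.4 * np.sin(2 * np.pi * F_EXC * t - np.deg2rad(35))
print(demodulate(rx, t, gain=2.0, phase_deg=10.0))
```

The gain and phase-rotation parameters correspond directly to the wipe gestures described below; the rotation simply multiplies the complex demodulated signal by a unit phasor.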
Figure 9 illustrates how to balance the instrument by tapping the balance circle at the origin of the xy-plane. The origin itself may be shifted, according to the signal behavior, to one of the nine highlighted points. This way, the display area is best used for showing the whole signal. While the origin is located in the middle by default, it was shifted in Fig. 10 to the lower right corner because the signals are oriented toward the upper left-hand side. All inspection parameters can be set and modified on the touch screen. For example, wipe gestures set gain, y-spread, and phase. The corresponding values
Fig. 9 Left: Balancing and offset may be adjusted by tapping dedicated areas in the xy-plane. Right: Gain and phase are set by wiping gestures on the screen. Automatic lift-off setting is initiated by a long tap on the phase button: while the probe is lifted, the flying dot crosses the phase circle and the phase is turned accordingly
Fig. 10 Left to right: Surface crack inspection (slots of different depths in aluminum simulate surface cracks in highly conductive material; slots in austenitic steel represent cracks in poorly conductive but slightly ferromagnetic material) and wall reduction in aluminum at 4 kHz and 12 kHz. At 4 kHz all reductions are easily detectable, but the phase spread is low; at 12 kHz the phase spread is significantly better, but due to the lower penetration not all reductions can be detected
are displayed in the xy-plane as well as on the buttons. When the probe is lifted, the signal path is rotated as it crosses the phase circle. Other implemented features are:

• High and low pass filters
• Threshold
• Default setting
• Recorder
• xy- and yt-mode
• Bluetooth and Wi-Fi transmission of the demodulated signal
Like other commercial eddy current instruments, EddySmart provides the opportunity to display the y-component of the signal in time mode. Here, the filter settings may be carefully practiced; this mode prepares the student for work with automatic production-line equipment. As visible in Fig. 7, the EddySmart device is delivered with optimized test specimens for demonstrating different inspection tasks.

Materials Characterization (Sorting)

The round blanks and the aluminum body of the kit represent a broad conductivity and permeability spectrum. When the probe approaches the blanks, so-called lift-off lines of different lengths and orientations are drawn (Fig. 8). Based on these two parameters, the material can be identified.

Defect Detection (Surface Cracks and Wall Reductions)

Narrow slots represent surface cracks. Two reference pieces of different materials provide the opportunity to compare crack signals: one is made from highly conductive aluminum and the other from poorly conductive but slightly ferromagnetic stainless steel. For both materials, optimal inspection parameters have to be found. Figure 10 (left) shows the signal pattern. The reference pieces may be removed from the aluminum body and flipped to simulate hidden cracks. Eddy currents at low frequency are used to obtain a high penetration depth. The supplied reference piece has milled grooves on the back side. The student learns to select a suitable frequency according to the penetration depth and phase spread; this is the basis for remaining-wall-thickness assessment. Figure 10 (right) shows this situation. With only a very brief introduction to eddy currents and without any instrument training, the undergraduate students in a laboratory class were able within minutes to operate this instrument and perform the above-illustrated NDE tasks. Even more, they were interested in developing new projects for "NDE at home."
Student Ideas for New NDE Business Cases for "NDE at Home"

Student groups were formed in an NDE basics class to create product ideas that illustrate the potential of this new generation of cellphone-based NDE instruments,
assuming NDE apps will be available. The following are examples created by student teams from Iowa State University [20].
Self-Inspection of Used Automobiles

Purchasing a used vehicle comes with many risks. Especially following a large natural disaster such as a hurricane or flood, the used car market experiences an influx of vehicles with flood damage [21]. A student team suggested a mobile phone application that allows a user to run various NDE tests and perform a self-inspection, converse with NDE experts to help identify problems, and access the full maintenance and inspection history of a vehicle. For instance, the feature using the microphone on the phone to listen to and record the sound of the running engine is innovative. As the car is running and driving, the phone collects sound data. This data is then compared to the sound of a similar brand and model of car running at peak performance, as well as with various problems (a minimal sketch of such a spectral comparison appears after the household examples below). Problems the audio inspection could detect include, for example, transmission issues such as bad gears or clutches, braking issues such as squeaking or bad antilock brake systems, and engine issues such as bad alternators, timing belts, fans, and valves. The students suggested developing an app that includes the ability to speak with an NDE expert, who would assist the consumer in inspecting the vehicle. By using both the front and rear-facing cameras, seeing the display in the mobile app, and holding an audio and visual chat with the consumer, the expert can guide the user in placing transducers and probes, as well as in what to look for on the display. The students even suggested a global NDE service network for the detection of flood damage in consumer vehicles. Finally, a database of repair and inspection history allows consumers to know as much about the car as possible. Similar to CARFAX, the repair history is saved by VIN; with this app, the previous inspection history is also saved. This allows consumers to recheck flaws found in previous inspections to determine the deterioration rate of the vehicle.

House NDE Applications Using Cellphone-Attachable Infrared Imaging

Households have various areas where NDE can help prevent loss of efficiency or dangerous situations. A cellphone app would be a useful way to track the various areas being monitored. Permanent sensors and cameras could be used in conjunction with handheld sensors and cameras attached to phones. The student team identified six different areas where this hypothetical app would be useful, each assigned a specific NDE method (or in some cases, multiple methods) for the analysis. Three examples for which cellphone-attachable infrared cameras were suggested:

• Heat loss detection. This would identify heat leaks and reduce energy bills, which is, of course, a standard application for thermography. By using the capabilities of a smartphone, this information could immediately be shared with specialists who can give advice on ways to reduce the heat loss.
• Electrical overheating. This is one of the most common causes of house fires in developed countries. This type of overheating may or may not be the fault of the user and is difficult to detect visually [22, 23].
• Faulty ventilation. Faulty ventilation of a house can cause numerous problems, such as leaks, moisture, or the dispersal of dangerous gases like carbon monoxide, natural gas, high levels of carbon dioxide, and radon.
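As promised above, here is a toy sketch of the engine-sound comparison idea: the recorded audio's normalized magnitude spectrum is correlated against a healthy reference, and a low similarity score would flag the vehicle for expert review. The signals, sampling rate, and fault component are all invented for the example; a real diagnostic would need labeled recordings from real vehicles.

```python
import numpy as np

FS = 8_000  # assumed audio sampling rate (Hz)

def spectrum(signal):
    """Magnitude spectrum, normalized so loudness differences cancel out."""
    mag = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    return mag / (np.linalg.norm(mag) + 1e-12)

def similarity(recording, reference):
    """Cosine similarity between the two normalized spectra (1.0 = identical)."""
    return float(np.dot(spectrum(recording), spectrum(reference)))

t = np.arange(FS) / FS  # one second of audio
healthy = np.sin(2 * np.pi * 30 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)
# Simulated fault: an extra rattle component at 137 Hz
faulty = healthy + 0.5 * np.sin(2 * np.pi * 137 * t)

print(similarity(healthy, healthy))  # ~1.0
print(similarity(faulty, healthy))   # noticeably lower; would trigger review
```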
Assisting Visual Inspection of Glass Defects with Cellphones

Visual inspection uses our individual senses of sight and touch to identify defects, primarily surface-breaking ones. The students suggest using the camera of a cellphone to support visual inspections: to capture, magnify, enhance, and analyze images, and to use the flashlight on the phone if extra light is required. Special macro lenses for magnification are available. Camera apps are already available to take HDR photos, as are photo-editing apps to enhance defects via contrast, sharpness, etc. An additional suggested feature would use polarized light to visualize local stresses in transparent objects (stress optics) [24]. The only equipment required to perform this test would be a white light source and two polarizers. As an add-on, a cellphone camera can be used to observe and record the image, and an app could be developed to further analyze it.

Unmanned Aerial Systems for Pipeline Leak Detection and Inspection

The idea presented by the next group of students is already at the industrial application stage. Government regulations require leak detection capabilities in pipelines for environmental and safety reasons, and companies want to find leaks to avoid losing the material they are transporting. To accomplish this, an unmanned aerial system (UAS, or drone) can be used to automate the inspection process. UAS have been used in NDE before, but typically for visual inspection of bridges or manufacturing plants [25]. By attaching a thermal camera to the drone, any leak, or even a reduced wall thickness, can be detected in the pipeline via variances in the thermal profile. The main drawback of the UAS is that it only uses visual inspection techniques to detect defects, although new products are coming to market with the ability to make ultrasonic thickness measurements from a drone [26]. To provide more complete coverage in the most critical areas, it is also proposed that the drone deliver a rover to each weld to inspect it using eddy current and ultrasonic techniques. The rover would magnetically attach to the pipe and complete one rotation around it, thereby inspecting the whole weld; meanwhile, the drone would inspect the next segment of pipe both thermally and visually. Once both have completed their tasks, the drone would return to the rover, pick it up, and deliver it to the next weld. These are only some examples and ideas from undergraduate students resulting from their first lessons in NDE. Established NDE specialists can be inspired by such new ideas, which can create a significantly larger NDE market than we have today. No doubt these inventions will happen, but they can be realized more efficiently if new technologies are combined with the competence of NDE experts.
The authors acknowledge the following students for providing the ideas presented in this section: Spencer Tinker, Nathan Chapdelaine, Ashton Mellinger, Amanda Nicole Kewitsch, Sam Conley, Connor Born, Matthew Fuller, Olatunji Odesanya, James Edwards, Marlin Francksen, Joseph Mason, and Henry Owusu.
Summary

Humans change. Technology changes. NDE must change. We have illustrated the many advantages that the combination of NDE and the latest advances in consumer technology yields, and how these technologies will disrupt the traditional way of performing inspections throughout the value stream: from training and preparation, to ergonomics and collaboration while undertaking the inspection, to managing the life cycle of collected inspection data, to new business models making the latest technology affordable for all. As such, early adopters and fast followers will develop a learning advantage and reap the cumulative benefits of IoT-enabled NDE earlier. We are already beginning to see speech recognition, AR, and pattern recognition of streamed data being integrated into mobile platforms, with the opportunity to further improve the usability and reliability of ultrasonic inspections. The IoT will be disruptive to the traditional way of performing inspections at regular intervals of a component's lifetime. When the total running costs of a structure are included, NDE and condition monitoring together are key to maximizing value and extending useful asset life. The time saved and the productivity increases that inspectors gain from using mobile NDE devices can help increase the working capacity of the existing, scarce level 2 and level 3 inspectors. This can help alleviate the impending skills shortage. The last example, EddySmart for student training, demonstrates how cellphones can make NDE more affordable and easier to apply. This can make NDE attractive to everybody for everyday use. NDE can leave its niche and become a mass product, a big business. This will happen; however, if the NDE community does not step into this business, others will profit from it, and certainly without being aware that it is NDE. And finally, because all cellphones and tablets are networked and have access to databases and cloud computing, this can be an example where all resources for NDE 4.0 are brought to bear.
Cross-References ▶ Industrial Internet of Things, Digital Twins, and Cyber-Physical Loops for NDE 4.0 ▶ Introduction to NDE 4.0
References
1. Felice M, Heng I, Udell C, Tsalicoglou I. Improving the productivity of ultrasonic inspections with digital and mobile technologies. NDE India 2019 CP227.
2. Christensen C, Hall T, Dillon K, Duncan D. Know your customers' "jobs to be done". Harvard Business Review, September 2016.
3. Meier J, Tsalicoglou I, Mennicke R. The future of NDT with wireless sensors, A.I. and IoT. APCNDT Conf. Proc. ID273/3962, 2017.
4. PCN GEN issue 19: General requirements for the certification of personnel engaged in NDT.
5. Sinclair C, et al. A landscape for the future of NDT in the UK economy. www.bindt.org/downloads/Materials-KTN-Future-of-NDT-in-UK-economy.pdf
6. https://arstechnica.com/gadgets/2015/11/ipad-pro-review-mac-like-speed-with-all-the-virtues-and-limitations-of-ios/4/#h2
7. https://yansmedia.com/blog/55-video-marketing-statistics/
8. https://en.wikipedia.org/wiki/Hawthorne_effect
9. https://www.bloomberg.com/news/articles/2017-10-17/kobe-steel-is-said-to-have-likely-faked-data-for-over-a-decade
10. https://www.wsj.com/articles/spacex-rockets-were-imperiled-by-falsified-reports-prosecutors-say-11558576047
11. Udell C, Tsalicoglou I, Felice M, Heng I. Making the shift from capital expense to operational expense for ultrasonic flaw detector equipment. Singapore International NDT Conference & Exhibition (SINCE 2019), 4–5 Dec 2019.
12. https://www.growthbusiness.co.uk/subscription-nation-9-10-uk-consumers-now-subscribers-2552406/
13. Manyika J, et al. The Internet of Things: mapping the value beyond the hype. McKinsey Global Institute; 2015.
14. FLIR Systems, Inc. How does an IR camera work? FLIR Systems. N.p., n.d. Web. 03 Apr 2017.
15. Boyle R. Terahertz-band cell phones could see through walls. Popular Science. https://www.popsci.com/technology/article/2012-04/terahertz-band-cell-phones-could-send-faster-texts-and-see-through-walls. 18 Apr 2012.
16. PCUS mini.
17. Mook G, Simonin J. Eddy current tools for education and innovation. In: 17th World conference on nondestructive testing, Shanghai, China, 25–28 Oct 2008.
18. Mook G, Simonin Y. Smartphone turns into eddy current instrument. In: Proceedings of the 12th European conference on non-destructive testing (ECNDT), Gothenburg, Sweden, 11–15 June 2018.
19. Mook G, Simonin Y. Education in eddy currents – from single probes to arrays. In: Proceedings of the 12th European conference on non-destructive testing (ECNDT), Gothenburg, Sweden, 11–15 June 2018.
20. Meyendorf N. Re-inventing NDE as science – how student ideas will help adapt NDE to the new ecosystem of science and technology. AIP Conf Proc. 2018;1949(1):020021. https://doi.org/10.1063/1.5031518.
21. Avoiding flood damaged cars. DMV.org. N.p., n.d. Web. 7 Apr 2017.
22. Kitchen and electrical fires. Hoechstetter Interiors. N.p., 25 Sept 2009. Web. 03 Apr 2017.
23. Electrical thermal inspections, electrical infrared imaging (IR). Electrical Infrared Inspections in Kansas, Oklahoma, Missouri, Electrical Thermal Inspections. N.p., n.d. Web. 03 Apr 2017.
24. Redner AS. Back to basics: nondestructive evaluation using polarized light. Mater Eval. 1995;53(6):642–4.
25. https://www.drone-thermal-camera.com/drone-uav-thermography-inspection-pipeline/
26. Watson RJ, et al. Deployment of contact-based ultrasonic thickness measurements using overactuated UAVs. In: Rizzo P, Milazzo A, editors. European workshop on structural health monitoring. EWSHM 2020. Lecture notes in civil engineering, vol 127. Cham: Springer; 2021. https://doi.org/10.1007/978-3-030-64594-6_66.
7 Value Creation in NDE 4.0: What and How

Johannes Vrana and Ripudaman Singh
Contents
Introduction
NDT Value Perception
NDE Ecosystem
Bird's-Eye View of the Ecosystem
Key Stakeholders
More Stakeholders
Key Value Streams
More Value Streams
Digital Thread in the NDE Ecosystem
Digital Weave in the NDE Ecosystem
Cyber-Physical Value Creation in NDE
Basic Cyber-Physical Loop
NDE Event Loop for Asset Inspectors
Maintenance Loop for Asset Owner-Operators
Design Loop for Asset OEMs
NDE System Design Loop for NDE OEMs
More Loops in NDE Ecosystems
NDE Engineering Within the Loops
Challenges
Data Markets and the Connected Industry 4.0 World
Decision Requirements
Summary
Cross-References
References
J. Vrana (*) Vrana GmbH, Rimsting, Germany e-mail: [email protected]; [email protected] R. Singh Inspiring Next, Cromwell, CT, USA e-mail: [email protected] © Springer Nature Switzerland AG 2022 N. Meyendorf et al. (eds.), Handbook of Nondestructive Evaluation 4.0, https://doi.org/10.1007/978-3-030-73206-6_41
Abstract
Digital technologies provide significant efficiency gains for NDE (nondestructive evaluation) procedures. Digitalization of NDE further enhances its effectiveness and simplifies its processes. Digital transformation at the level of the NDE ecosystem can create unprecedented value for multiple stakeholders simultaneously. The aspects of technology, informatization, and various use cases have been discussed by the authors in three chapters of this handbook. This chapter identifies the primary stakeholders in the broad NDE ecosystem, connects them through key value streams, and delves deeper into how the data flow along various cyber-physical loops creates value for those in the value stream. Note: This chapter is based on "Cyber-Physical Loops as Drivers of Value Creation in NDE 4.0" [1].

Keywords
NDE 4.0 · Use cases · Value proposition · Advanced NDE · Future of NDE · Automation · NDT 4.0 · Industry 4.0 · Cyber-physical loop · Digital twin · Digital thread · Digital weave · IIoT · Industrial revolution
Introduction

Nondestructive Evaluation (NDE), by name, is a way to interrogate material for any peculiarities without altering its form. And that is the value it creates, in simple terms: assuring the quality of raw materials and finished goods as well as the safety of in-service assets. It gained significance along with the industrial revolutions as a means to avoid fatal accidents. With the advent of computers and a better understanding of damage growth, the value creation began to include life cycle cost optimization through managed inspection and maintenance programs. As product design and manufacturing improved and safe operation became the norm, some of this value perception eroded. Come to think of it, when we fly, we discuss the quality of the inflight entertainment or the peanuts, because we are so sure of a safe landing. That was not the case in the early days of aviation. NDE-based materials evaluation and feedback to improve design deserve the credit for such improvements in safety and performance. The historical evolution of NDE, up to the present-day emergence of NDE 4.0, was discussed in ▶ Chap. 1, "Introduction to NDE 4.0", where we also discussed almost a dozen use cases with promising value propositions. ▶ Chapter 5, "Digitization, Digitalization, and Digital Transformation," discussed various aspects of digitalization and how they create value at the elemental level. Digital transformation of NDE, which engages multiple entities, takes value creation to a whole new level. To fully grasp and exploit the opportunity, we must first appreciate the NDE ecosystem for safety and quality assurance, the various stakeholders in it, and how they connect to create a value stream today. We can then begin to see how the data can rapidly flow
7
Value Creation in NDE 4.0: What and How
153
back upstream as a cyber-physical loop [2], and provide even more value, which was not perceived or practical up until now. The larger the loop, the more likely are the number of beneficiaries, and their individual benefits [1]. We assume that you have read the three chapters mentioned above or are familiar with the concepts of NDE 4.0, Industry 4.0, IIoT (Industrial Internet of Things), Cyber-Physical Loops, Digital Twins, and informatization. If not, it might help reading those first. Before getting into the value creation in NDE 4.0 this chapter takes a closer look on the NDT (Nondestructive Testing) value perception evolution to its current state.
NDT Value Perception

In the early industrial days, humans naturally lacked the necessary experience of how to safely process raw materials, design and manufacture components and systems, and operate various machines and modes of transportation, which resulted in severe accidents. This is where NDT developments took place. NDT identified potential material imperfections, leading to a massive increase in machine reliability. NDT also became a central part of early-day feedback loops by identifying potential design and production improvements through additional knowledge. Growing experience and knowledge in engineering continues to make the world a safer place while creating economic prosperity through innovation and revolutions. In the beginning, the business case for NDT was straightforward. At that time, companies were able to differentiate themselves from competitors by technological performance benefits for the customers. But this position changed over time as competition became harder, leading to price wars, and every company looked for savings, everywhere. What does this mean for the current-day business case for NDT:
• A traditional business case for NDT considers the potential cost which would have accrued in case of accidents. The cost of a single accident can easily be a seven-digit number – not even considering the cost of the loss in reputation. Such costs are way higher than the cost of years of NDT. Most NDT professionals see this traditional business case and are therefore astonished when other groups start to question the cost of any investments in NDT.
• Over the years, as the number of accidents has dropped to a lower level, the credit is being attributed to good quality of design, production, and maintenance, with NDT getting the seat behind the scenes. Therefore, business administrators start to neglect the costs of potential accidents, with the result that they do not see a business case for NDT anymore. They only see that NDT must be conducted due to standards and regulations, without the “why.” In an extreme case, the value attributed to prevention of accidents completely disappears when the number of accidents drops to zero or if the time span between accidents is longer than the typical employment span of decision makers.
• The realistic business case is akin to an insurance model and will be somewhere in between. We believe it is incorrect to assume safety without the actions required to assure safety, even though the potential risk of accidents has dropped. The impact of accidents should be considered, perhaps in the context of “not doing NDT.” Suddenly NDT will appear to be a cheaper and more respectable option. This will in the long run help everybody.
Those different points of view are the reason why NDT professionals believe that NDT is a value center (preventing accidents), while their customers, asset original equipment manufacturers (OEMs), and asset owner-operators believe that NDT is just a cost center [3]. In some industries this situation has developed to a point that NDT is becoming undesirable. This explains some of the end user pains, perceptions, and remarks captured by a survey [3]:
• “You are like my mother in law, I don’t need you. . . hate it when you are there. . . you create extra work for the rest of us and I end up paying a ****load of money.”
• “It’s all smoke and mirrors; costs too much; bottle neck; non-value-added; only represents negative issues.”
• “If NDT becomes mandatory, our product will be too expensive for the market.”
• “NDT does not have any value at all. It only sorts out parts, that in reality are good. I don’t want it and I would never ever do it, but my customer insists on it. I’d prefer spending the money into further improvement of my production!”
So – how can the NDT industry ensure its future? Through NDE 4.0! The NDE 4.0 business case creates value twofold:
• It makes NDE more effective (reliable) and efficient (streamlined processes). It makes inspections more affordable to existing customers and makes the realistic business case, discussed above, worthwhile.
• It opens NDE to additional customers. NDE data becomes an asset which in itself will carry value.
NDE Ecosystem

NDT and NDE are all about the asset to be inspected and evaluated. The NDE ecosystem is asset-centered. The value of NDE for the various stakeholders is defined by the information/knowledge they can retrieve with NDE about the asset. This can be, for example, a GO/NO-GO decision for quality assurance (the traditional NDT business case) or the use of the NDE results for engineering purposes (an example of an NDE 4.0 business case). The more information stakeholders can retrieve, the more valuable NDE becomes. This is, in short, how NDE 4.0 creates value.
Bird’s-Eye View of the Ecosystem
If you keep an eye on the asset, you can see that multiple parties contribute to its safe and economic operation, in line with the primary purpose of NDE 4.0 [4]. The term asset is used as a generic reference to a physical item – machine, vehicle, system, plant, or a piece of infrastructure that needs inspection for safety and performance assurance. Figure 1 shows the four key stakeholders in the inner circle and the supporting entities in the outer circle. The airplane asset shown is just an example.
Key Stakeholders
There are four key stakeholders, as shown in the inner circle in Fig. 1, presented in the following. Three of them are businesses; we chose to identify inspectors as an individual stakeholder because their personal and professional lives are impacted significantly.
Asset OEMs
Asset OEMs design, manufacture, assure quality, and prescribe the in-service inspection program, along with standards and procedures for compliance. They leverage R&D out of universities and other research establishments to continuously improve their assets. The asset OEMs, including the supply chain, have the primary responsibility of delivering a product or a system that is safe to operate, affordable over the life span, and requires minimum maintenance. They essentially compete on product performance, cost, and customer experience. To accomplish this, almost every asset is adopting IIoT, from something as small as an electric switch controlling a light bulb to aerospace and defense systems, with home appliances in between.
Fig. 1 NDE ecosystem including its major stakeholders [4]
Asset OEMs are usually bigger players. Therefore, multiple departments should be considered independently – each of them could be a customer for NDE and NDE data. A new product design is usually created by the engineering department; the components are ordered by supply chain management, inspected and machined by suppliers, quality assured, and assembled; and the final product is commissioned. The traditional NDT customer is QA. With NDE 4.0, all departments could become customers/consumers of the results of NDE (the NDE data in addition to binary decisions). With the commissioning, the product is transferred to the owner-operators.
Owner-Operators
Owner-operators either provide a service to the public using products produced by asset OEMs or they use those products to enhance their personal life. Examples of the first group: airlines fly people, oil and gas plants provide energy, and theme parks provide an entertainment experience. Example of the second group: car owners. Owner-operators have the primary responsibility of assuring the safe and continuous operation of the asset in an economically viable manner. They employ asset inspectors or engage them through a professional inspection service provider to guarantee the safety of the assets throughout the operation. They make every effort to optimize inspection programs for maximum asset availability and minimum lifecycle maintenance cost, to improve inspection reliability (reduce false calls), and to stay in compliance with all regulations and inspections prescribed by asset OEMs and regulatory bodies. We have come across industry peers who believe that safety is the responsibility of the regulatory bodies. It is an unfortunate perspective where ethics are kept aside because there is a legal recourse to undesirable incidents in the form of regulatory compliance.

Asset Inspectors
Asset inspectors perform the physical act of looking at the material to detect anomalies, imperfections, or damage that could lead to failure or performance shortfall. Inspectors are trained by their employer, specialized training service providers, and the manufacturers of the inspection equipment (NDE OEMs). They need certifications as prescribed by regulatory and compliance bodies. They are expected to follow the inspection program and procedures as defined by the asset OEMs. Asset inspectors have the primary responsibility of testing the materials or structure for any indication that may cause a failure. They are expected to follow validated and documented procedures, maintain their inspection skill level as indicated by certification, calibrate equipment using prescribed standards and intervals, and maintain the NDT equipment health as prescribed by the NDE OEM. They must see the benefit of adopting new technology and developing new skills. They make every effort to make their job simpler, faster, and less physically stressful, to improve inspection reliability (reduce false calls), and to comply with all regulations and inspections prescribed by asset OEMs.
NDE OEMs
NDE OEMs design and manufacture the inspection equipment (or system), prescribe equipment calibration, and provide application training to inspectors. They also leverage R&D out of universities and other research establishments to continuously improve their equipment. NDE OEMs have the primary responsibility of delivering an inspection system that is easy to learn and operate, affordable over the life span, requires minimum maintenance, and, most importantly, delivers dependable inspection outcomes. They essentially compete on equipment performance, cost, and user experience. Before Covid-19, we used to say that NDE 4.0 provides a competitive advantage in terms of cost and speed through remote access, superior visualization, and data interpretation. Now it is becoming significant with travel limitations, social distancing, and low-touch requirements.
More Stakeholders

The four key stakeholders discussed so far collectively assure safety to the asset consumers (individuals or businesses) and inspectors. They are supported by a few others who can be viewed as a part of the inspection ecosystem.
NDE Research Establishments
Universities, small business research companies, national labs, corporate R&D centers, and defense research centers develop new physical methods, digital technologies, and integration logics to enable NDE OEMs to create systems. Industry 4.0 has created a new pull for research, graduate work, publications, patents, intellectual property, and funding opportunities.

Inspector Training Schools and Certification Bodies
Inspectors need training to develop skills and field experience to get certification. NDE 4.0 means a range of new content, courses, and curriculum, possibly leading up to another set of certifications. These high-tech skills also mean financially attractive programs, which can be delivered in novel ways – in person, onsite, just in time – leveraging extended reality.

Regulatory Bodies
NDE in certain industries is highly regulated. Regulations and innovation work in opposite directions. In general, the regulatory demands for compliance are not easy to meet, whereas NDE 4.0 is revolutionary in nature with little to no precedent or data-based evidence to back up the value propositions. Regulatory bodies generally do not enjoy innovation, particularly the revolutionary type. It is also hard to develop regulations when the technology is still maturing, the ill effects are not yet well understood, and there is not enough data to address public health and safety concerns.
Similar is the case with certification bodies. There is not enough evidence to ascertain the level of training, practice, and performance to certify. The NDE community is yet to develop clarity around the certification process, levels, domains, etc. The regulatory bodies can make a proactive effort to understand the value of NDE 4.0 and the challenges and risks in adoption, and work cooperatively with innovators in a manner which is good for society and humanity.

The Standardization Challenge
Standardization assures that the same mistake is not made twice, but standards should not be used as an excuse to block innovation. Some existing NDE standards were created years ago. They do prevent “mistakes,” but with requirements describing a historic or outdated state of the art, at times designed for the NDE 2.0 era – analog equipment, film RT, hand-signed reports, manually operated NDE equipment, and visually performed data interpretation. This hinders innovation, prevents the implementation of NDE 4.0 and its use cases, and blocks potential quality increases in inspection technology. Therefore, existing standards need to be revised regularly and need to accommodate new opportunities. Even more important today is to revise the standards development, acceptance, and governance process to enable adoption of rapidly changing technologies and business models. First, the technology standardization around data connectivity, exchange, security, analytics, synthesis, and interpretation is still evolving. In fact, some argue that continuous change is the new normal. The underlying technology may just always stay in a state of continuous flux. The German Society for Nondestructive Testing (DGZfP) is making a serious effort toward standardization, or the acceptance of data exchange protocols from the IT industry [2, 5, 6]. Soon, we will come to accept one of the interface standards, because this acceptance is a cornerstone for the industrial success of NDE 4.0 – just like in the third revolution, when the community adopted HTML in 1990–1991 to enable the explosive growth of the Internet, originally born in 1969.
Communities and Societies
National bodies and societies, such as ASNT, DGZfP, JSNDI, and ISNT, have a major role to play to serve the community as a part of their mission: to bring professionals onto a common platform for the exchange of ideas, requirements, and shared solutions.

IT Infrastructure Providers
As data storage, transfer, security, analysis, and display become a prominent part of NDE, cloud storage, SAAS providers, and hardware maintenance services will take a prominent place in any operational unit. Most of these businesses are currently in high-growth mode as everyone is getting into digital transformation. For them, NDE 4.0 is yet another customer persona. The same applies to digital technology training schools.
Consultants and Coaches
There is a substantial business opportunity for freelance consultants, coaches, or small firms specializing in NDE, digital technologies, and entrepreneurship. They can bring Industry 4.0 perspectives, digital knowhow, and innovations from other domains into NDE. The authors now belong to this element of the NDE ecosystem, and this publication is an effort to bring awareness around the Why, What, and How of NDE 4.0 to all other elements of the ecosystem.

Still Unknown
Every revolution has created new business models and additional stakeholders. This one will not be any different. New business models will emerge as data shows promise. Structured data amenable to information extraction can become a commodity with a price tag for data owners because it has value for product performance and service life improvement. Who owns the data is a matter of business discussion across the asset OEM, the asset owner-operator, the inspection service provider (if different from the asset owner), or even the NDE OEM. Industry will shake this out. There may even be another stakeholder emerging when data is traded as a valuable commodity. Just like in the third revolution, where wealth was in the form of company stock and mutual funds and stock exchanges became major players, in the fourth revolution data becomes an asset, data exchange the most profitable business transaction, and data traders and enrichers new stakeholders.
Key Value Streams

The value streams connect the stakeholders with each other in a manner that delivers to and satisfies the customer. Let us look at the three key value streams.
Asset Value Stream
Generally, the idea and the design for an asset are created by an asset OEM, the individual components are produced and inspected by suppliers, and the final inspection is done by the OEM before handing the asset off to operators (see Fig. 2). The owner-operators start using the product according to the specifications of the asset OEM, including service inspections (NDE) at certain intervals to guarantee problem-free operation. At some point the product reaches its end of life (EOL), and the question of reuse, repurposing, or recycling arises. This is a great new business area for NDE: to support making the right decision.

Fig. 2 Asset value stream [4]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)

Besides the NDE inspections at the suppliers, during operation, and to enable the circular economy, nondestructive evaluation with sensors may be performed
throughout manufacturing and service, as planned or as required by circumstances. This could also include sensors monitoring the process parameters within the supply chain, during assembly, and during the initial performance tests, or sensors used for structural health monitoring (SHM) or condition monitoring (CM) – all of these generating data of value.
NDE Personnel Value Stream
The personnel performing and supervising the NDE inspections and the personnel writing the standards and procedures also go through a similar value stream (see Fig. 3). At some point a person decides to become an NDE professional and starts with training, both theoretical and on the job. After the qualification examinations, the inspector gets certified and starts working. This could be inspections during the production of an asset, service inspections for owner-operators, or circular economy inspections. After a while (usually 5 years), NDE personnel need to demonstrate that their knowledge is still up-to-date and that they can perform the inspections according to the procedures (recertification).

NDE System Value Stream
As shown in Fig. 4, the value stream for an NDE system, including any software used for NDE, is similar to the asset value stream, as NDE equipment is essentially a product – another form of an asset. The main differences are that it is produced by NDE OEMs, that the owners and operators of NDE equipment are the companies providing inspection services, and that the service inspections of NDE equipment (recalibration) are usually performed by the manufacturer at yearly intervals.

Fig. 3 NDE personnel value stream [4]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)

Fig. 4 NDE system value stream [4]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
More Value Streams
There are probably a few more value streams. In fact, if new stakeholders such as data traders emerge, there may be new value streams that are hard to conceive at present.
The following gives two additional examples that are important for the NDE ecosystem today.
Regulatory Value Stream
The regulatory value stream (see Fig. 5) starts with the need for a new standard. After a standard is designed, written, and issued, it documents the state of the art, which should be used by all stakeholders in the ecosystem. After experience is gained with a standard, it is revised, normally at 5-year intervals.

Research & Development Value Stream
The R&D value stream (see Fig. 6) starts, like all value streams, with the initial idea. After the design of the experiment, including hypothesizing, the experiment is conducted (practically or theoretically) and analyzed to prove the hypothesis. The results of public research are usually published in peer-reviewed journals. This value stream seems to be shorter, as it does not contain an operation or experience part. Instead, such value streams usually require iterative work, with loops from design to analysis and back. Moreover, scientific work usually should consider the research published before.

Fig. 5 Regulatory value stream [4]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)

Fig. 6 R&D value stream [4]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
Digital Thread in the NDE Ecosystem

The digital representation of the value streams is the digital thread – a time-lapse story of digital twins for its users [2].
Digital Thread of the Asset
Since the purpose of NDE 4.0 is around the reliability, safety, and economics of an asset, it is best to associate the digital thread with a unique asset. Like the asset value stream, the digital thread of an asset will start at the asset OEM and continue through the asset owner-operator. First the idea for a new product is born; the product is designed; raw material is produced; individual components are manufactured; the components are inspected and assembled into a product; and the product is operated, including multiple inspections, until it reaches its end of life (EOL). After its end of life, the product may be disassembled, and the material may be recycled. Events during each of those stages can be captured by a digital twin, which has evolved from and used data from the previous event and configuration. All the digital twins over the lifetime of a product are connected to form the digital thread, as shown in Fig. 7.

Fig. 7 Digital thread of the asset [4]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)

The digital twins during the lifetime may come from various companies. Raw materials and components will usually be produced by suppliers, assembly will be performed by an OEM, operation by an owner-operator, and the activities after EOL by specialized companies. This means access to the digital thread needs to be handed from one company to the next during the lifetime of an asset. The digital thread ownership and transfer model needs to evolve with the associated business value. It should be envisioned that the sale of a product will include its latest digital thread as an accessory, and the lease of a product will come with an obligation to feed data into the digital thread. That means that as the asset ages and depreciates in physical value, the digital twin gets richer and appreciates in virtual value.
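To make the digital thread idea concrete, here is a minimal Python sketch. It is a hypothetical illustration, not an interface from the cited references: all class names, fields, and example values are invented. It models a thread as an append-only chain of twin events with explicit custodian handovers.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TwinEvent:
    """One digital twin snapshot: a single event in the life of the asset."""
    stage: str      # e.g., "raw material", "production inspection", "assembly"
    custodian: str  # the company responsible at this stage
    data: dict      # process parameters, NDE findings, etc.

@dataclass
class DigitalThread:
    """Ordered chain of twin events for one unique asset (one serial number)."""
    asset_id: str
    events: List[TwinEvent] = field(default_factory=list)

    def record(self, event: TwinEvent) -> None:
        self.events.append(event)

    def hand_over(self, new_custodian: str) -> None:
        # Models the transfer of thread access when the asset changes hands.
        self.record(TwinEvent("handover", new_custodian, {}))

    def history(self, stage: str) -> List[TwinEvent]:
        # Pulls one kind of event out of the thread, e.g., all service inspections.
        return [e for e in self.events if e.stage == stage]

thread = DigitalThread("disk-0001")
thread.record(TwinEvent("raw material", "forge shop", {"heat": "H123"}))
thread.record(TwinEvent("production inspection", "forge shop", {"indications": []}))
thread.hand_over("asset OEM")
thread.record(TwinEvent("assembly", "asset OEM", {"engine": "E-77"}))
thread.hand_over("owner-operator")
thread.record(TwinEvent("service inspection", "owner-operator", {"indications": [{"size_mm": 0.8}]}))
print(len(thread.history("service inspection")), "service inspection(s) on record")

Note how the thread only ever grows: while the physical asset depreciates, its digital counterpart accumulates data – exactly the asymmetry described above.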
Digital Thread of NDE System
Once again, every NDE device could have its own digital thread, just like the asset discussed above. Over time this digital thread will include digital twins of all inspections performed, as discrete events with outcomes. It will be used by the NDE OEM to improve the inspection equipment. It can be used by the asset owner-operator to optimize maintenance plans and re-calibration, or even to replace the equipment/technique with other options (detailed later in the text).

NDE Personnel Digital Thread
This connects the digital twins for all inspection events performed by an inspector (refer also to the NDE Personnel Value Stream). Such a thread can be used for training and certification of individuals. It can also be used for training of AI systems (detailed later in the text).
Digital Weave in the NDE Ecosystem

The subject of digital twins is still evolving. In the NDE 4.0 ecosystem, we can clearly see multiple threads from various perspectives of different stakeholders.
These connect at discrete events – manufacturing or inspection in our simplified representation. Think of Fig. 8. The horizontal axis shows the digital thread of an asset, and the vertical axis shows the thread for the production machines, which crosses the asset twin at the point of a manufacturing event. Both the asset and the machine may have their nested twins to capture subset details. This leads to a 2D digital weave. After production, the asset will encounter an inspection event, which also belongs to the thread of the inspection equipment. The digital weave now has a thread of a different color. The different instances (DTI) of different asset serial numbers (but of the same type (DTP)) constitute a third dimension, orthogonal to the paper [2]. As the same machine is not necessarily always used for production or inspection, the digital twins of the various events will interact with different branches of the nested production-related digital twins. This creates a 3D digital weave, a little hard to visualize. Another dimension could be the abstraction layers, starting with the asset, then the functional integration, the user interaction, and the business value propositions, as proposed in the Reference Architectural Model Industry 4.0 (RAMI 4.0) [2, 7]. The digital weave is a digital thread for the ecosystem. Each of the stakeholders in the ecosystem should only be concerned with the relevant digital threads associated with their area of interest, and not sweat about the entanglement. That ability to pull out the relevant thread from the multidimensional twins is what makes the concept of the digital weave superior to the RAMI 4.0 model [2, 7], which is asset-centric, considers single instances only, and is unable to handle interaction with other assets and aggregation.
Fig. 8 2D digital weave (on the vertical axis nesting of digital twins is shown and on the horizontal axis the digital thread) [4, 5]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
Cyber-Physical Value Creation in NDE

Basic Cyber-Physical Loop
The value creation in the fourth revolution comes from closing the cyber-physical loop, with the IIoT and the digital twin as the core contributors or enablers (see Fig. 9) [2]. The sensors in the physical world bring digital data, which is then converted into information by semantic interoperability, combined with other information (from the digital thread and weave), and processed by the interconnected digital twins to create knowledge, which finally leads back to actions in the physical world.

Fig. 9 The digitally transformed cyber-physical loop [4, 5]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)

Human-in-the-loop is a matter of technology maturity and acceptance. We will continuously see more automation in the loop. Just like we have gone from cruise control to self-driving cars in about 30 years, we will go from human-in-the-loop to human-on-the-loop and eventually human-out-of-the-loop. Over the life cycle of a product, there can be a number of loops providing a wide range of value to various stakeholders of the NDE 4.0 ecosystem. As shown in Fig. 10, the cyber-physical loops can stay within one value-creation step, like NDE; they can expand within one value stream, including multiple companies and stakeholders; and they can even connect multiple value streams.
Fig. 10 Cyber-physical loops within and across value streams [4]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
Starting with section “NDE Event Loop for Asset Inspectors,” some of those cyber-physical loops will be discussed, beginning with small loops within one value-creation step and getting bigger throughout the chapter. Note: To avoid too much confusion, the descriptions are limited to simple feedback connections for key stakeholders.
NDE Event Loop for Asset Inspectors
The straightforward idea for inspectors and inspection companies is systems which offer the digitalization of the NDE workflow out of the box. This helps optimize resource scheduling, tracking of inspections and inspection results, etc. The cyber-physical loop in this example (see Fig. 11) starts with the digital order from the customer, translated into a scheduled inspection plan by the supervisor (a human with machine assistance), which is converted digitally into a job for a certain inspector at the time and place of inspection. The inspector gets his/her task instruction on the system (say, a mobile device). This is where we leave the cyber world and get into the physical world: the inspector performs the physical inspection and stores the results back into the digital system, which connects the report with the original instructions, closing the loop. Nowhere in the loop did anyone use paper or means of digital communication outside the predetermined pathway, such as email or popular MS Office programs (Excel or Word). Such workflow systems can be seen as simple digital twins of the process: they collect information and offer some data processing and visualization opportunities.
Fig. 11 NDE event loop for asset inspectors [4]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
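The loop just described can be sketched in a few lines of Python. This is a hypothetical illustration, not the API of any existing workflow product; it only shows how each artifact stays digitally linked to the originating order so that the submitted report closes the loop.

from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    customer: str
    scope: str

@dataclass
class Job:
    order_id: str
    inspector: str
    site: str

@dataclass
class Report:
    order_id: str
    inspector: str
    findings: list

class NDEWorkflowSystem:
    """Simple digital twin of the workflow: every artifact is keyed by the
    originating order, so no paper or out-of-band email is needed."""
    def __init__(self):
        self.orders, self.jobs, self.reports = {}, {}, {}

    def receive_order(self, order: Order) -> None:
        self.orders[order.order_id] = order          # digital order from the customer

    def schedule(self, order_id: str, inspector: str, site: str) -> Job:
        job = Job(order_id, inspector, site)         # supervisor, machine-assisted
        self.jobs[order_id] = job
        return job

    def submit_report(self, report: Report) -> None:
        assert report.order_id in self.jobs, "a report must close an open job"
        self.reports[report.order_id] = report       # loop closed

wf = NDEWorkflowSystem()
wf.receive_order(Order("A-17", "asset OEM", "UT weld inspection"))
wf.schedule("A-17", inspector="J. Doe", site="plant 2")
wf.submit_report(Report("A-17", "J. Doe", findings=[]))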
An evolution of this will be when inspection data captured as an event in the digital twin becomes a part of one of the larger loops discussed later. Other ideas to extend such a system include:
• Interface with the customer’s IT systems for receiving orders and submitting the results, so that the customer can integrate the inspection into their cyber-physical loops
• Interface with certification and training agencies
• Interface with blockchains capturing the experience, the hours, and the certification of an inspector
• Interface with eyesight testing laboratories (opticians)
• Interface with the inspection equipment OEM, so that the settings of the instrument can automatically be applied and, at least, some screenshots of the results can be stored together with the report
• Blockchain to assure tracking of any changes to the data (a minimal sketch of the underlying mechanism follows after Fig. 12)
• Augmented reality to bring real-time information to the inspector
• Enhanced algorithms for inspector support
• Ability of the inspector to connect with experts or management for help
A similar loop (see Fig. 12) can occur for asset manufacturers within their production setup. These cyber-physical loops are the smallest ones and are entirely within the world of the NDT/NDE event. In the subsequent sections, we are going to expand the loops step by step and see the increasing value of digital twins with the size of the loop.

Fig. 12 NDE event loops [4]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
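The blockchain idea in the list above boils down to tamper evidence. A minimal sketch of the underlying mechanism – hash-chaining inspection records, which is a strong simplification of what a real blockchain interface would provide – could look as follows:

import hashlib, json

class InspectionAuditLog:
    """Append-only log: each entry stores the hash of the previous entry,
    so any later change to a stored result breaks the chain."""
    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev, "hash": digest})

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            if e["prev"] != prev or e["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

log = InspectionAuditLog()
log.append({"order": "A-17", "result": "no recordable indications"})
assert log.verify()
log.entries[0]["record"]["result"] = "tampered"  # any later edit is detected
assert not log.verify()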
Maintenance Loop for Asset Owner-Operators
The main goal of an owner-operator of assets – like power plants, oil and gas plants, aircraft, trucks, or trains – is to maximize availability/usage, as smoothly and for as long as possible, to minimize operating cost, and to maximize the return on investment. Such a desire led to the development of various operating philosophies:
• Reactive: Fix it once it is broken
• Preventive: Maintenance at regular intervals
• Condition-based: Maintenance triggered by a conditional parameter
• Predictive: Predict when it will fail, and proactively prevent it
• Prescriptive: Identify asset-specific near-term actions
A major trend for owner-operators is predictive or even prescriptive maintenance. Most implementations for predictive analysis use sensors attached to an asset in operation to predict, by statistical evaluation, the remaining life or the point in time requiring the next maintenance. Typical sensors include vibration, acoustic emission, acceleration, speed, oil pressure, infrared, etc. Those sensors all work nondestructively and provide digital sensorial information at predetermined short time intervals – so they should all be considered NDE sensors. If the data of those NDE sensors gets combined with the results of classical NDE inspections, a holistic predictive maintenance can be achieved, which will allow for an even more accurate prediction of the next potential failure and an even more economic operation of the asset [8]. The cyber-physical loop (see Fig. 13) starts with the inspection event record. Statistical evaluation of these records, combined with engineering data and in consultation with asset OEMs, can help replan the preventive maintenance, start doing predictive maintenance, and even take some asset-specific prescriptive actions. The loop will close with a revised plan feeding the smaller loop discussed above. This loop integrates the world of NDT/NDE with the world of maintenance planning.

Fig. 13 Maintenance planning loops [4]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
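As a hedged illustration of the statistical evaluation mentioned above – not the method of [8], and with all numbers invented – the following sketch extrapolates a linear trend of a degradation indicator to a failure threshold to estimate remaining useful life (RUL):

def estimate_rul(times, indicator, failure_threshold):
    """Least-squares linear trend of a degradation indicator, extrapolated
    to the failure threshold; returns the estimated time remaining after
    the last observation, or None if no degrading trend is visible."""
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(indicator) / n
    sxx = sum((t - t_mean) ** 2 for t in times)
    sxy = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, indicator))
    slope = sxy / sxx
    intercept = y_mean - slope * t_mean
    if slope <= 0:
        return None  # indicator is not drifting toward the threshold
    t_fail = (failure_threshold - intercept) / slope
    return max(0.0, t_fail - times[-1])

# Assumed vibration amplitudes (mm/s) drifting toward an assumed alarm limit.
hours = [0, 100, 200, 300, 400]
vib = [2.0, 2.4, 2.9, 3.3, 3.8]
print(f"estimated RUL: {estimate_rul(hours, vib, 7.1):.0f} h")

In a holistic implementation, the same kind of extrapolation would fuse the quasi-continuous sensor data with the discrete classical NDE results instead of relying on a single indicator.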
Design Loop for Asset OEMs
Assets operated by the owner-operators are manufactured by OEMs (like train, turbine, aircraft, boiler, machine, car, truck, boat, and amusement ride manufacturers). Usually OEMs design the asset, get components or sub-assemblies manufactured by suppliers, assemble the asset, test it, and deliver the asset to the customer (owner-operator), with accountability and responsibility as per their contracts. Asset OEMs perform extensive quality assurance, including NDT/NDE, to assure that the assets will meet their performance over a promised life. Most of those quality assurance measures are conducted as early as possible in the value-creation chain. Meaning, most of the NDT/NDE is performed at the component suppliers (forging/casting shops, steel manufacturers, . . .) and at suppliers specialized in joining operations (welding, gluing, . . .). As suppliers are usually highly focused on a narrow product range, automated NDT/NDE systems are often used. The suppliers may have their own NDE 4.0 loop for the NDE event, at the smallest size/level – the innermost loop.
The next outer value loop could be the data coming from the quality assurance in the supply chain, used by the OEMs to enhance the design of the components – for example, by analyzing typical defect locations or by incorporating defect size distributions into their lifing calculations [9]. There is also the possibility of a cyber-physical loop taking the data from the NDE during production to enhance the design and the production of the next variant of the components. The outermost loop integrates the data from owner-operators, both the quasi-continuous data from NDE sensors gathered during operation and the classical NDE inspection results gathered during maintenance inspections. This makes the cyber-physical loop even bigger and the results (the potential improvements of design, production, and lifetime) truly remarkable and desirable for a competitive product. This can also enhance the value of NDE engineering in the asset development cycle, discussed below. Such loops (see Fig. 14) are not new. High-tech industries such as aerospace and nuclear have been using them, by choice or through regulatory forces. They are generally very weak in terms of data quality and speed of execution. They become highly effective when there is an undesirable incident. With the fourth revolution, they can be a lot more effective and efficient, possibly proactive, and even continuous. The open interfaces and frameworks will enable much faster implementation, reaching the higher-hanging fruit, and OEMs do not have to do it painfully in response to a demand for enhanced safety. They can do it proactively for both safety and economic value.
NDE System Design Loop for NDE OEMs Companies producing automated inspection systems, no matter whether they are automated ultrasonic, digital radiography, computed tomography, thermography, or automated visual inspection systems, need to integrate automation hardware (like robots for automated component feeding and component handling during inspection), NDE instruments, NDE sensors, sources and detectors, system supporting sensors, software, database access, existing IT infrastructure, etc. The mechanical and electrical components of the system need to be designed and the complete system installed at a customer site, with proper ICT (Information and Communications Technology) connection. Asset OEMs & Supply Chain Idea
Design
Raw Material Sensors
Component Sensors
Owner-Operator
EOL
Production Inspection
Assembly
Operation
Service Inspection
Circular Economy
NDE
Sensors
SHM & CM
NDE
NDE
Fig. 14 Design loop for Asset OEMs [4]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
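As a toy stand-in for the kind of probabilistic lifing analysis referenced above [9] – the distribution parameters and the critical size below are assumptions for illustration only – a Monte Carlo estimate of how many parts carry a defect above a critical size could look like this:

import random

def fraction_above_critical(n_samples=100_000, a_crit_mm=2.5):
    """Sample initial defect sizes from an assumed lognormal distribution
    (as might be fitted to production NDE findings) and count how many
    exceed an assumed critical size from fracture mechanics."""
    random.seed(1)  # reproducible illustration
    exceed = 0
    for _ in range(n_samples):
        a0 = random.lognormvariate(mu=-1.2, sigma=0.8)  # defect size in mm
        if a0 > a_crit_mm:
            exceed += 1
    return exceed / n_samples

print(f"fraction of parts with a defect above critical size: {fraction_above_critical():.4%}")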
The system design and integration process can be supported by a digital twin, allowing the simulation of NDE, mechanics, electrics, interaction, IT connectivity, etc. Such a digital twin needs information of all components to be integrated – preferably in an open format which can be loaded into the digital twin directly. Such a digital twin could also be used to show the customer the design and integration of the system into their production and allows easy in-factory and onsite customization of the system depending on the customer feedback. Ideally the digital twin of the system relates to the digital twin of the manufacturing environment it is supposed to be integrated so that issues can be determined during the design phase. The cyber-physical loop in this design phase (see Fig. 15) starts with the designer of the system working in a virtual environment, continuing with simulations of the NDE process and the system, visualization of the simulation results, and actions how to improve the system. All this to ensure a proper inspection. The finalized design and the information regarding all the components can be used to automatically issue purchase orders to suppliers and enable automation for assembling the system. This is the point where a digital twin of the type is converted to the digital twin of the event [2]. Once the system gets installed physically, the digital twin of the event can be integrated into the nested digital twin structure of the production floor. This enables an automated storage of the results of the inspection within the IT landscape of the customer. Moreover, if information from the installed system is fed back to the NDE system OEM the data can be used to improve the hardware and software of the system. This opens more possibilities arising by aggregating the information from multiple systems. All this leads to improved design of the system, to a better integration, easier assembly for the benefit of the System OEM and the customer. In principle, all these considerations for NDE System OEMs can be adopted as is to any non-NDE System OEM.
Fig. 15 NDE system design loop for NDE OEMs [4]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
NDE Equipment OEMs
The ideas for cyber-physical loops and digital twins in the production of NDE equipment are similar to those for NDE systems, with the main difference that most systems are highly customized while most equipment is mass-produced. However, a good digital twin will also allow a higher degree of customization for mass-produced equipment. Like with all devices in industrial manufacturing, the customers will need to integrate the equipment into their existing infrastructure, no matter whether they are owner-operators, asset OEMs, component OEMs, or system integrators. This will lead to the situation that NDE equipment OEMs will have to implement open, standardized interfaces and data formats.

Cyber-Physical Loops for Predictive Maintenance of NDE Equipment
Data from the NDE systems and equipment from all the different inspections during the lifetime of a product can also be used to obtain some information on the equipment status. Imagine an ultrasonic instrument which is used every day with the same set of probes. The probes are calibrated every day. By performing a statistical analysis on the calibrations and some trending, it should be possible to see whether the instrument itself is drifting out of its optimal state of operation. By designing automatic self-tests, perhaps even the yearly recalibrations could be replaced by a cyber-physical loop. In addition, the digital twins can be used for regular checks of probes. Such systems would have the great benefit that equipment errors could be identified nearly immediately, and not at the regular intervals required by standards. This would clearly improve the reliability of the inspections. This will be the predictive or prescriptive maintenance of the NDE equipment (Fig. 16).
Fig. 16 Cyber-physical loops for predictive maintenance of NDE equipment [4]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
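A hedged sketch of the calibration trending idea (readings, window, and tolerance are invented for illustration): fit a trend to the daily reference-reflector amplitudes and raise an alarm if the projected drift would leave the tolerance band long before a yearly recalibration would catch it.

def calibration_drift_alarm(amplitudes_db, tolerance_db=2.0, window=30):
    """Least-squares slope of the last `window` daily calibration readings;
    alarms if the value projected one year ahead deviates from the first
    recorded (nominal) reading by more than the tolerance."""
    recent = amplitudes_db[-window:]
    n = len(recent)
    x_mean = (n - 1) / 2
    y_mean = sum(recent) / n
    sxx = sum((i - x_mean) ** 2 for i in range(n))
    sxy = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(recent))
    drift_per_day = sxy / sxx
    projected = recent[-1] + drift_per_day * 365
    alarm = abs(projected - amplitudes_db[0]) > tolerance_db
    return drift_per_day, alarm

# Daily calibration on the same reference block, slowly losing sensitivity.
readings = [80.0 - 0.01 * day for day in range(60)]
drift, alarm = calibration_drift_alarm(readings)
print(f"drift: {drift:.3f} dB/day, alarm: {alarm}")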
More Loops in NDE Ecosystems

Other stakeholders in the NDE 4.0 ecosystem may not have a full loop of their own, but they are a part of the loops we discussed for each of the key stakeholders. They will perform the necessary R&D (universities and other labs), train all groups within the ecosystem (training schools), assure standardization while keeping innovation possible (regulatory bodies), communicate and provide a platform to collaborate (communities and societies), establish the fitting IT infrastructure, and support companies on their digitalization projects (consultants and coaches). Since they all support the Journey to the World of NDE 4.0, they can all benefit from value coming out of digital twins and feed back more value into the cyber-physical loops of their primary customers. Figure 17 shows some ideas for training, qualification, certification, and re-certification, enabling the equivalent of predictive or prescriptive re-certification of the individual inspector.

Fig. 17 Personnel loops in NDE ecosystem [4]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
NDE Engineering Within the Loops
One more group that needs to be mentioned is the NDE engineers. NDE engineers are part of most of the value chains, are employed by nearly all the stakeholders discussed above, and have a significant role to play in the design and outcome of an inspection process. NDE engineering is responsible for establishing the most appropriate way to inspect an asset, in collaboration with other engineering disciplines, and for creating the fitting specifications and procedures, both for owner-operators and for asset OEMs. For this task, the cyber-physical loops/digital twins enabling the simulation of the inspection process, the inspection physics, and the inspection reliability can be of tremendous help. Those loops need validated simulation tools, data about the component to be
inspected, and a good database of all previous inspections as a baseline. Such tools could, for example, be used to compare the reliability of classical conventional UT to phased array, TFM, and SAFT before performing the inspection, or to evaluate the effect of certain material or design parameters before manufacturing of the components. Once the components are manufactured and tested, this can be used to validate the results. Such a system will, over time, create increasingly accurate results.
Challenges

Data Markets and the Connected Industry 4.0 World
The various cyber-physical loops show the value of fused data sets. Data itself becomes an asset. There is a market for data, and it is important to use it. The way to this market is through the interfaces discussed in [2, 5]. How to make this market safe, how to connect data between different companies, and how to establish a data market is discussed in this section.

The key focus for a data-driven economy and new business models is on linking data. (International Data Space Association)
In the future, it will be possible to buy data independently of suppliers. As discussed above, this will bring new stakeholders (like data traders and enrichers) and value streams into the NDE 4.0 ecosystem. The aim is to prevent illegal data markets; to create data markets according to crucial values (like data privacy and security, equal opportunities through a federated design, data sovereignty and ownership for the creator of the data, and trust among participants); and to ensure that the companies that have generated the data also benefit from their value – not just a few large data platforms. The International Data Space Association (IDSA) has set itself this goal. IDSA develops standards and de jure standards based on the requirements of IDSA members, works on the standardization of semantics for data exchange protocols, and provides sample code to ensure easy implementation. One of the key elements IDSA is implementing is the so-called IDS connectors [10], which guarantee data sovereignty and clarify data ownership (see Fig. 18). Both the data source and the data sink have certified connectors. The data provider defines data use restrictions, and the data consumer connector guarantees that the restrictions are followed. For example, if the data provider defines that the data consumer is allowed to view the data once, the data will be deleted by the consumer connector after it has been viewed. This also enables the producer of the data to decide which customer can use the data in which form – as an economic good, for statistical evaluation, or similar.

Fig. 18 IDSA: Connected Industry 4.0 World [14]

Most companies are very reluctant to share data and information with other companies or with cloud systems. As of today, this is reasonable, as technical data
is only protected by individual contracts. However, this reluctance hinders the development and usage of multiple Industry 4.0 technologies (like IIoT, digital twins, and AI). The IDS connectors, as shown in Fig. 18, create the needed network of trust by ensuring data sovereignty and ownership. This enables the connected world and eventually data markets. For many, marketing the data will be a new business model. For NDE it is the opportunity to move from the position of an unnecessary cost factor to one of THE data suppliers. This will create a new, larger business case.
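The view-once semantics described above can be mimicked with a toy consumer-side connector. Real IDS connectors are certified components with a far richer usage-policy language [10]; this sketch only illustrates the enforcement idea.

class ConsumerConnector:
    """Holds received data together with the provider's usage policy and
    enforces that policy locally on the consumer side."""
    def __init__(self, payload, policy):
        self._payload = payload
        self._policy = dict(policy)   # e.g., {"max_views": 1}
        self._views = 0

    def view(self):
        if self._views >= self._policy.get("max_views", 0):
            raise PermissionError("usage policy exhausted - data deleted")
        self._views += 1
        data = self._payload
        if self._views >= self._policy.get("max_views", 0):
            self._payload = None      # delete after the allowed use
        return data

conn = ConsumerConnector({"indications": []}, {"max_views": 1})
print(conn.view())                    # first view succeeds
try:
    conn.view()                       # second view is refused
except PermissionError as err:
    print(err)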
Decision Requirements
In [3] several challenges were identified – including standardization, which is also discussed in section “The Standardization Challenge.” On top of that, several interconnected challenges are becoming crucial regarding NDE as a data source for cyber-physical loops:
• In a variety of current-day standards, specifications, and procedures, the requirements for decisions are based on perceptions and experience rather than data. Damage detection of any type causes a fear of failure in the minds of operators and a perception of neglect in the eyes of their customers and consumers. This results, for example, in requirements classifying ALL detected indications as rejectable. By enhancing NDE capabilities, more indications would be found, which would lead to higher scrap rates. This discourages the use and the development of more sensitive NDE techniques – including the measures discussed in section “Data Markets and the Connected Industry 4.0 World.” It also prevents the implementation of longer maintenance intervals.
• Vice versa, some standards, specifications, and procedures require NDE methods which do not provide sufficient reliability for design engineering’s requirements.
• Cyber-physical loops promise better designs and products by harvesting and fusing data. This means that a better sensitivity and the (automated) reporting of as many indications as possible would be beneficial.
Those challenges currently contradict each other and hinder cyber-physical loops employing NDE. Moreover, they are a further burden on the general value appreciation for NDT (as discussed in section “NDT Value Perception”). To resolve this, several steps become crucial:
• Differentiation between reporting and decision limits (see the sketch after this list)
– All indications should be (automatically) reported so that they can be used for cyber-physical loops.
– Decision limits need to be based on design engineering requirements to ensure that quality assurance decision limits are as tight as needed and as loose as possible.
• Sensitivity requirements
– Design engineering needs to define sensitivity requirements, and NDE engineering needs to prove that those requirements can be met with the selected methods.
• Consumer education
– Asset operators and users need to be educated on engineering analysis and damage tolerance philosophy. They need to be convinced that it is OK to have an indication of a minor anomaly where the risk is extremely low, that it is acceptable to continue to use the asset, and that monitoring is needed.
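A minimal sketch of the proposed differentiation (all limits and sizes are hypothetical): every indication at or above the reporting limit enters the data stream for the cyber-physical loops, while only those above the design-based decision limit affect the accept/reject disposition.

def evaluate_indications(sizes_mm, reporting_limit_mm, decision_limit_mm):
    """Separate reporting from decision: report everything detectable,
    reject only on design-engineering grounds."""
    reported = [a for a in sizes_mm if a >= reporting_limit_mm]   # feeds the digital thread
    rejectable = [a for a in reported if a > decision_limit_mm]   # design-based limit
    return {"reported": reported,
            "disposition": "reject" if rejectable else "accept"}

# Improved sensitivity records small indications without scrapping the part.
print(evaluate_indications([0.3, 0.7, 1.1], reporting_limit_mm=0.2, decision_limit_mm=1.5))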
Summary

NDE 4.0 is all about the purposeful cyber-physical ecosystem. We all have seen digital technologies and physical methods continuing to evolve, mostly independently and sometimes interdependently. The real power is in the concurrent design of inspection systems through an appreciation of cyber-physical loops and digital twins. If data is the new crude oil, then information is the new refined oil, NDE is the new oil rig, IIoT the new pipeline, the digital twin the new motor, and the cyber-physical loop the new machine. This makes the NDE 4.0 ecosystem like the new energy infrastructure.
These provide an ability to capture data directly from the materials and manufacturing process through usage and in-service maintenance, across multiple assets. The data can then be used to optimize maintenance, repairs, and overhauls over the lifetime of an asset, and even feed back to the original equipment manufacturer (OEM) for design and production improvements. NDE 4.0 is the chance for NDE to move from the niche of the “unnecessary cost factor” to one of the most valuable data providers for Industry 4.0. However, this requires the opening of data formats and interfaces. The insight that the protectionism practiced up to now will have a damaging effect on business in the foreseeable future will decide the future of individual companies. For companies that recognize the signs of the times, NDE 4.0 is the way to new customers, and eventually to the data market – to completely new business models for the industry.
Cross-References
▶ Are We Ready for NDE 5.0
▶ Digital Twin and Its Application for the Maintenance of Aircraft
▶ Digitization, Digitalization, and Digital Transformation
▶ Estimating Economic Value of NDE 4.0
▶ Industrial Internet of Things, Digital Twins, and Cyber-physical Loops for NDE 4.0
▶ Introduction to NDE 4.0
References
1. Vrana J, Singh R. Cyber-physical loops as drivers of value creation in NDE 4.0. J Nondestruct Eval. 2021;40:61. https://doi.org/10.1007/s10921-021-00793-7.
2. Vrana J. The core of the fourth revolutions: industrial internet of things, digital twin, and cyber-physical loops. J Nondestruct Eval. 2021;40. https://doi.org/10.1007/s10921-021-00777-7.
3. Vrana J, Singh R. NDE 4.0 – a design thinking perspective. J Nondestruct Eval. 2021;40:8. https://doi.org/10.1007/s10921-020-00735-9.
4. Singh R, Vrana J. NDE 4.0 – why should ‘I’ get on this bus now? CINDE J. 2020;41:6–13.
5. Vrana J. NDE perception and emerging reality: NDE 4.0 value extraction. Mater Eval. 2020;78(7):835–51. https://doi.org/10.32548/2020.me-04131.
6. Vrana J. ZfP 4.0: Die vierte Revolution der Zerstörungsfreien Prüfung: Schnittstellen, Vernetzung, Feedback, neue Märkte und Einbindung in die Digitale Fabrik. ZfP Zeitung. 2019;165:51–9.
7. DIN SPEC 91345:2016-04. Reference Architecture Model Industrie 4.0 (RAMI4.0). Berlin: DIN; 2016.
8. Vrana J, Singh R. NDE 4.0: NDT and sensors becoming natural allies by digital transformation and IoT. Proceedings Volume 11594, NDE 4.0 and Smart Structures for Industry, Smart Cities, Communication, and Energy; 1159404. 2021. https://doi.org/10.1117/12.2581400.
9. Vrana J, Kadau K, Amann C. Smart data analysis of the results of ultrasonic inspections for probabilistic fracture mechanics. VGB PowerTech. 2018;2018(7):38–42.
10. International Data Spaces Association. Reference Architecture Model, IDSA, Version 3.0. 2019.
8
From Nondestructive Testing to Prognostics: Revisited Leonard J. Bond
Contents
Introduction ... 178
NDE and SHM ... 182
NDT to Prognostics ... 189
Prognostics ... 191
Models of Degradation Accumulation ... 196
Diagnostics and Damage State Awareness ... 196
Prognostics from Precursors ... 197
Uncertainty Quantification ... 197
Technology Demonstrations ... 197
Technical Challenges ... 199
Summary ... 200
Cross-References ... 201
References ... 201
Abstract
Nondestructive evaluation (NDE) is seen as a family of mature inspection methodologies which have benefited from major advances in technology, sensor manipulation, and data analysis in the past 20 years, including new types of and materials for sensors and, most importantly, leveraging advances in computers for data capture, processing, and modeling. Life management approaches which use such data have also evolved, and condition based maintenance (CBM) is now routinely applied to many active components (e.g., pumps, valves, and rotating machines), in some cases with algorithms (prognostics) that estimate remaining useful life (RUL). The same period has seen deployment of structural health monitoring (SHM), but at a slower rate of development, when applied to passive components in high-tech industries including aerospace, wind turbines, and nuclear (e.g., airframes, blades, pressure vessels, and concrete). New
approaches to prognostics are now offering emerging opportunities for deployment, bringing together more traditional NDE data and that given by SHM for passive structures, using a common-metrics concept, integrated with technologies that utilize new sensors, wireless data transmission, robotic deployments, and advanced data processing, including artificial intelligence (AI). As new approaches to RUL determination, including more big data, become available for prognostic analysis, there are potential opportunities to operate within the framework of the internet of things (IoT) and what is becoming known as NDE 4.0.

Keywords
Nondestructive Evaluation (NDE) · Prognostics · Remaining useful life · Structural Health Monitoring (SHM) · Common metrics
Introduction

Nondestructive evaluation (NDE), which is commonly seen as a more quantitative implementation of nondestructive testing (NDT), is, in large part, a family of mature measurement technologies which form a multibillion-dollar annual enterprise that seeks to assess the condition of parts, so as to ensure quality, reliability, and safety at various points in an item's life cycle. It is an approach which uses well-defined procedures to inspect the condition of a material or measure a characteristic of an object without creating damage or, in many cases, needing disassembly. These measurement technologies and their implementation provide an assessment of an item, in terms of identifying detectable anomalies, at a point in time. The effectiveness of a particular NDE methodology is determined by multiple factors, including the physics of the measurement, sensor and instrumentation performance, the nature of the targeted anomaly, and human performance. Performance is commonly evaluated by comparing the response seen in data for a reference indication (e.g., for ultrasound, a flat bottom hole) with that for the anomaly, using methods set out in codes and standards. One critical issue is NDE performance, seen in terms of both reliability and repeatability. Such assessments of NDE performance are not estimating just the smallest detectable indication, but the largest significant anomaly missed, and this performance is a statistical process commonly defined in terms of a probability of detection (POD) [1]. Such performance and reliability of characterization data are then used in assessing system operational reliability and needed inspection intervals.
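Since POD is the statistical yardstick referred to throughout this chapter, a minimal sketch of how a POD curve is commonly estimated from hit/miss inspection data with a log-logistic model may help; the data, parameter values, and fitting loop below are illustrative assumptions, not taken from any qualification study:

```python
import numpy as np

# Hit/miss POD estimation with a log-logistic model:
# POD(a) = 1 / (1 + exp(-(b0 + b1 * ln a))).
# Synthetic, illustrative data -- not from any real inspection trial.
rng = np.random.default_rng(0)
a = rng.uniform(0.2, 5.0, 200)                    # flaw sizes, mm
true_pod = 1 / (1 + np.exp(-(-2.0 + 3.0 * np.log(a))))
hit = (rng.random(200) < true_pod).astype(float)  # 1 = detected, 0 = missed

# Fit b0, b1 by maximizing the Bernoulli log-likelihood (gradient ascent).
x = np.log(a)
b0, b1 = 0.0, 1.0
for _ in range(5000):
    p = 1 / (1 + np.exp(-(b0 + b1 * x)))
    b0 += 1e-3 * np.sum(hit - p)        # d(logL)/d(b0)
    b1 += 1e-3 * np.sum((hit - p) * x)  # d(logL)/d(b1)

# a90: the flaw size detected with 90% probability.
a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)
print(f"b0 = {b0:.2f}, b1 = {b1:.2f}, a90 = {a90:.2f} mm")
```

Qualification practice (e.g., MIL-HDBK-1823A) adds confidence bounds to such fits, yielding figures like a90/95 that feed directly into the inspection-interval arguments above.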
[Fig. 1: timeline, 1930–2000, plotted against system availability, of the evolution from inspection and run-to-failure, through preventive maintenance, computerized maintenance management systems (CMMS), reliability centered maintenance, total productive maintenance, and condition-based maintenance (CBM), to prognostics & health management (PHM)]
Fig. 1 Evolution of maintenance (with the dates being for nonnuclear industry deployment) [2]
systems, aerospace, and later nuclear). Historically, items were manufactured, operated with some maintenance (e.g., lubrication), and in many cases simply run to failure, then repaired or replaced after some set life. For some systems, however, this approach to life cycle management was unacceptable, and, particularly starting during World War II (~1940s), various forms of inspection (early NDT) began to be routinely deployed across such high-tech industries. In subsequent years various types of maintenance strategy have become central to engineering system life-cycle management. The main life-cycle management approaches that have been developed, and the timeline for their evolution, are shown in Fig. 1 [2]. This development process is again being revisited, and new advances are being made as big data, the internet of things (IoT), and NDE 4.0 provide new opportunities and capabilities.

In developing comprehensive life-cycle item management protocols, the data from such periodic NDE assessments, together with records of operational stressors, can be complemented with data given by sensors providing continuous monitoring for condition based maintenance (CBM) and structural health monitoring (SHM), and now for prognostics. Concepts that bring NDE and SHM together to make life predictions are not new (e.g., [3, 4]), but big data and IoT capabilities are facilitating new and more integrated capabilities.

For both active and passive components, in many cases, even with data available, there remains one additional step needed to estimate remaining useful life (RUL): the application of a prognostic algorithm that provides a reliable prediction of future condition. This can be achieved using one of several classes of algorithm, each of which must be capable of reliably giving an advance indication, or portent, of a future event, in this case failure.

Prognostics is not just a single capability. It can be considered in several different groupings, including two general families of methods defined by the systems to which they are applied: (i) active systems, which include valves, pumps, and a wide
array of rotating machinery, which generally relate to the application of CBM, and (ii) passive systems, which include non-moving parts and structure. For example, for a wind turbine the passive elements include both the tower and the blades; in nuclear power plants they include the concrete containment, the primary pressure vessel, and major piping. For aircraft, the passive structure includes the wings and fuselage. The management of many of these passive items is now being considered for SHM, and potentially also for some prognostics methodologies.

To differentiate between active and passive items, and between the approaches employed, some extensions to the prognostics lexicon have been used. For many items where NDE is commonly employed the term "material damage prognosis" [5] has been used, and for nuclear power plants "proactive management of material degradation (PMMD)" is the terminology adopted [6]. In such approaches an assessment of a material and structure brings together (i) initial and then damage state awareness, (ii) physics-based life prediction, and (iii) autonomic reasoning, which can take various forms [5].

In looking at the various inspection processes there is a series of discrete steps, illustrated in Fig. 2 [2], where data from each activity feeds into and enables higher-order analysis and response. For many systems with moving parts (active systems), such as valves, pumps, motors, and jet engines, online monitoring for CBM, and in some cases prognostics, is now routinely deployed [7]. For example, in many cases a pump body or a jet engine provides ports where sensors are or can be inserted, and wireless systems, in some cases via satellite, are used to transmit the data to a monitoring location.

Not least, for many passive parts and structural elements, even when data are given, there remains a disconnect in terms of how to combine that provided by NDE
Fig. 2 A hierarchy of data analysis and responses [2]
and SHM systems [8] into an integrated remaining useful life (RUL) assessment. In this context, prognostics is now emerging as an engineering discipline focused on predicting the time at which a system or a component will no longer perform its intended function. This lack of performance, which needs to be avoided, is most often a failure beyond which the system can no longer be used to meet desired performance.

The process of moving the engineering community from NDE as a workmanship standard and defect characterization tool to an advanced metrology that provides data for use in prognostics is ongoing. The challenge is implementing integrated NDE/SHM technologies, combining the resulting data, and then going on to provide a remaining-life (prognostic) strategy for structural assessment. However, advances in computer and communications technology, the internet of things (IoT), and the management of big data, with NDE 4.0, are all starting to offer the digital infrastructure which can be combined with new sensor systems to enable SHM/prognostics to be applied to structural materials and to give assessments in near real time. This process is not trivial, particularly in terms of managing uncertainty in loading/stressors and in performance, and efforts which assess and demonstrate how NDE and SHM can potentially be revolutionized in the age of Industry 4.0 are ongoing [8].

To meet the growing needs in life usage management a variety of trends have emerged. Within NDE there have been advances to move beyond detection of smaller and smaller discrete defects to include understanding of "allowables": those local material variations or anomalies which do not immediately impact performance and which are the product of random manufacturing material or fabrication inhomogeneities, such as local grain size variation, small voids in additive material, or other types of features at interfaces and in composites. At the time of fabrication, material characterization is increasingly needed, and there is seen to be a need for tools that nondestructively enable full-volume material state awareness (MSA), which looks beyond discrete defects to a mapping of a material's local structure and properties [1]. With aging systems and periodic inspections there has been interest in moving beyond data given through NDE assessment to incorporating it in prognostics for the prediction of a remaining safe or service life [2–5, 9]. These are the trends to better manage defects and to move beyond "find and fix" using periodic NDE to more integrated life management approaches that track and manage structural degradation (aging).

Almost two decades ago the nuclear power community, specifically through the US Nuclear Regulatory Commission, started to look at issues related to life and license extension of nuclear power plants, to address the issue of moving beyond "reactive management of materials degradation" and "the inspect, find and fix mentality." To better manage these aging assets, in their case civilian nuclear power plants, two processes were envisioned: (i) implementation of actions to mitigate or eliminate the susceptibility to material degradation and (ii) implementation of effective inspection, monitoring, and timely repair of degradation [6].

These concepts sought to bring together processes that operated throughout a system's life cycle, from initial design and fabrication, through inspections and
on into service, with both periodic inspections and continuous monitoring (i.e., SHM). The concepts that relate to prognostics and Proactive Management of Materials Degradation (PMMD) were developed, and both active and passive components were considered [6]. In some cases, material properties are changed by neutron irradiation and by temperature or pressure (like aging but not degrading) without creating anomalies, moving properties away from the parameters used when a component was designed. For example, copper-alloyed steel in nuclear facilities can change under irradiation: at the grain level copper can migrate. This nascent program included an assessment of the then state of the art for NDE, monitoring, and prognostics, as well as an outline of a potential path forward.

At about the same time, with aging aircraft and a variety of civil infrastructure (in the USA), various other groups were looking at similar issues related to managing other types of aging structures, with activities given names such as material damage prognostics [5], and these programs were all in addition to those for rotating machinery and CBM [7, 10].

In looking at these various measurement modalities, whether they be NDE, CBM, or SHM, two key issues need consideration: there must be an understanding of the effects of both local material variation and applied stressors on an item's condition or life utilization. In many cases, most probably during 90% or more of the life of a structural item, it will be in a condition where no "anomaly" is detectable using conventional NDE tools. This lack of measurement sensitivity presents challenges, so a focus has been on trying to monitor at least some stressors. These data are then combined with those given by the various damage assessment technologies that can provide data for prognostics and proactive management of material degradation, with a focus on those needed for passive structures in nuclear power plants. These have been summarized in terms of (i) laboratory methods for detection of crack nucleation sites, (ii) methods for detecting crack precursors, (iii) candidate methods for continuous monitoring of component degradation, and (iv) commercially available NDE methods and systems [11]. For each field of application, families of measurement technologies with varying degrees of robustness and performance can be identified. Then, when looking at the data given in specific applications, new life and data management methodologies were, and still are, needed for the variety of applications.
NDE and SHM

Current SHM has, in general, limited coverage capability for larger structures and limited flaw-sizing capability. In many ways SHM and NDE have the same goal, but they differ in their implementation and capabilities; they are complementary, not competing, methodologies and technologies [4]. That said, implementing SHM so that it gives an acceptable POD is still generally a research activity. In addition, as SHM seeks to provide much broader coverage, research and development (R&D) is seeking to demonstrate flaw-sizing capability, not least to potentially determine how to combine SHM and NDE data. NDE is also seeking more effective
inspections, particularly for new materials (e.g., additive materials) and hard-to-access structure. Various forms of predictive maintenance have been considered [3], and more recently these have involved capabilities emerging with NDE 4.0 [8], with SHM and NDE data being combined and analyzed in a variety of ways, including leveraging artificial intelligence. For RUL estimates and prognosis, both SHM and NDE are needed. It is, however, demonstrating the effectiveness, reliability, and robustness of the integrated process, in terms of managing uncertainty and sparse data, data fusion, and then ultimately giving an RUL, which may well be considered the biggest challenge.

The relationship between how NDE considers and evaluates crack data and how SHM does is shown in schematic form in Fig. 3. Although complementary, in large part the two processes are not related or interconnected, and data are typically given in different forms. The approach and implementation of technologies in the two communities are different: various NDE codes and standards define responses in terms of quantitative defect dimensions (e.g., crack length, depth, shape, and orientation), whereas many SHM implementations generally indicate a "change" in a sensed parameter (e.g., acoustic emission signals) or an increase or decrease in a measured signal (e.g., with guided waves), without the capability to give a reliable and quantitative defect characterization. SHM in general is simply sensing changes in condition, which, although useful, does not give data that can be easily combined with that provided by an NDE assessment. To help better understand these processes there is increased use of forward models for NDE processes, including commercial codes such as CIVA, and a range of models for SHM response, but direct correlations for the data are in general lacking. There is also a range of activities to advance CBM and system life cycle management. Various groups have sought to look at trends relating to the application of MSA and how it correlates with CBM and system life cycle management [14].
[Fig. 3: defect-size axis marking, from largest to smallest: critical crack size for unstable growth; crack size limit for fitness for purpose (safety margin, basic safety); acceptance criteria for quality assurance; recording threshold; structural and other noise — contrasted with the unknowns in most SHM data: change? detection? location? size?]
Fig. 3 Schematic showing the relationship between crack size and NDE acceptance criteria versus unknowns in most SHM data. (After [8, 12, 13])
A few years ago a workshop on MSA was structured around three focal topics: (1) advances in metrology and experimental methods, (2) advances in physics-based models for assessment, and (3) advances in databases and diagnostic technologies [5]. These technologies in general remain a work in progress.

In specific fields, such as wind turbines, particularly those off-shore, CBM and prognostics are now routinely employed for the rotating machinery. NDE is used for inspection of major items, such as blades, but SHM with embedded sensors and wires is a challenge, both due to the potential effects of lightning strikes on the system and the costs of fully instrumenting blades with SHM capability [15–17]. Most recently, drones with cameras have dramatically reduced the cost of inspection, cutting the time and expense of climbers engaging in blade inspection. But again, combining the SHM and NDE data is still in its infancy.

How both NDE and SHM data are used is also seeing changes. Merging NDE data and engineering analysis (CAD) into digital twins, together with moves from focusing on discrete defect anomaly detection, location, and then characterization (giving parameters for use in a structural analysis) toward material state awareness (MSA), combined with increasing use of models and model-assisted probability of detection (MAPOD), are all providing major new challenges. There are opportunities to improve designs and achieve enhanced performance in both systems and NDE/SHM, while at the same time maintaining quality, safety, and reliability, all in the context of life cycle management and moves toward prognostic capabilities [9].

As already stated, CBM for rotating machinery is well established and being effectively deployed for a diverse range of systems, from jet engines to many types of pumps and wind turbines [7]. It cannot be stated too often: what has been largely missing is the "structural" part of health monitoring, and how those data are managed and integrated with data from NDE assessments. One initial step in looking at data combination is to understand the spatial and temporal relationships between NDE and SHM, as illustrated in Fig. 4.

To address the passive structure and structural components in many engineering systems, the requirement is to get smarter with the sensing and to collect and combine both NDE and SHM data [4]. For example, to monitor pipe corrosion, permanently applied sol-gel ultrasonic transducers in a sparse array can give continuous local thickness measurements on piping systems in a refinery [18], and guided waves can be used in long-range measurements to detect global changes in other regions. Both sensing technologies can be implemented with either wired or wireless data transmission. Combining these technologies can give nearly complete and continuous coverage of high-temperature piping, without the need to remove insulation, an expense faced when performing current periodic inspections.

The use of combinations of NDE and SHM as tools, together with integrated approaches to life cycle management, does appear to be on the cusp of major changes. Data volumes can now start to be managed in near real time, and the resulting fused data, in the context of NDE 4.0 and prognostics, used to better predict RUL [8].
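As a concrete illustration of the refinery-piping example just given, the following is a minimal sketch of how continuously monitored wall-thickness readings can be turned into a corrosion rate and a remaining-life estimate; the readings, noise level, and minimum-thickness limit are all illustrative assumptions:

```python
import numpy as np

# Remaining-life estimate from continuously monitored wall thickness
# (e.g., a permanently installed ultrasonic thickness gauge on piping).
t_days = np.arange(0, 365, 7.0)                                  # weekly readings
rng = np.random.default_rng(3)
wall = 12.0 - 0.004 * t_days + rng.normal(0, 0.02, t_days.size)  # thickness, mm

T_MIN = 9.5  # assumed minimum allowable wall thickness, mm

rate, w0 = np.polyfit(t_days, wall, 1)        # linear fit: rate in mm/day (negative)
days_to_tmin = (T_MIN - w0) / rate            # extrapolate trend to the limit
print(f"Corrosion rate: {-rate * 365:.2f} mm/year")
print(f"Estimated time to t_min: {days_to_tmin / 365:.1f} years")
```

The same trend-plus-threshold logic, with a degradation model appropriate to the mechanism, underlies most of the prognostic estimates discussed later in this chapter.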
Fig. 4 Illustration of the fundamental differences between NDE and SHM data faced by the community. (After [9])
In the evolving integrated approach to life assessment it is important to identify and understand measurement sensitivities to critical parameters and to relate them to the life cycle, particularly when trying to sense early damage and when bringing together systems that measure and utilize the operational parameters and the NDE/SHM data. When working to bring together NDE and SHM approaches, and prognostics with life cycle management, a first step is to improve initial component NDE assessments [19]. For a part in use, applied stressors also need to be understood, such as the effects of temperature, stress, and corrosion, and the effects of such "loads" (stressors) on defects or manufacturing anomalies. Good material damage evolution models are needed. For in-service assessment it is then all about being smart about the what, the when, the where, and the how of the measurements used for sensing evolving part condition.

For success in moving beyond simple data collection in SHM to prognostics, there is a need to measure and understand the impact of stressors. Such understanding needs to be able to respond to changes in use (the applied stressors) and to the need to modify or replace damage/degradation accumulation models. One example of a deployed life management system is that for the F-35, where it is reported that the structural prognostics and health management system [20] has just two corrosion sensors installed on each aircraft. In addition, there are between 10 and 13 strain gages installed and recording time histories, together with over 150 operational parameter time histories. Many defense systems do now have diagnostic/prognostic capabilities on rotating machinery (e.g., helicopter motor systems), but there are still few examples where structural condition is integrated into the monitoring and assessment. In many cases it is simply not clear, however, how NDE data are integrated with those from SHM into assessments.
Another example, from the NDE side, which illustrates the challenges presented by big data is axle inspection for a high-speed train. Systems have now been developed for both wheel and axle inspections. For one axle there is typically 1.6 GByte of data obtained with seven transducers. In Germany there are 32 axles per train and six inspections are performed per year. The estimated life for a wheel set is 10 years, and with a conservative estimate this gives 3 TByte of data just for the axles of each train. Data volumes increase further if inspection frequency is modified, and it is reported that Deutsche Bahn has now reduced inspection intervals from 250,000 to 60,000 km of travel following an accident in Cologne [21]. Managing and manipulating such data sets over many years is presenting the NDE community with new classes of challenges: how to correlate and trend data across multiple inspections, particularly where indications may not correlate with discernible cracks. The leap forward will be when the digital twin is merged with NDE data, that from ongoing SHM, and operational parameters, all used to give a reliable prognostic prediction in reasonable time.

When assessing the state of the internet of things (IoT) and NDE 4.0, it is being projected that software agents and advanced sensor data fusion will operate in a physical world web over the next decade. The IoT will potentially enable continuous monitoring of manufacturing processes, such as when additive manufacturing increasingly presents unique parts to be inspected and monitored [21].

In looking at the current state of SHM, in terms of deployment, it remains a work in progress. For application to passive elements, SHM technologies and methodologies still face major challenges in moving from a research curiosity to a deployed technology [22]. The current state of the art is summarized in Table 1.

In looking at currently deployed SHM capabilities, there are various papers which analyze one application area: in-vehicle health monitoring (IVHM) systems for PHM in the aviation and automotive industries. These show that three elements are used: (i) sensors, (ii) transfer of data from the asset, and (iii) use of the data. It has been found in a recent assessment, illustrated in Fig. 5, that many of the most advanced units are in the automotive industry, and even then this capability is still less than that envisioned as achievable by the Society of Automotive Engineers (SAE) [23]. In this context, ensuring sensor functionality and reliability and managing drift/aging are recognized as increasingly important. For example, will a sensor embedded in a composite used in an aircraft provide reliable performance over a 20-year life for that aircraft?
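Returning to the train-axle arithmetic above, a quick sanity check of the quoted figure, as a minimal sketch using only the numbers given in the text:

```python
# Data-volume arithmetic for the high-speed-train axle example in the text.
gb_per_axle_inspection = 1.6   # GByte per axle, seven transducers
axles_per_train = 32
inspections_per_year = 6
service_life_years = 10

total_gb = (gb_per_axle_inspection * axles_per_train
            * inspections_per_year * service_life_years)
print(f"Total axle data per train over life: {total_gb:.0f} GByte "
      f"(~{total_gb / 1024:.0f} TByte)")   # ~3072 GByte, i.e., ~3 TByte
```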
Table 1 A classification of SHM techniques. (After [8, 22])

| Type of SHM | Availability of standards | Type of measurement | Applications to real systems |
| Machine condition monitoring^a | Many | Mainly passive | Multiple; routinely applied |
| Global monitoring of large structures (e.g., bridges) | Some | Mainly passive | Increasingly common but not mature; many trials |
| Large-area monitoring for local damage (full coverage typically requires multiple systems) | Limited | Mainly active | A few commercial applications; many trials |
| Localized damage detection, for example, cracks and corrosion | Limited | Mainly active | A few specialist applications; many trials |

Numbers of sensors needed for coverage: increasing^b

^a CBM not strictly related to structural health
^b Large structures use multiple sensors: point measurements (e.g., optical fibers, acoustic emission) and imaging (e.g., video cameras), with vibration signature extraction. Large numbers of sensors are needed for large areas – each sensing a local region
The status of deployment can be further seen by looking at various recently reported aerospace examples of advances in CBM/SHM commercial systems, summarized with examples in a recent paper [8]:

• In January 2018, Avitas Systems, a General Electric (GE) venture, collaborated with Limelight Networks, Inc. for its next-generation, automated inspection platform [24].
• In July 2019, Honeywell's Forge was helping with predictive maintenance across components in 11 Air Transport Association (ATA) chapters, identified in the common referencing standard for commercial aviation documentation, including avionics, auxiliary power units (APUs), mechanical, electrical, hydraulic, and environmental systems. The tool supports six aircraft models: Boeing 737s, 747s, 777s, and 787s, and Airbus A320s and A350s. For landing gear, Forge tracks hard landings and, when available, sensor data on temperatures in wheels and brakes [25].
• Rolls-Royce says its health-monitoring systems now touch upon 70% of an airline's direct operating costs [26], with much effort focused on the Trent family of jet engines.

For the aerospace community, aircraft and engine health-monitoring solutions are at the heart of predictive maintenance and the "on-condition" maintenance model. Looking further ahead, the next generation of aircraft may incorporate structural sensors to warn of impending damage or weakness in wings and fuselages. Original equipment manufacturers (OEMs) are reported as already using a variety of such sensors in testing and research and have proved the viability of embedding strain gauges into materials such as carbon fiber [27]. The future of engine health monitoring (EHM) capability and techniques is moving beyond current preemptive
[Fig. 5: capability ladder, from a manual diagnosis and repair process performed by a technician, through limited on-vehicle warning indicators, enhanced diagnostics using scan tools, telematics providing component-level real-time data, proactive alerts, and integrated vehicle health management, to self-adaptive health management (diagnosis and repair augmented by prognosis and predictive analytics), compared across automotive, commercial aviation, and defense aviation & ground vehicles]
Fig. 5 Assessment of current deployed IVHM system on SAE capability levels. (After [8, 23])
features, and can potentially include engine performance anomaly detection, advanced vibration analysis, nonintrusive blade health monitoring (possibly eddy-current based), online oil condition analysis, and electrostatic sensors to detect debris in the inlet and exhaust areas [28].

The next issue when considering NDE, SHM, and prognostics is the assessment of the evolution of damage and the challenge of what and when degradation can be detected in service. It is typically only in the last phase of, for example, crack growth, when there is relatively rapid growth in a crack, that NDE and SHM can usually be expected to find the degradation. This leaves three questions: (i) when is NDE performed, (ii) what is the in-service performance of NDE, and (iii) at what point in a life cycle do SHM techniques start to detect damage? As an item is assessed in service, the question arises of whether the material properties now deviate from the design conditions. When "damage" is reliably detected, the final question becomes how much remaining life there is at that point and how this integrates into a reliable and actionable prognostic.

Technologies such as prognostics health management (PHM) systems that help advance the state of the art of diagnostics and prognostics are important for controlling operations and maintenance (O&M) costs by providing awareness of
component or equipment condition and predictive estimates of component failure. In the nuclear power field these technologies are being customized for each class of advanced small modular reactor (AdvSMR), and the modalities account for the specific operational history of the unit. Such information, when integrated with plant control systems and risk monitors, helps control O&M costs by enabling lifetime management of significant passive components, relieving the cost and labor burden of currently required periodic in-service inspection, and informing operators' O&M decisions in real time to target maintenance activities [29]. One current open issue is that there is no regulatory relief from currently mandated NDE inspection when SHM is deployed.
NDT to Prognostics

The NDE science and technology that is now employed in advanced life management has been developed over a period of more than 50 years [12]. In the 1970s, as more advanced methods and systems were designed for use in high-risk technologies such as nuclear power, the oil and gas industries, and advanced aerospace, it was recognized that there was a need to better understand the effects of increasingly severe and hostile environments on materials and the significance of defects, in terms of potential for failure [5, 6]. A science base for the theory and measurement of equipment aging, including the use of various accelerated aging programs, was established. In addition, it was seen that the capabilities of the then available nondestructive testing (NDT) were limited, and there was a lack of an adequate science base for NDT to become a more quantitative assessment using nondestructive evaluation (NDE). It was necessary to improve the reliability of inspection, to relate the size and types of defects to their structural significance and the potential effect they have on performance or loss of structural integrity, and ultimately to implement risk-based reliability assessments. Several major research programs were initiated to provide the required science base, including one sponsored by the United States Air Force–Defense Advanced Research Projects Agency (USAF-DARPA), which considered the development of quantitative nondestructive evaluation (NDE) to meet the needs of the aerospace community [30]. The integration of materials, defects, and inspection was also facilitated by the advent of fracture mechanics, which was greatly enhanced through the ever-improving capabilities of finite element analysis, in turn largely aided by the availability of ever-more-powerful computer systems. The philosophies of damage tolerance and retirement-for-cause were developed and applied in the 1970s and early 1980s to critical aircraft engine components, at all phases of the life cycle: design, manufacture, and then maintenance [31]. At the same time, other groups of engineers and scientists were considering equally challenging problems of ensuring structural integrity in the nuclear power industry [32] and in the oil and gas industries, in particular for structures to be used in the North Sea and in Alaska [3, 9].
During the 1970s and 1980s great progress was made in both materials science and quantitative NDE, in terms of providing the much-needed and greatly enhanced science base, together with technology including new sensors, instrumentation, and data analysis tools for application both at the time of manufacture and during periodic inspection of some types of items in service [30]. The initial focus of much of the research within this emerging community was on metals. This is now expanding to include advanced composites, ceramics, and materials produced using additive manufacturing. The range of applications has also now expanded into additional fields of engineering, including civil engineering, which presents its own unique challenges. Novel integrated design approaches such as unified life-cycle engineering (ULCE) were proposed and partially applied in various forms of concurrent engineering [33]. The full power and potential of this approach was, however, then still limited by the available materials science, particularly the understanding of materials degradation and response to stressors, and by the computational power needed to perform many of the design optimizations at a reasonable cost and within a reasonable time.

In the 1980s and early 1990s, it was increasingly recognized that structural assessment, including quantification and evaluation of defects and defect populations, was not all that was required to evaluate the remaining safe or useful life of complex systems [3, 9, 12]. It was necessary to identify and characterize discrete defects, such as cracks and corrosion, determine a rate of crack/degradation growth, and investigate the probability of occurrence and the probability of detection (POD). It also became clear that it was necessary to provide measurements of changes in bulk material properties, or material state, caused by the aging of the item and the accumulation of damage. The development of the science of damage mechanics and of tools to quantify the properties of critical structures became a priority.

In the current century NDE has seen major transitions in terms of both equipment and modes of deployment, and this sweep of changes and current challenges is reported in several places [34, 35]. Some examples can be seen in sensors, with new and improved piezoelectric materials that have provided more sensitivity, and with sol-gel transducers now used for "permanent" high-temperature deployment. Ultrasound capabilities have themselves been transformed through the increased use of computer capabilities, giving new and expanded families of modalities [1]:
• Phased array ultrasonic testing
• Long-range ultrasonic testing (guided waves)
• Internal rotating inspection systems (IRIS)
• Time of flight diffraction (TOFD)
• Ultrasonic backscatter techniques, for microstructure characterization
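As one concrete example of how such a modality yields quantitative defect dimensions, the following minimal sketch shows the depth calculation behind time of flight diffraction (TOFD), assuming the diffracting crack tip lies midway between the two probes and neglecting wedge delays and the lateral wave; all parameter values are illustrative:

```python
import math

# Crack-tip depth from a TOFD time-of-flight measurement.
c = 5920.0      # longitudinal wave speed in steel, m/s
S = 0.030       # half the probe-center separation, m
t = 15.0e-6     # measured tip-diffraction time of flight, s

path = c * t / 2.0                  # one-way path length, probe -> tip
depth = math.sqrt(path**2 - S**2)   # Pythagoras: tip depth below the surface
print(f"Crack-tip depth: {depth * 1000:.1f} mm")
```

With the probes 60 mm apart, a 15 µs tip signal places the diffracting tip at roughly 33 mm depth; it is exactly this kind of quantitative sizing output that distinguishes NDE data from the "change" indications typical of SHM.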
These tools are seeing new and improved approaches and implementations, giving both improved capabilities and reduced costs. Some earlier measurement modalities that were previously considered novelties are now seen as increasingly conventional:
• Dry-coupled ultrasonic testing
• Air-coupled ultrasonic testing
• Laser ultrasound

New combinations of mechanical scanning and improved deployments, combined with data analysis that is starting to incorporate artificial intelligence (AI) and other tools to provide inspector assistance by highlighting anomalies, are all impacting NDE implementation:
• Rapid ultrasonic gridding (RUG)
• PIGs – pipeline inspection gauges
• Drones – for example, for wind turbine inspection
• Multi-axis robotics, including rail-mounted systems with water-jet coupling
Finally, there are computed tomography (CT) and digital detectors, which have transformed radiography. CT, combined with digital detectors, has become the de facto gold standard for many 3-D inspections of items, including those fabricated using additive manufacturing. The transition from film to various forms of digital radiography is impacting data capture, enabling new forms of image enhancement, analysis, and data archiving. The ability to revisit and reanalyze x-ray images, going well beyond just looking at an old film, has been transformational.

The final two aspects of both NDE and SHM which require mention are models – those used for inspection optimization, POD estimation, and then data analysis – and the combination of NDE data with computer-aided design (CAD) [36]. The resulting data then need to be not only integrated with a finite element (FE)/CAD analysis, but also potentially integrated into a prognostic assessment for remaining useful life (RUL) estimation. These changes in NDE and SHM, together with computer-facilitated advances, are all coming together in what is termed NDE 4.0.
Prognostics

Prognostics can be defined as the prediction of a future condition, including the effect of degradation on a system's capability to perform its desired function and the remaining safe or service life, based on an analysis of system or material condition, stressors, and degradation phenomena (e.g., [5, 7, 37, 38]). In looking at how to provide a prognostic for a structural material, the process can in essence be summarized in the series of modules shown in Fig. 6 [9, 39]. Going from an initial state to an expected lifetime would be a "done deal" IF the necessary input data were complete and the models were sufficiently complete and able to address issues related to limited accuracy, noise, and measurement errors. Such models also need to be made computationally efficient and to perform assessments in sufficient detail and with sufficient computational
Fig. 6 Conceptual model for life models and inputs. (After [9, 39])
accuracy, while at the same time managing the inherent uncertainties in both the initial and ongoing measured data. In general, however, the needed data sets and models are not available, so quantification of what can be achieved is a current goal. In terms of the barriers to being able to predict expected lifetime there are:

(i) Missing information. The measurement methods used do not currently determine the initial state of individual components/structures/systems with high precision and, in many cases, adequate resolution. In many legacy systems there has not traditionally been adequate monitoring of the operating environment of individual components. Also, damage progression models have traditionally been empirical (e.g., Paris law), and it would be difficult to incorporate the missing information, even if it were available, into current models.

(ii) Uncertainty. There will always be uncertainty and noise in the input data, including signal-to-noise and sensor drift/aging effects.

(iii) Variability. Even if it were possible to eliminate uncertainty, it would be necessary to take variability into account, including that encountered in an operational environment, particularly with regard to applied stressors.

In seeking to provide better life estimates it has been recognized that it should not be NDE or SHM (e.g., [4, 8]), but that the two approaches are complementary and can contribute the data and insights needed for advanced diagnostics and prognostics. NDE has been more about seeking more efficient inspections and expanding to full coverage of internal and hard-to-access structures. SHM seeks much broader coverage and demonstrated flaw-sizing capability. Current SHM has, in many cases, limited coverage and limited reliability of flaw size determination, and has yet to demonstrate an acceptable POD, but the resulting data are given in near real time. It is the combination of NDE/SHM that seeks to accelerate progress to give the required coverage [40] and data sets. In the context of aerospace applications, addressing these challenges is being investigated.
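To make the empirical damage-progression model mentioned under barrier (i) concrete, the following is a minimal sketch of a Paris-law remaining-life integration from an NDE-measured initial crack size to a critical size; the material constants, geometry factor, and stress range are illustrative assumptions, not values from the chapter:

```python
import numpy as np

# Paris-law constants -- illustrative values for a generic steel; real values
# must come from materials testing.
C = 1e-11        # crack growth coefficient, m/cycle per (MPa*sqrt(m))^m
m = 3.0          # Paris exponent
Y = 1.12         # geometry factor (edge-crack approximation)
dsigma = 100.0   # applied stress range, MPa

a0 = 1e-3        # initial crack size from NDE, m
ac = 20e-3       # critical crack size from fracture mechanics, m

def paris_rul_cycles(a0, ac, dsigma, C, m, Y, da=1e-6):
    """Estimate remaining life in load cycles by numerically integrating
    da/dN = C * (dK)^m from a0 to the critical size ac."""
    cycles, a = 0.0, a0
    while a < ac:
        dK = Y * dsigma * np.sqrt(np.pi * a)   # stress intensity range
        cycles += da / (C * dK**m)             # cycles to grow by da
        a += da
    return cycles

print(f"Estimated RUL: {paris_rul_cycles(a0, ac, dsigma, C, m, Y):.3e} cycles")
```

Barriers (i)–(iii) enter through every quantity here: a0 from imperfect NDE sizing, C and m from scattered materials data, and dsigma from an uncertain stressor history.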
Monitoring of the operational environment is improving, with new and improved temperature, strain, and chemical sensors all under development. For material state sensing, researchers' attention is on both global parameters, which for structures include strain, displacement, and acceleration, and, in propulsion, parameters such as vibration analysis.

For a structural material, moving beyond diagnostics (an assessment at a point in time, based on observed data, e.g., an NDE or SHM assessment) to prediction of life and technologies for structural health monitoring/management (based on predicted future behavior) is requiring the development of new approaches, identified in schematic form in Fig. 7. These approaches range from general statistical, data-based assessments, which are applied to large populations of parts like bearings or to the performance of all pumps of a particular type or class, to those based on physical degradation models with specific data taken on a particular part or component. The higher-order analyses are both more costly and more time consuming.

A review of machinery diagnostics and prognostics for CBM is provided by Jardine et al. [41], but it does not consider nuclear power systems. An assessment of the state of diagnostics and prognostics technology maturity was provided by Howard [42]. A review of the current paradigms and practices in system health monitoring and prognostics was provided by Kothamasu et al. [43]. The insights from these papers have been combined into the current status of diagnostics and prognostics for various system types and elements, shown in Table 2.
Fig. 7 Range of prognostic approaches. (After [11])
Table 2 Assessment of state of maturity for diagnostic [D] and prognostic [P] technologies [7, 42, 44]

| Diagnostics/prognostics technology for: | AP^a | A^b | I^c | NO^d |
| Basic machinery (motors, pumps, generators, etc.) | D & P | | | |
| Complex machinery (helicopter gearbox) | D & P | | | |
| Metal structures | D | P | | |
| Composite structures | D* | | P | |
| Electronic power supplies (low power) | D | P | | |
| Avionics and controls electronics | D | P | | |
| Medium power electronics (radar, etc.) | D | P | | |
| High power electronics (electrical propulsion, etc.) | D | P | | |
| Instrumentation calibration monitoring (NPP) | D | | P | |
| Active components (NPP) | D | | P | |
| Passive components (NPP) | | D | | P* |

D*, some application; P*, development still needed; NPP, nuclear power plants (current) – for small modular reactors D&P technology is being investigated
^a AP = Technology currently available and proven effective
^b A = Technology currently available, but V&V for many uses not completed
^c I = Technology in process, but not completely ready for V&V
^d NO = No significant technology development in place
To implement SHM for passive components will require: advances in sensors; better understanding of what and how to measure within the plant or system; enhanced data interrogation, communication, and integration; new predictive models for materials damage/aging; and effective deployed system integration. Central to all such prognostics (remaining life prediction) is quantification of uncertainties in what are inherently ill-posed problems, with, in many cases, limited sensor networks that give only sparse data sets. For expanded implementation, there is a need for integration of enhanced CBM/prognostics philosophies into new plant designs, operation, and then operations and maintenance (O&M) approaches [12].

Technologies are being developed for non-nuclear applications, including instrumentation and system health monitoring for electronics, in what is being called electronics prognostics; see, for example, the work of Urmanov [45]. There are also integrated technologies being developed for advanced fighter aircraft and unmanned aerial vehicle (UAV) system health monitoring, which include both electrical/electronic and some mechanical systems. Within the field of advanced diagnostics/prognostics, systems have been deployed for individual elements, but fully integrated systems are still being developed.

In understanding prognostics and moving beyond CBM, trending of data, even within a normal operating band, and developing performance-related figures of merit can all assist an operator, particularly for applications to active components, by enabling "off-normal" conditions, or those trending in that direction, to be identified and early warning to be provided [46], as illustrated in Fig. 8. Additional time (ΔT) can potentially be gained for response when trends in data inside a normal operating band are followed.

When best practices are adopted, the implementation of advanced and integrated life-cycle management schemes with data and metrics from NDE and SHM has the potential to support prognostics, and there are economic advantages that can be provided. The potential effect of adding a prognostic on system economics is shown in the schematic given in Fig. 9. Even when operating within an initial design life, additional utilization can become available [4, 41].
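As a minimal sketch of the early-warning idea illustrated in Fig. 8 — trending a monitored parameter while it is still inside its normal operating band and extrapolating to the alarm threshold to gain response time ΔT — consider the following; the signal, drift rate, and threshold are illustrative assumptions:

```python
import numpy as np

# Hypothetical monitored parameter drifting slowly inside its operating band.
t = np.arange(0.0, 100.0, 1.0)                       # operating hours
rng = np.random.default_rng(1)
x = 0.50 + 0.002 * t + rng.normal(0, 0.01, t.size)   # slow drift + noise

ALARM = 0.80  # conventional alarm threshold (edge of the operating band)

# Conventional monitoring warns only when the signal crosses the threshold;
# prognostic trending fits the drift and extrapolates to the crossing time.
slope, intercept = np.polyfit(t, x, 1)
t_cross = (ALARM - intercept) / slope                # predicted crossing time

print(f"Fitted drift: {slope:.4f} units/hour")
print(f"Predicted alarm crossing at t = {t_cross:.0f} h "
      f"(early warning of ~{t_cross - t[-1]:.0f} h beyond current data)")
```

The gap between the predicted crossing time and the end of the current data record is the additional response time ΔT discussed above.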
Fig. 8 Schematic showing features in terms of early warning (response time from stressor-based prognostics)
Fig. 9 Life and component economics when prognostics added. (After [4, 41])
The question then becomes: what is needed to provide an advanced diagnostic, prognostic, and health management system, one which combines insights from NDE and SHM into advanced life management, as well as material characterization and models for damage evolution? An example of the concept, which identifies the numerous system elements to consider, after Mrad, is shown in Fig. 10.
Fig. 10 A moderately complex conceptual structure of a diagnostics, prognostics and health management system. (After [47])
The open question then becomes what the state of the art is for such integrated diagnostics and prognostics, as illustrated, for example, by application to nuclear power plant life/license extension [11, 43].
Models of Degradation Accumulation

Model-based approaches to prognostics are typically the most accurate, providing the best estimates of the rate of degradation growth. Developing and validating such models presents significant challenges, both experimentally and mathematically. In particular, the physics of failure (from damage initiation to failure of the component) is still generally poorly understood, especially for structural materials [6, 44]. For instance, while the factors that impact the growth of a crack in materials are reasonably well understood, the dynamics of incipient crack growth are less well known. An understanding of the impact of one or more stressors on the rate of growth of degradation is also needed. Numerical studies, backed by careful experiments, are being conducted at several institutions worldwide to obtain a better understanding of damage phenomena, especially as applied, for example, to the structural materials used in nuclear power plants (NPPs).
Diagnostics and Damage State Awareness

A related issue is the availability of diagnostic methods that are sensitive to the early stages of degradation. At issue are both the sensitivity and the specificity of the diagnostic method to the degradation mechanism of interest. Further, determining the current damage state (or level) from the diagnostic measurements is also challenging. It is likely that advances in diagnostics technology from other
industries can be adapted to the unique needs of the nuclear power area. It is also likely that no single diagnostic method can provide adequate information about the damage state of a material, component, or system. Instead, multiple orthogonal diagnostic tools will be necessary, as will novel data fusion methods, to uniquely determine the damaged state of the component [2].
Prognostics from Precursors

To be useful, estimates of RUL are necessary from the early stages of degradation (precursors), which can be before changes can be detected using NDE tools. Challenges in this area include appropriate definitions of degradation precursors (i.e., what constitutes a degradation precursor), the availability of measurement tools sensitive to the precursors, and an understanding of degradation development from precursor states to component failure [6].
Uncertainty Quantification

Given the various uncertainties associated with measuring the current state of components and those associated with stressors and degradation evolution, the RUL estimate is likely to be somewhat uncertain as well. Some methods for quantifying the uncertainty associated with the RUL are available, and constraining (bounding) estimates will need to be validated for NPP implementation.
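A minimal sketch of one family of such methods — Monte Carlo propagation of NDE sizing and model-parameter uncertainty through a closed-form Paris-law life estimate to a bounding RUL percentile — follows; all distributions and parameter values are illustrative assumptions:

```python
import numpy as np

def paris_life_cycles(a0, ac, C=1e-11, m=3.0, Y=1.12, dsigma=100.0):
    """Closed-form Paris-law fatigue life (cycles) for constant-amplitude
    loading, integrating da/dN = C*(Y*dsigma*sqrt(pi*a))**m from a0 to ac."""
    k = C * (Y * dsigma * np.sqrt(np.pi)) ** m
    return (ac ** (1 - m / 2) - a0 ** (1 - m / 2)) / (k * (1 - m / 2))

rng = np.random.default_rng(42)
N = 5000

# Illustrative uncertainty models: NDE sizing scatter on the initial crack
# size and material scatter on the Paris coefficient C.
a0 = rng.lognormal(mean=np.log(1e-3), sigma=0.2, size=N)   # m
C = rng.lognormal(mean=np.log(1e-11), sigma=0.3, size=N)

rul = paris_life_cycles(a0, ac=20e-3, C=C)                 # vectorized

print(f"Median RUL: {np.median(rul):.2e} cycles")
print(f"5th-percentile (bounding) RUL: {np.percentile(rul, 5):.2e} cycles")
```

Reporting a low percentile of the resulting distribution, rather than the median, is one simple way of producing the constraining (bounding) estimates mentioned above.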
Technology Demonstrations

There are currently various activities seeking to understand and align the data and capabilities provided by SHM, CBM, and PHM [48], and NDE with SHM, specifically for damage characterization [49]. Critical to these approaches are an understanding of failure behaviors and establishing methods that enable the information and insights provided by various measurement modalities and disparate data sets to be combined.

One example of online condition monitoring that can be used to guide thinking with regard to the processes and practices needed to potentially extend the operation of nuclear power plants is the work by Meyer et al. [50]. Currently, acoustic emission (AE) is the only tool sanctioned by the ASME Code for online monitoring of passive systems, structures, and components (SSC). This technology was used at both the Limerick Generating Station Unit 1 reactor [51] and the Watts Bar Unit 1 reactor [52]. These studies ultimately resulted in a methodology that was included in the ASME Code, Section V, Articles 12, 13, and 29 [50]. More recent work has sought to expand and deploy both AE and guided waves in a laboratory study to monitor fatigue crack growth in 4-inch-diameter stainless steel pipe [50].
In reviewing the AE–guided wave study of the pipe [50] it was recognized that there is an opportunity to bring together both NDE and SHM data, if adequate consideration is given to the measurement systems and the output metrics. The steps that need to be considered in this approach are shown in Fig. 11a. The most fundamental step is to understand the aging or degradation mechanism that is occurring, and the resulting degradation, and to select a metric that can be measured and provided as an output from both NDE and SHM data sets. For example, for corrosion it could be a thickness measurement, or in fatigue a crack size.
Fig. 11 Combining NDE and SHM: (a) conceptual framework, (b) application to a fatigue experiment
Following identification of the process metric, measurement modalities are then needed to give this parameter at discrete times (with NDE) and as a function of time (with SHM). The resulting metric is then combined into a time–metric plot (e.g., crack growth), which can then be used, together with other data and assumptions, in a prognostic algorithm. An example of the two paths to give data is illustrated for a fatigue experiment in Fig. 11b [50].

In establishing integrated approaches to combine both NDE and SHM data, all parts of the prognostics modalities can be advanced and applied. For example, a wireless network can potentially be used to transmit the SHM data, or some subset or locally calculated metric [53], and if full data sets are recorded, a variety of advanced data processing tools can potentially be deployed, including artificial intelligence [54]. But irrespective of how data are transmitted, recorded, and processed, the most critical attribute is to provide "common metrics" that can be derived from both NDE and SHM data and which can then be combined into a remaining useful life assessment (e.g., [46, 55]).
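As a minimal sketch of this common-metric idea — discrete NDE crack sizes and a continuous SHM-derived crack-size metric fused on one time axis and extrapolated toward a critical size — consider the following; all measurement values and the critical size are illustrative assumptions:

```python
import numpy as np

# Discrete NDE crack-size measurements at periodic inspections.
nde_t = np.array([0.0, 500.0, 1000.0])           # inspection times, hours
nde_a = np.array([1.0, 1.4, 2.1])                # crack size, mm

# Continuous SHM-derived estimate of the SAME metric after the last inspection.
shm_t = np.arange(1000.0, 1400.0, 10.0)          # monitoring times, hours
shm_a = 2.1 * np.exp(0.0008 * (shm_t - 1000.0))  # crack size, mm

# Common-metric fusion: both modalities report the same physical quantity,
# so they can simply be concatenated on a shared time axis.
t = np.concatenate([nde_t, shm_t])
a = np.concatenate([nde_a, shm_a])

# Exponential growth fit (log-linear), extrapolated to an assumed critical size.
b, ln_a0 = np.polyfit(t, np.log(a), 1)
a_crit = 10.0                                    # assumed critical size, mm
t_crit = (np.log(a_crit) - ln_a0) / b
print(f"Predicted time to critical size: {t_crit:.0f} h "
      f"(RUL = {t_crit - t[-1]:.0f} h)")
```

The point of the sketch is the data structure, not the growth law: once NDE and SHM report the same metric, any prognostic algorithm can consume the combined record.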
Technical Challenges

The challenges in providing advanced prognostics, particularly for passive components, are not negligible and still require systematic and interdisciplinary attention. The first questions are what to measure, to give data that will reliably provide a common metric, and how to measure it, with adequate sensitivity, using NDE and SHM modalities. In too many cases current sensors simply lack the required (desired) sensitivity to parameters of potential importance. There is then a shopping list of what is needed, in terms of technology and capabilities, to more fully implement an integrated NDE/SHM/prognostics vision in the age of NDE 4.0:
• Better sensor materials (more sensitivity)
• Data interrogation, communication, and integration
• Handling large volumes of data (possibly in real time) – big data and data fusion
• Addressing signal to noise – extracting signals from noise (to give early detection) and managing drift in sensors (aging)
• Understanding and improving the stability of measurement systems/sensors over time (eliminating drift)
• System integration and deployment on real-world hardware
• Better understanding and modeling of the phenomena of aging and degradation – effects of stressors on parts, including combinations of multiple stressors
• More complete aging-damage models
• Health sensors/NDE/NDI – sensors for SHM – smart components (self-diagnostic)
• Data integration with process models
• Predictive/prognostic models – symbiotic systems
• Probabilistic analysis – risk-informed in-service inspection (ISI)
• Integration of prognostics into plant operation and maintenance (O&M)
• Cost of ownership – and life cycle management
• Development of tools for early damage characterization
• Quantification of uncertainty (ill-posed problems)
• Moving capabilities from SHM to true prognostics, at system level, particularly for passive components, and adding the impact of stressors and resulting degradation
The overarching requirement is to achieve the full integration of NDE/SHM into engineering and the product life cycle – design for inspectability and monitoring, and then understanding life "utilization" and prognostics – to give an accurate estimate of RUL, with quantifiable uncertainty.
Summary

NDE is a family of mature technologies which have benefited from major advances in science, technology, and data approaches in the past 20 years, most importantly leveraging advances in computers for both data processing and modeling. Condition based maintenance (CBM) is now routinely applied to many active components (e.g., pumps, valves, and rotating machines), in some cases with algorithms (prognostics) that estimate remaining useful life (RUL). The same period has seen a slower rate of development and deployment of structural health monitoring (SHM) applied to passive components in high-tech industries, including both aerospace and nuclear (e.g., airframes, pressure vessels, and concrete). New approaches to prognostics are now offering expanded opportunities for deployment, bringing together more traditional NDE data and that given by SHM for passive structures, and utilizing new sensors, wireless data transmission, robotic deployments, and advanced data processing, including artificial intelligence, as more big data become available for analysis within the framework of the internet of things (IoT) and the move to NDE 4.0. Fundamentally, both NDE and SHM need to provide a common metric or metrics, which can then be combined with stressor data and degradation models to estimate RUL with a prognostic model. Moving forward there is a need to:

• Identify common metrics that can be measured using both NDE and SHM technologies.
• See NDE increasingly as part of CBM/prognostics and the quality manufacturing process.
• Use NDE to minimize ownership costs.
• Make NDE more quantitative and more sensitive, particularly for early damage.
• Use robots, process large data sets, use new sensors, and integrate NDE into manufacturing metrology.
• Develop tools for early damage characterization.
• Move, for structural components, from SHM to true prognostics, at system level.
• Full integration of NDE into the engineering and product life management cycle – design for inspectability and monitoring.
• SHM capabilities that provide local data reduction, reducing data transmission bandwidth and giving a metric that can then be utilized in a prognostic prediction.

The adoption of prognostics has the potential to provide a technological and economic framework that can support use of advanced NDE and SHM. PHM is best implemented and sold during the build/installation phase of a project: it is always cheaper to implement a PHM system in the beginning (e.g., when starting a new design/process) rather than after the fact. This is demonstrated by some of the most advanced deployments, which are in the automotive sector. The community is at a point where new approaches to data management and use, including addressing big data needs for NDE and SHM, have the potential to revolutionize asset life management and prognostics in the age of Industry 4.0.
Cross-References

▶ Artificial Intelligence and NDE Competencies
▶ Best Practices for NDE 4.0 Adoption
▶ Compressed Sensing: From Big Data to Relevant Data
▶ Digital Twin and Its Application for the Maintenance of Aircraft
▶ Digitization, Digitalization, and Digital Transformation
▶ In Situ Real-Time Monitoring Versus Post NDE for Quality Assurance of Additively Manufactured Metal Parts
▶ Introduction to NDE 4.0
▶ NDE 4.0 in Civil Engineering
▶ NDE 4.0 in Railway Industry
▶ NDE in Additive Manufacturing of Ceramic Components
▶ NDE in Energy and Nuclear Industry
▶ NDE in Oil, Gas, and Petrochemical Facilities
▶ NDE in The Automotive Sector
▶ Registration of NDE Data to CAD
▶ Smart Monitoring and SHM
▶ The Human-Machine Interface (HMI) with NDE 4.0 Systems
▶ Value Creation in NDE 4.0: What and How
9  Reliability Evaluation of Testing Systems and Their Connection to NDE 4.0

Daniel Kanzler and Vamsi Krishna Rentala
Contents

Introduction to the Need and Capability of NDT
  The Need of NDT
  Capability and Reliability of NDT
  Example: Railway Maintenance
Understanding the Reliability of NDT
  Definition of the Reliability of NDT Systems
  Measurement Uncertainty Versus Testing Uncertainty
  Receiver Operating Characteristics
  The Probability of Detection
Reliability Evaluation of NDE 4.0
  Introduction
  Modular Model for NDE 4.0
  The Probabilistic Framework
  Issue of Scarce Events
Summary
Cross-References
References
Abstract
This chapter mainly focuses on the major aspects of the reliability of nondestructive testing (NDT) techniques. From the safety point of view, the evaluation of NDT techniques is vital for many risk-involved industries such as aerospace, railways, nuclear, oil and gas, etc. In addition, successful implementation of the damage tolerance concept relies heavily on the reliability of NDT techniques. In other words, given the aims of NDE 4.0, the quantitative evaluation of NDT is becoming vital. The first part of this chapter deals with the importance of NDT reliability with regard to economical, jurisdictional, and safety-critical requirements. Upon highlighting the importance of NDT, the second part of the chapter provides an overview of the understanding of the reliability of NDT. The third and last part of this chapter focuses on reliability evaluation under NDE 4.0, along with a discussion of the need for and possibilities of such evaluation.

Keywords
Reliability · POD · ROC · Validation · Capability
Introduction to the Need and Capability of NDT

The Need of NDT

The discussion on the need of NDT and the estimation of its capability always starts with a basic question: why should the reliability of NDT systems be questioned at all? In recent references in the NDT community, the evaluation of capabilities is considered essential. To allow a structured discussion about the value of reliability evaluations of NDT, this basic question needs to be answered first, because the most challenging and difficult questions concerning the reliability of NDT are strongly connected to the need of NDT. In general, there are two different reasons for using NDT in companies/industries: (a) the company/industry itself requires the use of NDT, and (b) the customer requires the company/industry to use NDT.
The Company/Industry Requirements to Use NDT

Due to the benefits and uses of NDT, the management of a company/industry introduces NDT as a new tool for better quality management. The two main reasons for using NDT in companies/industries are: (a) increasing product quality and (b) decreasing overall costs. The immediate value of well-used NDT in quality management is the improvement of production quality. Increasing quality leads to a decrease in the number of reclamations and an increase in the value of the company's trademark. The importance of the trademark of a product, from the sales and marketing point of view, can be shown and proven for various products. For example, as soon as a new Apple iPhone [1] is released, the media starts to discuss the relation of "price versus material costs" in detail. In addition, quality distinctions for certain products, such as Japanese products (after the introduction of Kanban) or German products (with the "Made in Germany" tag), lead the customer to value higher quality. Without doubt, higher quality also commands a higher price and a larger profit margin, which counters the economic argument against costly NDT. In addition, the overall decrease in company/industry costs is also connected to the level of product quality. If the number of complaints decreases, the service effort and the corresponding costs also decrease. For example, in automotive
companies, the number of recalls is one of the imminent threats, costing large amounts of money and threatening the image of the companies [2]. In the recent past, aviation companies had similar experiences, which led to high economic losses [3]. Moreover, NDT techniques evaluate the present status of technical systems, enabling life-extension concepts and resulting in shorter or fewer downtimes due to longer intervals between maintenance schedules.
The Customer Necessitates the Company/Industry to Use NDT

A company may use NDT techniques without evaluating the data in a broader view, such as during optimization of the design or the production processes. In that case, the company loses the opportunity to gain additional potential from the data: even though the NDT costs are incurred, the uses of the data provided by the NDT techniques are forgone. Therefore, it is important to have a closer look into the procedure of NDT. Who demands the procedure of NDT? The demanding party can be a real customer (a human being) or an entity such as the government or an insurance company. Typical NDT requirements reach companies from different entities such as the FAA [4], NASA [5], and Det Norske Veritas [6] through original equipment manufacturers. With all this information, one can understand the need of NDT in industries. Further details regarding the capability and reliability of NDT are provided in section "Capability and Reliability of NDT."
Capability and Reliability of NDT

In general, every manager can individually decide whether NDT is necessary for a company or a department. In other words, they adopt NDT techniques if it is possible to monetize the additional quality and if their stakeholders go along with their concept of NDT. However, if a company uses NDT, it should also be aware of the capability and reliability of NDT. Generally, capability assessments of NDT techniques involve observing the condition of facilities, materials and equipment, operating practices, and overall expertise in the execution of the NDT process, whereas reliability determinations measure the flaw detection probabilities achieved by NDT and the ability to discern flaw characteristics [7]. Because they are susceptible to errors that might lead to wrong results, NDT techniques are calibrated before being used and maintained at regular intervals. Unfortunately, wrong results are not only useless but also risky. In a more complex way, the same is true for the reliability evaluation: if the limits of a testing system are not known, the NDT concept as a whole can fail. It is therefore obvious that for events followed by a serious aftermath, the reliability of NDT has to be evaluated. The above-mentioned stakeholder group, consisting of government and insurance companies, might demand not only the use of NDT but also the evaluation of its capability and reliability. Here are a few examples showing the interconnection between NDT and reliability-related accidents with serious aftermaths. In three cases of accidents (railway 2009: Viareggio [8], aviation 1989: Sioux City [9], power plants 1988: Irsching [10]), the capability and reliability of NDT systems have been
an important part of the public discussion. These accidents with serious aftermaths are not daily events. However, the clarification of such events in court takes a long time and therefore also a large amount of money. Of course, the number of accidents due to technical failure can be reduced by the correct use of NDT and by reliability evaluation. Even though the number of accidents with serious aftermaths is decreasing, the costs per accident are rising. One consequence of the rising costs of failures is the change in standards and guidelines addressing the capabilities of technical equipment. In recent years, the need for NDT reliability has increased due to the revised version of ISO 17025:2018. The importance of evaluating the uncertainty of testing and measurement systems in accident cases and court cases is increasing [11]. Reference [11] is a good example of the importance of evaluating quality and reliability in a probabilistic way: it shows that this is no longer only an issue for engineers or statisticians but has also arrived in court as failure cases. The capability of measuring and testing systems is highly important and essential. However, there is a difference between measuring and testing. When measuring a physical unit (in SI), the uncertainty evaluation of measurements according to the Guide to the Expression of Uncertainty in Measurement (GUM) is certainly demanded. For testing, i.e., the process of detecting and analyzing defects, the issue of uncertainty is slightly different. During testing, the result might just be an indication, but combined with the existence of a defect, the testing result for a defect becomes a "hit" (defect found) or a "miss" (defect not found). Because of this, the typical equation for calculating measurement uncertainty cannot be used; reliability evaluations instead have to be based on detection errors.
The Failure as the Reason for Reliability Evaluation
As discussed earlier, the event of a failure is the main motivation to ensure a reliable NDT system, as a failure might lead to the loss of large amounts of money, injuries or death of workers, and large-scale environmental damage. Where such losses occur, the specific event is called "a catastrophic accident." Applications with the risk of catastrophic accidents are called safety-relevant situations, where the reliability of the NDT has to be taken into consideration and demanded. However, the reliability evaluation of the NDT system is only one part of preventing failures. Failure occurs when the load exceeds the capabilities of the component, either overall or locally within the component's inner structure [12]. The reason for failure by overall exceedance is misuse of the component or a design flaw, where NDT might not be the best option to avoid failures. In the case of local exceedance, the load can be larger than the local capability. If this is not due to the design, the issue could be caused by an unexpected defect. If a defect jeopardizes the functionality of a component, it is called a "critical defect." For further discussion, it is necessary to define critical defects as inhomogeneities in the material that are not acceptable for the design of the component. Typical examples are cracks in parts under tensile loading or volumetric defects such as pores in corrosion barriers. The
handling of material with inhomogeneities changed drastically in the last decade. While in the 1990s the "zero-defect tolerance" was the baseline of quality control (often used in the automotive sector), the more traceable concept of the "damage-tolerant approach" was later introduced, mainly in the aerospace and aviation sectors. The damage-tolerant approach is now the state of the art in handling inhomogeneities. Under the "zero-defect tolerance" concept, even the smallest defect should be found and repaired as soon as possible. Even though NDT techniques possess the sensitivity to identify the smallest defects, they cannot detect defects at this sensitivity limit reliably. Moreover, the claim that even the smallest, microscale inhomogeneity will lead to the failure of a component is not tenable today. Under the "damage-tolerant model" concept, the functionality of a component might be jeopardized by a defect of a defined size. Even with the growth of cracks during usage or the rate of corrosion over time, the component can still be operated as long as the damage is less than the critical defect size at which the component might fail. This concept is mainly based on the assumption that a critical defect does not appear all of a sudden and that not every small inhomogeneity is critical for a component. Furthermore, this "damage-tolerant model" is mainly based on the probabilistic nature of the situation.
Probabilistic Nature of Failure
The existence of a critical defect does not automatically mean that this defect will lead to the failure of a component. To describe the probabilistic nature of a failure, the following mathematical model can be used. The probability of failure (POF) describes the statistical event that a failure (F) occurs, conditioned on the existence of a critical defect in the component (C) which was not detected ($\bar{D}$):

$$\mathrm{POF} = p(F \mid \bar{D}, C) \tag{1}$$
As mentioned earlier, the POF also depends on the design and the usage of the component, e.g., the loads. Here, the short equation is used, focusing on the variables that are relevant for NDT. Furthermore, the POF depends on the structural integrity, and therefore it is a task for the design department. The probability of detection of critical defects (PODc) is the main characteristic for the reliability evaluation. Based on the requirement of testing for the critical defect (C), the detection (D) of the defect is also a probability:

$$\mathrm{POD_c} = 1 - p(\bar{D} \mid C) = p(D \mid C) \tag{2}$$
The complementary event to detecting a defect is missing it. The probability of detection (POD) depends on the design of the component and the capability of the NDT system, which will be discussed in section "The Probability of Detection." Especially the dependency on the design is often discussed critically. However, design for NDT becomes relevant in cases where there is more
than one possibility for designing a component. In the near future, this will be an interesting issue in the area of additively manufactured (AM) parts. The probability of detection is the major concern of the NDT department, which makes it challenging to influence the design. In addition, it is also important to know the probability of occurrence (POO) of a critical defect (C):

$$\mathrm{POO} = p(C) \tag{3}$$
Whether defects occur or not strongly depends on the production method and, again, on the design of the component. The POO is mainly a task for the involved production engineer. As seen with AM, the POO can be complicated. For example, some of the melting processes within AM production behave like welding and others more like casting; there is no classical way to group AM with either the welding processes or the casting processes. If all the equations are combined into one calculation, NDT lies in the sandwich position of the probability of failure:

$$\mathrm{POF} = p\bigl(F \mid p(\bar{D} \mid p(C))\bigr) \tag{4}$$
Finally, it can be seen that three different departments have to work together to produce a reliable component: the designers of the parts, the production engineers, and the NDT department. Moreover, the design is an essential part of all three described probabilities. Therefore, "design for NDT" might be an interesting tool in the near future. A minimal numerical sketch of how the three probabilities combine is shown below.
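As a rough illustration of Eq. 4, consider a hypothetical sketch in which the three probabilities are treated as independent factors along one event chain (a critical defect occurs, the defect is missed, and the missed defect causes failure). The numbers and function name below are illustrative assumptions, not values from this chapter:

```python
# Hypothetical sketch of the POF chain from Eq. 4 (illustrative values only).
# Assumed chain: a critical defect occurs (POO), it is missed by NDT
# (1 - POD_c), and the missed defect then causes failure.

def probability_of_failure(poo: float, pod_c: float, p_fail_given_missed: float) -> float:
    """Combine the three probabilities along one assumed event chain."""
    p_missed_defect = poo * (1.0 - pod_c)         # critical defect exists and is missed
    return p_missed_defect * p_fail_given_missed  # missed defect leads to failure

# Illustrative numbers: 1% of parts have a critical defect, the NDT system
# detects 90% of them, and a missed critical defect causes failure 50% of the time.
pof = probability_of_failure(poo=0.01, pod_c=0.90, p_fail_given_missed=0.5)
print(f"POF = {pof:.2e}")  # -> POF = 5.00e-04
```

The sketch makes visible how each department's quantity (POO from production, PODc from NDT, and the failure consequence from design) enters the result multiplicatively.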
Example: Railway Maintenance

To illustrate the previously mentioned theory and its connection to the reliability of NDT, an industry example is discussed: the railway industry and the testing of trains. Derailing of trains might lead to the death of passengers as well as people in the surrounding area of the accident. It also has a high impact on the environment, as seen in Viareggio. Hence, trains are tested at regular periodic intervals and checked by governmental institutions; the need of NDT is indispensable. The most critical components, with the highest requirements for quality and structural integrity, are the components of the wheel-rail contact, which ensure the movement of the train: the rail, the wheel, and the wheel set. To avoid failures and derailing of the train, each of these components is tested and maintained regularly (Fig. 1). This particular example focuses on the wheel set. Several forces affect the wheel set, such as the forces due to the weight of the cabin, the torsion forces during acceleration and braking, the interaction of more complex forces while driving through curves, etc. Hence, a crack in the wheel set is a serious issue in the mentioned situations. However, due to the development of modern NDT methods, even the smallest defects can be detected.
Fig. 1 Schematic picture of the critical components of a train
Fig. 2 The perfect inspection time spot
After covering a defined distance (driven kilometers) or at specific time intervals, the wheel set is tested by different methods. Within these intervals, based on the expected loads, a small defect such as a crack can grow during service. The testing and maintenance interval should be defined so that a crack is detected before it becomes large enough to jeopardize the use of the asset until the next scheduled inspection. Figure 2 shows the time from crack formation until the crack reaches the critical length which might jeopardize the asset. The different phases of crack growth (crack formation, stable and unstable growth) depend strongly on the material, the geometry, and the load on the wheel set during its time of usage. Yet, in this case, one line symbolizes all possible situations which can lead to a crack
growing in the wheel set. Using a statistical mean crack growth rate is not acceptable, as this might lead to accidents; it is clear that this model oversimplifies the situation. With regard to the testing system, there is obviously a minimum detectable defect size: the physics of the testing system sets a lower limit, and cracks smaller than this cannot be detected. This underlines the fact that there is no usable "zero-damage" management, because there might be cracks inside the wheel set which cannot be seen at the beginning of crack formation, which is an important part of the damage-tolerant design concept. Through inspections, defects can be identified, and the wheel set can be replaced or repaired. The detection and repair of the defect prevents the crack from growing unstably or even reaching the critical length. Nevertheless, there is a second major simplification: with only one inspection in a given period of time and no crack detected, the crack may still reach the critical length, and failure of the asset becomes a possible scenario. To avoid this situation, there should be more than one inspection opportunity before the crack might reach its critical length. In the advanced model, the above-mentioned oversimplifications are avoided. There are three different time slots to detect the defect, as shown in Fig. 3. During the first one, the crack is still too small to be detected. During the second, the crack length reaches the size corresponding to the decision threshold. A perfect testing system might detect the defect, but due to the probabilistic nature of the capability of the testing system, 50% of the defects are still undetected during this second phase of detection (here: probability of detection, POD). However, there is still one reliable time slot of testing to ensure the defect will be detected. This leads to the question of the length of the safety margin. Different industries accept a 90% probability of detection in the case of three testing intervals, in which the defect could be found reliably before it grows to a critical
Fig. 3 Advanced model of probabilistic crack propagation
length [13]. Because the defect keeps growing, later inspections have higher detection probabilities. This model is therefore quite conservative and has proven its value in the past. In the later section "Reliability Evaluation of NDE 4.0," the predictive maintenance model, which is one major aim of NDE 4.0, will be discussed. However, changing the slots for testing intervals, without reaching the critical crack length and without knowing either the reliability of the testing system or the probabilistic framework of the crack growth, is a huge challenge. A small sketch of why three inspection slots are considered reliable is given below.
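To make the safety-margin argument concrete, consider a minimal sketch in which each inspection is treated as an independent trial with the same POD. This independence is a simplifying assumption (in reality the POD rises as the crack grows between inspections), but it shows how the chance of missing a defect falls geometrically with the number of inspection slots:

```python
# Minimal sketch: cumulative detection probability over n independent
# inspections, each with the same per-inspection POD (simplifying assumption;
# in practice the POD increases as the crack grows between inspections).

def cumulative_pod(pod_per_inspection: float, n_inspections: int) -> float:
    """Probability that at least one of n inspections detects the defect."""
    miss_all = (1.0 - pod_per_inspection) ** n_inspections
    return 1.0 - miss_all

for n in (1, 2, 3):
    print(n, f"{cumulative_pod(0.90, n):.4f}")
# 1 0.9000
# 2 0.9900
# 3 0.9990
```

With a 90% POD and three opportunities, the chance of missing the defect in all slots drops to 0.1%, which is one way to read the conservatism of the three-interval scheme.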
Understanding the Reliability of NDT

Definition of the Reliability of NDT Systems

The reliability of NDT systems was defined by the European-American Workshop on Reliability of NDE [14] as the degree to which an NDT system is capable of achieving its purpose regarding detection, characterization, and false calls. From this definition, a few probabilistic characteristics of interest follow:
Probability of Detection (POD)
There are several definitions for POD, e.g., ASTM E3023: "the fraction of nominal discontinuity sizes expected to be found given their existence" [15]. As mentioned earlier, given the presence of a defect which could soon become critical, the probability of detecting this defect is the POD. Often, the size of a defect is one parameter distinguishing between a defect which should be detected and an inhomogeneity which does not need an immediate response. The POD is always a function of the defect size.

Probability of Identification (POI)/Probability of Characterization (POCh)
The POI and POCh describe probabilistic detectability depending on the kind of inhomogeneity. In a testing situation, there are different kinds of inhomogeneities or geometrical attributes which lead to an indication for the NDT method. For components subject to load, cracks are more critical than pores, mainly due to the difference in defect morphology between cracks and pores or porosity. Because crack-like defects propagate under loading and can cause the failure of a component or a structure, they are considered critical defects. Consequently, it is important to distinguish between different kinds of attributes. This distinction between critical and noncritical structures is the basis of POI and POCh.

Probability of Correct Sizing (POS)
As already mentioned in the subsection about maintenance, the size of the inhomogeneity is often essential for the reliability of tested components. Regularly, there is an evaluation of the possible size of the found inhomogeneity.
The persistent obstacle that known measurement-uncertainty methods cannot be applied directly to nondestructive testing is a major topic of ongoing research. This topic is discussed in more detail in section "Measurement Uncertainty Versus Testing Uncertainty."
Probability of False Alarm (POFa)
A false alarm is complementary to the POD discussion. While POD concerns the detection of an existing defect, here a defect-free part is denominated as defective. A reason might be a low signal-to-noise ratio while testing or structural attributes which are mistaken for defects. The evaluation of the POFa is not primarily a discussion of structural integrity but an economic one about the costs of testing procedures. For the time being, it should be kept in mind that the use of NDT can be focused on the detection of defects or on the cost optimization of the test procedure. To illustrate both options: in the case of a safety-relevant component, the use of NDT will be focused on the detectability of defects; if measurements continue in the field of structural health monitoring, optimizing the costs by avoiding false alarms becomes important.

Interconnection Between the Evaluation Characteristics of NDT Reliability
From the logical point of view, there is a dependence between these characteristics. Before a defect can be sized, it must be characterized correctly (different NDT methods decide the criticality based on the type of defect), and the characterization or identification is in turn based on the detection of the defect. Consequently, the POD is of main interest when reliability is the primary focus. The probability of false alarm and the probability of detection depend on each other, as discussed in the later section on receiver operating characteristics. It is important to keep in mind that the PODc, as defined earlier, focuses on critical defects and is already a combination of detection, identification, and sizing. Especially in the field of validation of artificial intelligence (AI) systems, further characteristics are being developed to specify the capabilities of AI systems according to their detectability, false alarm rate, and the criticality of their findings and misses. However, a single characteristic representing all these different kinds of information is still under development. Often the POD is used as an embodiment of reliability. Moreover, neither the POI/POCh nor the POS has been discussed as broadly as the POD. Hence, this chapter next focuses on the POD, due to its importance in industry and its statement about technical equipment.
Measurement Uncertainty Versus Testing Uncertainty

In measurement systems, there is a defined uncertainty based on SI (Système International) units, e.g., measuring linear dimensions in millimeters (mm) and time in seconds (s). The definition of a testing system is not as clear. The following example
focuses on ultrasonic testing equipment in a pitch-catch or pulse-echo configuration with a conventional probe. Within the ultrasonic probe, an electrical signal is transformed into a mechanical wave. Through the coupling, an ultrasound wave is initiated in the tested component. In the case of a defect, the wave is partly reflected, travels back through the component, couples again, and is transformed via the ultrasonic probe into an electrical signal (Fig. 4). Within this testing situation, different pieces of information are determined. While testing, the operator can obtain the difference in acoustic pressure in dB, the time in seconds until the reflected wave reaches the probe, etc. To gain the necessary information regarding the details of the defect, reference defects of known size are required for comparison. This comparison is based on a known structure or a physical table, which makes it possible to estimate the defect. Even though there are physical units within the testing process (electrical signal in V and A, time in seconds between echoes), the information about the defect is gained through a comparison involving the kind of defect, the geometry, and the material of the component. These influences make it difficult to use a measurement uncertainty evaluation for testing systems. It has to be noted that in the procedure to evaluate a measurement system, there is no need to evaluate the detectability of defects: the primary objective of a measurement system is to measure characteristics such as the size of an already identified or known defect or region of interest in the sample. Prior to the evaluation of the uncertainty of a testing system, it is important to distinguish the three different levels of probabilistic uncertainty, i.e., detection, characterization, and sizing. Here, the probability of detection and the probability of characterization correspond to the evaluation of testing uncertainty, whereas the probability of sizing (POS) corresponds to the measurement uncertainty. The POS is commonly referred to as the correct sizing of a defect for the acceptance or rejection criterion in the industrial scenario. Even though this term is used rarely, it signifies the accuracy of estimating the size of a defect. Hence, both the probability of sizing and the measurement uncertainty are similar in nature in terms of the correct estimation of defect size, which is essential for the acceptance and rejection criterion. At each of these named levels, i.e., detection, characterization, and sizing, the limits of the system should be specified. Therefore, the POD evaluation is the beginning of a correct evaluation.
Fig. 4 Schematic ultrasonic testing situation
Receiver Operating Characteristics

One of the earliest approaches to visualize the capabilities of a detection system is the receiver operating characteristics approach. It was first used for the radar detection of airplanes and ships in the Second World War [16]. More recently, it has mainly been used in the medical field, due to its capability to include the POFa. Currently, it is a valid tool for evaluating answers provided by artificial intelligence (AI), in order to characterize the capabilities of neural networks.
Fourfold Table
The receiver operating characteristics (ROC) approach is based on the fourfold table, i.e., the four possible answers a testing system might deliver. On the level of abstraction shown in Fig. 5, the testing system indicates a possible defect with a red lamp and indicates no inhomogeneity with a green lamp; the truth within the component might be defect free or defective. Both events together result in four different testing results:

1. Hit – True Positive (TP): The defective component causes an indication of the testing system (red lamp). The defect is detected, and hence the component will probably be repaired or replaced.
2. Miss – False Negative (FN): Despite the fact that the component is defective, the testing system does not indicate a defect (green lamp). The defect was missed. In this case, the defective component might return to use, with the probability of failure and the risk of a catastrophic failure in the future.
3. Confirmed defect free – True Negative (TN): A defect-free part results in no indication of the testing system. Even though this is the best possible event for the component, the costs for the testing remain. Therefore, at this point, the use of testing systems is often criticized.
Fig. 5 The four possible answers of a testing system
4. False alarm – False Positive (FP): Even if the component is free of any defects, the testing system indicates a defect. Possible reasons for this event might be signal noise, geometrical structures of the component, or similar incidents, all depending on the testing system and the tested component.

As mentioned earlier, the characteristics of interest are

$$\mathrm{POD} = \frac{\text{Hits}}{\text{Hits} + \text{Misses}} \tag{5}$$
The POD is the proportion of hits among all defective components. In this case, there are no requirements regarding the type or the size of the defect. There are two possible ways to use this POD: first, it can be computed over all defects to obtain a general statement; second, it can be restricted to defects of one kind and one size to describe the testing system for one possible use. For the second case, there is a connection to the size-dependent PODs described later. The probability of false alarm is the second characteristic of interest, as shown in Eq. 6. Often the false alarm rate is used instead, to avoid generalizing over all possible false alarm events, which would require a large amount of data that is sometimes impossible to collect experimentally. Hence, model-assisted probability of detection (MAPOD) studies [17] are widely used to generate trial data under known defect conditions or variations in order to overcome such difficulties. Even though model-based approaches can provide the necessary data, the assumptions made in these approaches are not necessarily true in all situations.

$$\mathrm{POFa} = \frac{\text{False Alarms}}{\text{Confirmed defect free} + \text{False Alarms}} \tag{6}$$
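As a small illustration, the two characteristics can be computed directly from the four cells of the fourfold table; the counts below are invented for the example:

```python
# Minimal sketch: POD (Eq. 5) and POFa (Eq. 6) from fourfold-table counts.
# The counts are invented for illustration.
tp, fn = 45, 5    # defective parts: hits and misses
tn, fp = 940, 10  # defect-free parts: confirmed defect free and false alarms

pod = tp / (tp + fn)   # Eq. 5
pofa = fp / (tn + fp)  # Eq. 6
print(f"POD = {pod:.2f}, POFa = {pofa:.3f}")  # POD = 0.90, POFa = 0.011
```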
Receiver Operating Characteristics Curve
The ROC curve is plotted using the two characteristics POD and POFa: the POD is plotted on the Y-axis, whereas the POFa is plotted on the X-axis. The ROC is useful in cases where both the detectability and the costs of the testing process should be evaluated. The ROC evaluation can be used for simple answers, such as a red or green signal from a testing system showing hit or miss. However, there are advanced approaches using signal and noise distributions with a defined decision threshold to describe more complex testing systems and the role of possible decision thresholds within the testing procedures (Fig. 6). A sketch of such a threshold sweep follows below.
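To illustrate the signal/noise-distribution view, the following hypothetical sketch sweeps a decision threshold across two Gaussian amplitude distributions (noise from defect-free parts, signal from defective parts) and traces the resulting (POFa, POD) pairs; all distribution parameters are assumptions made for the example:

```python
# Hypothetical sketch: ROC points from assumed Gaussian signal and noise
# amplitude distributions, sweeping the decision threshold.
from statistics import NormalDist

noise = NormalDist(mu=1.0, sigma=0.5)   # amplitudes from defect-free parts (assumed)
signal = NormalDist(mu=2.5, sigma=0.7)  # amplitudes from defective parts (assumed)

for threshold in (1.5, 2.0, 2.5):
    pod = 1.0 - signal.cdf(threshold)   # defective part exceeds threshold
    pofa = 1.0 - noise.cdf(threshold)   # defect-free part exceeds threshold
    print(f"threshold={threshold:.1f}  POD={pod:.3f}  POFa={pofa:.3f}")
# Raising the threshold lowers both POFa and POD, tracing out the ROC curve.
```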
The Probability of Detection

The evaluation of the probability of detection of nondestructive testing systems was first mentioned in the aerospace sector in 1972 [18]. Since then, different approaches have been developed and are now used for many different NDT systems in different testing situations, component geometries, and various materials.
Fig. 6 Receiver operating characteristics: in green and red, the events true positive (TP), false positive (FP), true negative (TN), and false negative (FN) are descriptions of the probability of each event
This section provides a short overview of four different kinds of POD approaches: the nonparametric evaluation, approaches based on binary responses, the signal-response evaluation, and advanced approaches. Each approach builds on the theoretical knowledge of the previous ones.
Nonparametric POD Approach: 29 out of 29
The "29 out of 29" method of estimating the POD of a testing method is the first standard approach and is still being used in various industries. There are no requirements regarding the statistical distribution of the signal or the noise. The method uses the binary answer of the testing system (indication: hit; no indication: miss) and is based on a frequentist statistical view: no additional information (e.g., a priori knowledge, technical justification) is required, and hence the only source of knowledge is experimental data. For each type of defect, geometry, and position, several experimental data points are evaluated. The necessary amount of data depends on the desired probabilistic characteristic and its confidence interval. A typical characteristic, often used for safety-critical components, is a90/95 (detecting a defect with 90% probability and 95% confidence). Here "a" represents the defect size tested by the NDT method, which is at the same time the parameter affecting the criticality of the component. Example: in the eddy-current inspection of aerospace components, surface-breaking cracks might jeopardize the application of these components; in this case, "a" is the size (typically measured as length) of the surface-breaking crack. The aim of the evaluation is to verify size "a" for each defect or a bin of defect sizes. Due to the binary response of the NDT testing system, the evaluation of the detectability can be understood as a Bernoulli experiment with the two outcome possibilities "1" or "0," where
1 = defect found
0 = defect not found

In order to estimate the characteristic probability, for example, p = 90% (i.e., a POD of 90%), the binomial distribution is used for evaluating the likelihood of error:

$$F(x) = \sum_{k=0}^{n} \binom{n}{n-k}\, p^{\,n-k}\,(1-p)^{k} \tag{7}$$
where n is the number of tests and k is the number of misses during the NDT inspection. Using Eq. 7, it is possible to calculate the number of tests necessary to decide whether the testing system is able to detect a defect of size "a" with a probability of p = 90% and a likelihood of error of less than 5%. If 29 hits out of 29 tests are obtained, the likelihood of error is 0.047, i.e., less than 5%. Hence, the POD for this particular defect is 90% with 95% confidence. Since 29 is the smallest number of experiments that yields a90/95, the approach is simply called the 29-out-of-29 method. However, with at least 1 missed defect, a minimum of 46 tests is required to obtain a90/95; with 2 misses, at least 61 tests are required; and so on. This result is only valid for the specific defect type, defect size, and defect position. Therefore, the number of experiments has to be increased if more general information, such as a size-dependent POD, is required. When the first PODs used this approach, over 300 defects were tested in order to obtain an informative result. The benefit of this model is that it can be used when the relationship between the defect parameter and the NDT response is unknown. Given a lot of data, a nonparametric evaluation is powerful enough to be implemented successfully. The confidence computation behind these numbers is sketched below.
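The 29/46/61 figures can be checked with the cumulative binomial from Eq. 7. The following is a minimal sketch, assuming the standard convention that the demonstration succeeds when the observed outcome would occur with probability below 5% under a true POD of only 90%:

```python
# Minimal sketch: verify the 29-out-of-29 style sample sizes with Eq. 7.
# Convention assumed: a90/95 is demonstrated when the probability of seeing
# at most k misses in n tests, given a true POD of 0.90, is below 5%.
from math import comb

def likelihood_of_error(n: int, k: int, pod: float = 0.90) -> float:
    """P(at most k misses in n tests | true per-test POD = pod)."""
    q = 1.0 - pod  # per-test miss probability
    return sum(comb(n, j) * q**j * pod**(n - j) for j in range(k + 1))

for n, k in ((29, 0), (46, 1), (61, 2)):
    print(n, k, f"{likelihood_of_error(n, k):.4f}")
# 29 0 0.0471   46 1 0.0480   61 2 0.0492  -> all below 0.05
```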
HIT-MISS POD Approach
An approach that does not require as much data as the nonparametric approach is the HIT-MISS POD approach, first developed by Berens [19]. A parametric assumption, using a mathematical model, describes the scattering of the testing system and the relationship between defect parameter and detection. Due to this parametric model, which depends primarily on one defect parameter, the necessary amount of data can be reduced drastically: in the nonparametric case, at least 29 defects are needed for each size, whereas in the HIT-MISS POD approach, the total minimum number of required data points for "a" is 60 and more over the relevant span of defect sizes, according to MIL-HDBK 1823. For the HIT-MISS POD approach, the essential assumption is that larger defects are detected more easily. This requirement of the approach should be validated. Yet again, the focus is on the defect size "a."
This assumption can be described as shown below:

$$\mathrm{POD}(a_1) > \mathrm{POD}(a_2), \quad \text{if } a_1 > a_2 \tag{8}$$

Due to the nature of probabilities, a few assumptions narrow down the set of possible statistical models that can be used:

$$p(a) > 0 \tag{9}$$

and

$$p(a) < 1 \tag{10}$$
Typical mathematical distributions which can be used are the log-odds, logit, probit, and complementary log-log and log-log functions [20]. Example, the logit function:

$$\operatorname{logit}(p) = \log\left(\frac{p}{1-p}\right) \tag{11}$$
where p is the probability. For the graphical evaluation of the HIT-MISS POD approach, the data are plotted against the defect size "a." The HIT-MISS POD curve shown in Fig. 7 is a typical sigmoidal curve (S-curve). It is not easy to calculate: the generalized linear model for a logit transformation has to be approximated and cannot be calculated manually, so usually only numerical solutions are used. However, there are mathematical software packages which can calculate a solution for the HIT-MISS POD curve as well as its confidence band; a small sketch of such a fit is given after Fig. 7.
Fig. 7 HIT-MISS POD curve
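As an illustration of such a numerical solution, the following sketch fits a logit model to hypothetical hit/miss data by maximum likelihood, using only the standard library and a crude grid search. Real evaluations would use a proper optimizer and also compute the confidence band (e.g., per MIL-HDBK 1823), which is omitted here; the data values are invented:

```python
# Minimal sketch: maximum-likelihood fit of POD(a) = 1/(1+exp(-(b0+b1*ln a)))
# to hypothetical hit/miss data via a crude grid search (illustration only).
from math import exp, log

sizes = [0.2, 0.3, 0.5, 0.8, 1.0, 1.5, 2.0, 3.0]  # defect sizes a (mm), invented
hits  = [0,   0,   0,   1,   0,   1,   1,   1]     # 1 = hit, 0 = miss, invented

def neg_log_likelihood(b0: float, b1: float) -> float:
    nll = 0.0
    for a, y in zip(sizes, hits):
        p = 1.0 / (1.0 + exp(-(b0 + b1 * log(a))))
        p = min(max(p, 1e-12), 1 - 1e-12)  # guard against log(0)
        nll -= y * log(p) + (1 - y) * log(1 - p)
    return nll

best = min(((b0 / 10, b1 / 10) for b0 in range(-50, 51) for b1 in range(1, 101)),
           key=lambda b: neg_log_likelihood(*b))
print("fitted (b0, b1):", best)
```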
Despite the capability of HIT-MISS analyses to describe a large variety of nondestructive testing systems, their major disadvantage is that the POD analysis depends heavily on the mathematical model adopted. Especially in the case of large defects not being found, or only small defects being detected, the generalized linear models are not very productive. A 4-parametric HIT-MISS POD approach can then be an alternative technique providing reasonable solutions. This 4-parametric HIT-MISS POD approach includes a possible lower asymptote (α) and a possible upper asymptote (β) [21]:

$$\text{4-parametric logit:}\quad p = \alpha + (\beta - \alpha)\,\frac{\exp(b_0 + b_1 \log(a))}{1 + \exp(b_0 + b_1 \log(a))} \tag{12}$$
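A direct transcription of Eq. 12 into code is short; the parameter values below are invented purely to show the effect of the two asymptotes:

```python
# Minimal sketch of the 4-parametric logit POD model from Eq. 12.
from math import exp, log

def pod_4param(a: float, alpha: float, beta: float, b0: float, b1: float) -> float:
    """POD with lower asymptote alpha and upper asymptote beta."""
    z = exp(b0 + b1 * log(a))
    return alpha + (beta - alpha) * z / (1.0 + z)

# Invented parameters: 2% floor (indications on tiny defects never vanish),
# 98% ceiling (even large defects are occasionally missed).
for a in (0.1, 0.5, 1.0, 5.0):
    print(a, f"{pod_4param(a, alpha=0.02, beta=0.98, b0=0.5, b1=2.0):.3f}")
```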
Even though there are various model-based extensions such as MAPOD, these HIT-MISS POD curves and the nonparametric PODs are mainly based on empirical data obtained from experiments. The main advantage is that all NDT systems can be evaluated by a HIT-MISS POD evaluation, since they all ultimately provide binary responses, i.e., detection or non-detection of defects. HIT-MISS POD evaluations address the holistic NDT system; consequently, the human interaction with the NDT system and the human factors can also be evaluated using HIT-MISS as well as nonparametric POD evaluations. The validation of their outcomes and the evaluations are not easy to understand. Where additional information is available (e.g., the signal response of an NDT system), all available information should be part of a validation of an NDT system.
Signal Response POD Approach
The signal response POD, or "â (NDT signal response) vs a (defect size)" approach, mainly focuses on the physical capability of an NDT system. The evaluation of the physical capability of the NDT system is the first basic step in answering whether a specific kind of defect can be detected by the NDT technique at all. The physical description of the testing system is part of the intrinsic capability, and the evaluation of this technical behavior gives the best possible POD for a testing system. The "â vs a" approach estimates the relationship between the testing equipment and its capability to show an indication in the presence of a defect. On the one hand, this method is very useful for deciding the suitability of an NDT method for a particular task; on the other hand, knowledge about the physical behavior of a system can help to support the POD. A signal above a defined decision threshold âdec is defined as a HIT. This decision threshold is part of the testing procedure. Defined by the noise of the system or by the relevance of a specific size, it is set by experts (e.g., a level 3 operator according to ISO 9712). In the reliability evaluation, it is the input information with a major influence on the POD characteristics. In the signal-response-based POD evaluation, a HIT (defect detected) is assigned to a defect of a certain size whose signal amplitude
is higher than the decision threshold, whereas a MISS (defect not detected) is assigned to a defect whose signal amplitude is below the decision threshold. At this point, the continuous signal height is transformed into a binary response. Therefore, in comparison to the HIT-MISS POD approach, a signal response POD includes more information due to the continuous signal. With this additional information, a smaller amount of data, i.e., at least 40 defect sizes according to MIL-HDBK-1823, may suffice to describe the system in comparison to the HIT-MISS POD evaluation. Berens [19] developed the first signal response POD approach. Mathematically, the POD is defined as the probability of a signal lying above the decision threshold â_dec, described by a distribution g_a(â) that depends on the size of the defect a, as shown in Eq. 13:

$$\mathrm{POD}(a) = \int_{\hat{a}_{\mathrm{dec}}}^{\infty} g_a(\hat{a})\, d\hat{a} \qquad (13)$$
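A minimal numerical sketch of Eq. 13 follows, assuming, purely for illustration, a normally distributed signal response around a linear trend; all numbers are invented.

```python
import numpy as np
from scipy.stats import norm

# Assumed signal model: for defect size a, the response â is normally
# distributed around a linear trend (all values illustrative)
beta0, beta1, sigma = 0.1, 2.0, 0.4   # â = beta0 + beta1*a + noise
a_dec = 1.0                            # decision threshold â_dec

def pod(a):
    # Eq. 13: probability mass of g_a(â) above the threshold â_dec
    return norm.sf(a_dec, loc=beta0 + beta1 * a, scale=sigma)

for size in (0.2, 0.45, 0.7, 1.0):
    print(f"a = {size:.2f}: POD = {pod(size):.3f}")
```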
Relationship Between Signal and Defect Parameter
According to the Berens signal response POD approach, the relationship between the defect parameter (crack depth) and the signal (eddy current testing) can be described empirically by a linear relationship using logarithmic transformations. This approach was later adopted in several standards [22, 23]. In both references, a linear relationship is accepted with and without logarithmic transformation. Therefore, to describe the linearity between the testing signal and the defect parameter, the following relationships are accepted:

• Linear (signal) – linear (defect parameter) relationship
• Logarithmic (signal) – linear (defect parameter) relationship
• Linear (signal) – logarithmic (defect parameter) relationship
• Logarithmic (signal) – logarithmic (defect parameter) relationship
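A small sketch of how these four transformation options might be compared on hypothetical â vs a data follows. It uses the coefficient of determination as a rough linearity check; this selection criterion is our assumption for illustration, and the final choice should also rest on the physical understanding of the method.

```python
import numpy as np

# Hypothetical â vs a data (e.g., amplitudes in % screen height, sizes in mm)
a = np.array([0.3, 0.5, 0.8, 1.0, 1.3, 1.7, 2.2, 2.8, 3.5])
ahat = np.array([12., 18., 27., 33., 40., 49., 60., 72., 88.])

transforms = {
    "linear-linear": (ahat, a),
    "log-linear":    (np.log(ahat), a),
    "linear-log":    (ahat, np.log(a)),
    "log-log":       (np.log(ahat), np.log(a)),
}

for name, (y, x) in transforms.items():
    slope, intercept = np.polyfit(x, y, 1)     # simple least-squares line
    r = np.corrcoef(x, y)[0, 1]
    print(f"{name:13s}: slope = {slope:6.3f}, R^2 = {r**2:.4f}")
```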
Depending on the relationship between â and a, it is possible to describe the detection capabilities not only at a position a but also over an interval of â, as long as the relationship between â and a can be described mathematically. In later references, further options were developed; one example is the multiparameter approach. Linearity is a useful assumption for processing signal response NDT data; however, it has to be used with caution. There are various justifications for linear models in processing signal response data, ranging from approximation theory (curves can be approximated linearly over small intervals) to the physical understanding of NDT methods that deliver a replica of the real defect as an indication. It is essential to check these models and to include well-founded knowledge of the testing system as much as possible, to avoid missing information or adopting a model whose deviation is larger than the effect of linearity.
Challenging the reliability models is an essential step in estimating the capability of the NDT system and a major reason why the evaluation should be placed in the hands of an interdisciplinary expert commission.
Statistical Requirements About the Scatter of Data
Using the relationship between a and â, a decrease in the necessary amount of data is possible, as long as all additional requirements are met. Even though it is theoretically possible to change the distribution for each a, as mentioned in a reference above, the type and the form of the distribution are fixed. Therefore, three major requirements for the POD are:

Type of distribution: Many POD approaches use the normal distribution to describe the scattering of the data points around the linear relationship. Note that if a logarithmic transformation is performed, this becomes a log-normal distribution. Figure 8 shows the relationship between the function of â and the function of a, illustrating that the function can be either a normal or a log-normal distribution. In general, the chosen normal distribution is used for the complete interval of the POD evaluation. The normal distribution is often used because it has only a few parameters (μ (mean), σ (standard deviation)) and because many distributions can be approximated by a normal distribution for large amounts of data (central limit theorem). In addition, the ordinary least squares (OLS) method, which is often used to fit the linear regression between â and a, assumes normally distributed errors as well.

Homoscedasticity: An additional requirement originating in OLS is homoscedasticity, which means that the scatter for small defects is as large as for large defects. In the present case, the normal distribution has the same σ over the evaluated interval of a.

Independence: Another basic requirement for the POD evaluation is the independence of the data points, i.e., all data points are serially uncorrelated. Testing this requirement is an essential part of the design of experiments and is assessed during the evaluation process.

Fig. 8 Relationship and scattering of the data values
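These three requirements can be checked with standard regression diagnostics. The following sketch, on hypothetical data, uses tests that are common choices for this purpose; the specific tests are our assumption, not prescribed by the POD standards.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson

# Hypothetical â vs a data after the chosen (here: log-log) transformation
x = np.log(np.array([0.3, 0.5, 0.8, 1.0, 1.3, 1.7, 2.2, 2.8, 3.5]))
y = np.log(np.array([12., 18., 27., 33., 40., 49., 60., 72., 88.]))

X = sm.add_constant(x)
res = sm.OLS(y, X).fit()
e = res.resid

# 1) Type of distribution: Shapiro-Wilk test for normality of residuals
_, p_sw = stats.shapiro(e)
print("Shapiro-Wilk p  =", round(p_sw, 3))
# 2) Homoscedasticity: Breusch-Pagan test (H0: constant variance)
print("Breusch-Pagan p =", round(het_breuschpagan(e, X)[1], 3))
# 3) Independence: Durbin-Watson statistic (~2 means no serial correlation)
print("Durbin-Watson   =", round(durbin_watson(e), 2))
```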
If one of these requirements cannot be fulfilled, the above-mentioned conventional POD approach should not be used; however, advanced methods can provide reasonable solutions in such cases.

Signal Response POD Curve
Based on the above-mentioned requirements, the evaluation of the linear regression curve, and the introduction of the threshold, the POD can be calculated as a function of a. In general, a linear-linear relationship is assumed for simplicity (Fig. 9). The mathematical basis of the POD for the â versus a relationship is

$$\hat{a} = \beta_0 + \beta_1 a + \varepsilon \qquad (14)$$

where β_0 and β_1 are the regression parameters and ε is the error term. The error scatters around the linear relationship according to

$$\varepsilon \sim N(0, \sigma_\varepsilon) \qquad (15)$$

and the POD follows as

$$\mathrm{POD}(a) = 1 - F\!\left(\frac{\hat{a}_{\mathrm{dec}} - (\beta_0 + \beta_1 a)}{\sigma_\varepsilon}\right) \qquad (16)$$

where F is the standard normal cumulative distribution function. To obtain the parameters of the cumulative normal distribution, a transformation is commonly required:

$$\mu = \frac{\hat{a}_{\mathrm{dec}} - \beta_0}{\beta_1} \qquad (17)$$

and

$$\sigma = \frac{\sigma_\varepsilon}{\beta_1} \qquad (18)$$
With these previous steps, the POD curve can be evaluated.
Fig. 9 â vs a-relationship and POD
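A compact sketch of this evaluation chain (Eqs. 14-18) on hypothetical â vs a data follows; the threshold and all measurement values are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical â vs a data (linear-linear case for simplicity)
a = np.array([0.3, 0.5, 0.8, 1.0, 1.3, 1.7, 2.2, 2.8, 3.5])
ahat = np.array([0.9, 1.3, 2.0, 2.1, 2.9, 3.6, 4.5, 5.8, 7.0])
a_dec = 2.5                              # decision threshold â_dec

# Eq. 14: OLS fit â = beta0 + beta1*a + eps
beta1, beta0 = np.polyfit(a, ahat, 1)
resid = ahat - (beta0 + beta1 * a)
sigma_eps = resid.std(ddof=2)            # Eq. 15: eps ~ N(0, sigma_eps)

# Eqs. 17-18: parameters of the cumulative normal POD model
mu = (a_dec - beta0) / beta1
sigma = sigma_eps / beta1

# Eq. 16 is equivalent to POD(a) = F((a - mu)/sigma)
a_grid = np.linspace(0.1, 4.0, 200)
pod = norm.cdf(a_grid, loc=mu, scale=sigma)
print(f"mu = {mu:.2f} mm, sigma = {sigma:.2f} mm, "
      f"a90 = {mu + 1.282 * sigma:.2f} mm")
```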
Number of Data Points
Despite the decrease in the number of data points from the original 328 in Rummel's nonparametric POD [18] to roughly 40 in the signal response POD of Berens [19], the number of data points remains an essential part of the evaluation. Due to the statistical requirements of the regression evaluation, the amount of data builds the confidence in the evaluation itself. For a meaningful evaluation of the capabilities based on experimental data, the confidence band is an essential part of the evaluation. Figure 10 shows the POD evaluation and the significant influence of the amount of data on the confidence bound. In the world of NDE 4.0, data useful for validation come from various sources, such as simulations, feed-forward loops, and big data collection, as discussed in section "Introduction." In this theoretical model, the resulting POD curve and the parameters of the cumulative normal distribution are the same; however, the numbers of experiments used to estimate the curve differ. The confidence band differs significantly between a small amount of data (e.g., n = 5) and a large amount of data (n > 100). The confidence band thus gives an indication of the adequacy of the amount of data: if the confidence band is too broad, the statistical model should be questioned or the number of experiments should be raised. For a large amount of data, the number of data points no longer limits the uncertainty of the model; it is then the quality of the data that describes the testing situation. Therefore, alternative statistical descriptions of uncertainty should be considered, such as predictive and tolerance bands [24] or degrees of belief.
Fig. 10 POD with confidence band
POD evaluations based on simulations should especially focus on these kinds of uncertainties: by nature they collect a large amount of data, while differing significantly from experiments due to the mathematical approximations within the simulation approach. Another important attribute of the data used in the evaluation is the signal size in relation to the probability of detection. The Berens paper [19] also addresses, beyond ordinary least squares (OLS), additional data specified as censored data. Censored data are signal response data whose amplitudes lie outside the measurable range, i.e., either below the noise level or above the saturation threshold, where the signal can no longer increase in amplitude despite an increase in defect size. Both data pools can influence the result of an OLS fit, which is why the more complex maximum likelihood estimation (MLE) is generally used. Although OLS is one kind of MLE, the more general MLE can also handle censored data. Censored data are obviously less valuable than uncensored data. Therefore, the established requirement is that the main amount of data should lie between 5% and 95% of the POD evaluation [25]. Figure 11 shows the graphical representation of this requirement. One of the main tasks in the evaluation of testing systems is the accurate planning and structural design of experiments (DoE). The DoE builds the base of every statistical evaluation: here, the influences of different parameters are estimated and their role in the experiments is defined.

Fig. 11 Value of existing data
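A sketch of such a censored-data MLE follows, treating exact, left-censored (below noise), and right-censored (saturated) observations separately in the likelihood; the data, the censoring flags, and the starting values are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical data: â values left-censored at the noise level and
# right-censored at saturation (flags: -1 below noise, +1 saturated, 0 exact)
a    = np.array([0.3, 0.5, 0.8, 1.0, 1.4, 1.9, 2.5, 3.2, 4.0])
ahat = np.array([0.5, 0.5, 1.1, 1.5, 2.0, 2.8, 3.6, 5.0, 5.0])
cens = np.array([-1,  -1,   0,   0,   0,   0,   0,   0,   1])

def neg_log_lik(theta):
    b0, b1, log_s = theta
    s = np.exp(log_s)                  # keep sigma positive
    mu = b0 + b1 * a
    ll = np.where(cens == 0, norm.logpdf(ahat, mu, s), 0.0)
    ll += np.where(cens == -1, norm.logcdf(ahat, mu, s), 0.0)  # P(â <= noise)
    ll += np.where(cens == 1, norm.logsf(ahat, mu, s), 0.0)    # P(â >= saturation)
    return -ll.sum()

fit = minimize(neg_log_lik, x0=np.array([0.0, 1.0, 0.0]), method="Nelder-Mead")
b0, b1, sigma = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(f"beta0 = {b0:.2f}, beta1 = {b1:.2f}, sigma = {sigma:.2f}")
```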
Advanced POD Approaches
There are various reasons for conducting capability and reliability evaluations of NDT techniques. Some of them are listed below:

• Evaluation of structural integrity [26]
• Identification of influencing factors [27]
• Optimization of component designs [28, 29]
• Comparison of different testing methods [30]
• Optimization of the testing procedure [27]
• Verification of the capability of new testing procedures [5]
• Determination of testing intervals [31]
• Avoidance of operational risks

Each field of application and purpose can lead to different requirements for the POD evaluations. Thus, the design of experiments, the amount of experimental data, and the additional knowledge included in the evaluation differ, and hence the results also vary. Due to the various requirements and uses of the POD in different fields, numerous directions of POD evaluation were developed in the last 50 years. The developments concerned the statistical requirements (scatter distribution, decreasing amount of data) as well as physical models to describe the relationship between detectability and NDT physics (multiparametric approach, data field POD). Figure 12 shows an overview of some of the most popular approaches. These also include the AFNDI-sponsored work carried out by Karta Technologies to quantify human factors using DoE [32]. In the last two decades, further approaches have been developed to address the complexity of the testing systems.
Influence on the Reliability of the Nondestructive Testing System
As mentioned earlier, the POD evaluation is an adequate tool to evaluate the intrinsic capabilities of the testing system. However, a testing system is always part of a larger system (Fig. 13). Beyond its intrinsic capability, each testing system is embedded in an environment determined by the design of the component (as discussed earlier in the section "Probabilistic Nature of Failure"). Nowadays, the testing system works in service alongside other testing methods, such as condition monitoring (CM) or structural health monitoring (SHM) systems, according to maintenance plans based on standards and guidelines.
Fig. 12 Reliability toolbox
Fig. 13 Environment of the reliability evaluation of testing systems
Internal stakeholders are found in every company department (Engineering, Production, and Service), and external stakeholders become visible in cases of large accidents. In this chapter, the evaluation focuses on the reliability itself. Reliability is therefore not limited to the intrinsic capability of the testing system but also covers the holistic testing environment. This fact is captured by the modular model. The modular model was developed in the series of "European-American Workshops on Reliability of NDE." Out of technical necessity, the initial focus was on the intrinsic capabilities. After a while, however, the focus shifted toward application factors, which describe the difference between an optimal testing environment, e.g., in laboratories, and field conditions. Nowadays, the operator plays an essential role in the testing. For human operators and humans who are part of the testing process, there are a number of influences decreasing and increasing the quality of the testing systems [33], e.g., stress, motivation, and time pressure. All these factors are summed up as human factors. As already mentioned, the NDT system is not isolated from other departments within the company. As a result, the organizational factors also represent an important part of the modular model [34]. To visualize the organizational factors, consider the connection between the purchasing department and the NDT department: each new system needs to fulfill the technical needs of the NDT department in addition to the economic and legal requirements of the purchasing department. All these factors can influence the real daily routine. Figure 14 shows the modular model with the various factors affecting the POD of NDT systems. From the reliability evaluation perspective, each of these factors decreases the optimal performance described by the POD for the intrinsic parameters. This correlation was first defined in the work of Wall [35]. With further extensions of the modular model,
Fig. 14 Modular model [14]
Fig. 15 Influence of the different parameters of the modular model (some factors can go either way)
this correlation can also be widened, as shown in Fig. 15. Even though Fig. 15 shows the decrease in POD curves for the different factors, it has to be noted that the relative influence of the human factors, organizational factors, and application factors can go either way.
Reliability Evaluation of NDE 4.0

Introduction
Even though the first evaluation of NDT reliability was done five decades ago, the development of NDE 4.0 is still a ground-breaking event with regard to the evaluation of the reliability of testing systems. The "Industrial Internet of Things" (IIoT) and the idea of "feedback loops" within industrial production require connected testing systems that deliver reliable inspection results. Missing information about the reliability and the informative value of the testing results can lead to the extinction of testing methods in production and maintenance. On the other hand, the chance to optimize design, production, and service is the unique opportunity of NDE 4.0. Nondestructive evaluation data is already seen as a treasure of Industry 4.0. Yet the role of the different parts of the modular model may change, along with their significance. The framework of NDE 4.0 has the potential to overcome the intrinsic capabilities of NDT equipment (see Fig. 16: NDE 4.0 potential, previously published by Vrana and Singh [36]), as mentioned in section "Capability and Reliability of NDT." To explain this provocative statement, it is necessary to evaluate the potential of the modern means. The first stage, improving the POD from system reliability to the intrinsic physical capability, can be achieved by eliminating the factors responsible for decreasing the intrinsic physical capability of the system: the organizational factors, the application factors, and the human factors. This can be done by individually addressing each of the factors using the aforementioned Industry 4.0 technologies. For example, automation and robotics play a major role in reducing the human factors in certain aspects, taking system reliability closer to the intrinsic physical capability of the test system. The main advantage of the new technologies is that they combine the strengths of technology with the individuality of humans. From the technology point of view, performance can improve further as robotics and automation develop, e.g., in reaching critical locations or in providing proper training for the interpretation of complex NDT data.

Fig. 16 NDE 4.0 potential
In those cases, algorithms play a major role in overcoming the important shortfalls. Hence, it is believed that the proper application of advanced technologies such as robotics, automation, and algorithms will aid in bringing the system reliability closer to the intrinsic physical capability of the NDT system. The second stage, achieving a POD beyond the intrinsic physical capability of the NDT system and closer to the idealized POD curve, can be made possible by incorporating the benefits of the digitalization of NDE systems. The major advantage of one of the digitalization concepts, the digital thread, is access to the data of a product or component from the planning and design phase through production and service to the end stage of failure. In addition, data fusion techniques can provide multiple advantages. These concepts and technologies can produce a POD that is much more reliable than the intrinsic capability of a single NDT system. For example, different NDT inspections, such as visual testing, penetrant testing, eddy current testing, and ultrasonic testing, are carried out at various stages or at the end of the manufacturing process of a component. This enables obtaining NDE data of the component for both surface and subsurface (volumetric) defects, and when these data are combined using data fusion techniques, the overall POD is more reliable than the POD of the individual NDT techniques. This combination of data originating from NDT methods, combined with the predictive digital twin, can be visualized as a sixth sense of NDE 4.0 [37]. The concept of digitalization not only provides the NDE data from different inspections at different stages of a component or product manufacturing but also provides extremely valuable data that are usually unconnected to the NDE inspection systems. For example, NDT inspection data can be obtained from all stages of service and maintenance schedules, which usually lack information on material or microstructural degradation. This degradation of the material leads to the degradation of material properties such as conductivity, which is the main criterion for eddy current inspection procedures. If information on the material history or degradation state is known in advance, the negative effects of not being aware of it, such as unexplained changes or variations in the NDT signal response data, can be avoided. Hence, if this kind of information is properly incorporated into the NDE system data, the POD can be increased beyond the intrinsic physical capability of the system. Finally, a POD curve that crosses beyond the intrinsic physical capability of the NDT system is no longer the POD curve of a particular NDT inspection technique. Instead, it is the POD, or the reliability, of the complete test system in Industry 4.0 in connection with NDE 4.0.
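As a minimal illustration of the data fusion argument, the following sketch combines two hypothetical POD curves under the simplifying assumption that the inspections miss defects independently; a real fused POD would require a validated correlation model between the methods.

```python
import numpy as np
from scipy.stats import norm

# Cumulative log-normal POD models for two hypothetical, independent
# inspections of the same component (all parameters illustrative)
def pod_lognormal(a, mu, sigma):
    return norm.cdf(np.log(a), loc=mu, scale=sigma)

a = np.linspace(0.1, 5.0, 200)
pod_ut = pod_lognormal(a, mu=np.log(1.2), sigma=0.5)  # e.g., ultrasonic testing
pod_pt = pod_lognormal(a, mu=np.log(0.8), sigma=0.7)  # e.g., penetrant testing

# Under independence, a defect is missed only if every method misses it,
# so the fused POD is 1 - prod(1 - POD_i), which is never below either curve
pod_fused = 1.0 - (1.0 - pod_ut) * (1.0 - pod_pt)
```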
Modular Model for NDE 4.0
The inclusion of the NDE 4.0 methods also changes the influences on the NDE process. Therefore, it is necessary to extend the modular model.
Fig. 17 Modular model for NDE 4.0
Figure 17 shows the modified modular model for NDE 4.0, which includes algorithm-based factors along with the previously mentioned factors of the conventional modular model. Interface issues, evaluation programs, automated algorithms, user training data, and specific AI networks are part of the modern modular model. The validation and the influences of algorithms will keep the NDT community busy for years to come. The characteristic that distinguishes the various algorithms is the POD; therefore, a new age for the POD is expected. In addition, the importance of the "old" factors remains obvious, and there is a balance between the parameters. The system will not work properly if only one influence is taken into consideration. Consequently, the modern focus on algorithms will also fail if the factors relevant to the environment, the operators, or even the physical background are ignored. This is why further discussion across the areas of influence will have a major impact in the near future.
The Probabilistic Framework
With NDE 4.0, the intrinsic capabilities will therefore also be handled differently in the future. The simplified deterministic view of NDT does not reflect the true situation when compared with the modern damage-tolerant framework.
Fig. 18 Probability of defect existence (a) without NDT (red) and (b) with NDT (green)
The crack length is probabilistic in nature: stages such as crack initiation, crack growth, and sudden failure cannot be determined deterministically. In addition, the NDT system is also probabilistic, since it may miss even larger defects while being capable of detecting smaller ones. Moreover, the usage of the part is probabilistic as well, because the loading on the part, the possible defect geometries, and the types of defects are not known in advance. Even in the probabilistic framework, the probability of defect occurrence can be estimated [29]. Without any testing, the probability of occurrence remains unchanged (red graph in Fig. 18). However, the probability of occurrence (POO) changes to the green curve when a testing procedure with an estimated POD is carried out. It is apparent that the curve does not change for very small defects, as they cannot be observed by the testing systems. For larger defects, however, the chance of being missed is greatly decreased by testing. For each defect size (with a defined probability), Fig. 18a can be used for the calculation. This does not result in a single curve but in a large number of curves for different sizes, with different probabilities and defect attributes (like crack growth), which forms an excellent database for digital twin technology. This model can also be viewed in a different way. For example, if each NDT system collects the data of every single indication found, then, with knowledge of the POD, the unknown probability of defect occurrence can be
defined. This knowledge builds the foundation for probabilistic lifing and creates an opportunity for the design departments. In this case, there is an interconnected feedback loop between the service and the design sectors, which would not be possible without knowledge of the reliability of the testing systems. Another point to mention within the probabilistic framework is predictive maintenance. Predictive maintenance aims to identify testing intervals and to extend the usable life of the asset. The interval between tests might change over time for two basic reasons. The first reason is the varying usage of an asset, which can lead to different defect growth. An example is a military helicopter fleet, in which almost one-third of the aircraft are regularly under maintenance; maintenance and testing are therefore a major expense. A military helicopter is used for transport, for training, and for combat, and these three completely different usages with different loads lead to different requirements for maintenance and testing. The second reason for changes in the testing intervals is the load, which can vary the potentially critical defect size. Under the idea of predictive maintenance, it can be observed that crack growth varies drastically with the type and amount of loading; therefore, a deterministic view of crack growth cannot be used. In addition, variable testing intervals depend on knowledge of the capability of the testing system. The simplified model holds only for a POD with 100% detectability, in which case the testing intervals depend only on the load in the presence of a crack. In reality, however, the POD will never be 100%; therefore, the POD is an essential parameter for predictive testing and maintenance planning. These ideas of the probabilistic framework are not new, but in the fourth revolution of NDE they play a larger role than ever before. The Industrial Internet of Things (IIoT) and global digitalization will create new interfaces between different companies. This optimizes the probabilistic framework to lower testing and maintenance costs, increase the availability of assets, and increase the degree of product reliability.
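A minimal sketch of the update from the red to the green curve in Fig. 18 follows, assuming, for illustration only, an exponential prior size distribution and a cumulative log-normal POD.

```python
import numpy as np
from scipy.stats import expon, norm
from scipy.integrate import trapezoid

a = np.linspace(0.01, 5.0, 500)

# Assumed prior probability of occurrence (red curve): an exponential
# size distribution, normalized over the evaluated interval
poo_prior = expon.pdf(a, scale=1.0)
poo_prior /= trapezoid(poo_prior, a)

# Assumed POD of the inspection (cumulative log-normal model)
pod = norm.cdf(np.log(a), loc=np.log(1.0), scale=0.5)

# After inspection, a defect of size a remains undetected with probability
# 1 - POD(a), giving the green curve; the area shrinks (not renormalized)
# because most large defects are found and removed
poo_post = poo_prior * (1.0 - pod)
print("Remaining defect probability mass:", round(trapezoid(poo_post, a), 3))
```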
Issue of Scarce Events
The probabilistic issue of scarce events is another major challenge for the modern production and maintenance environment. Production quality has increased tremendously in the last few years: despite the rise in testing capabilities, the number of detected defects has decreased over the last decades. In addition, new production methods such as additive manufacturing require validation approaches for very small batch sizes. The original evaluation of NDT reliability cannot give an answer at this point; the historical data on the quality of the production system are too sparse. Moreover, the number of parts that would have to be produced to generate enough data for the POD curves would burst the budget of the component. Here, the issue of scarce events becomes obvious: confidence in these parts cannot be obtained in the usual way. Therefore, the mathematical context needs to be changed from relying on a large number of
experiments (frequentist view) to using all kinds of available information in a sophisticated, professional way (e.g., the Bayesian viewpoint). The complexity will increase immensely from the mathematical point of view; evaluating PODs in this context without external help from statisticians and reliability experts might be almost impossible. However, there are already approaches [38] in the community to solve the issue of scarce events, and there is a need for experts who work in an interdisciplinary manner.
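A minimal sketch of such a Bayesian evaluation for a single defect-size class follows, using a conjugate Beta prior. The counts and the prior are invented for illustration; in practice, the choice of prior (e.g., informed by simulations or transfer from a similar inspection) would have to be justified and documented by the interdisciplinary team.

```python
from scipy.stats import beta

# Hypothetical scarce-event situation: only 8 representative defects of a
# given size class could be produced, and all 8 were detected
hits, trials = 8, 8

# Informative Beta prior encoding existing knowledge: prior mean 0.8,
# with a modest effective weight of 10 pseudo-observations
prior_a, prior_b = 8.0, 2.0

# Conjugate update: posterior is Beta(prior_a + hits, prior_b + misses)
post = beta(prior_a + hits, prior_b + (trials - hits))

print(f"Posterior mean POD: {post.mean():.3f}")
print(f"95% credible interval: {post.ppf(0.025):.3f} - {post.ppf(0.975):.3f}")
# Note: a purely frequentist 8/8 result cannot demonstrate 90% POD at 95%
# confidence (that would require 29/29); the prior supplies the missing
# information, which is exactly why its justification matters
```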
Summary
The fourth revolution of industry and of nondestructive testing is ground-breaking. The potential and the threats due to these changes may be even greater than during the last three revolutions. Nevertheless, just as each previous revolution demanded change, this one also invites us to rethink the processes, the products, and their attributes. NDT will be a major treasure directing the production of tomorrow. Therefore, it is necessary to know what we identify in the process and which information we can trust. The reliability evaluation 4.0, or the modular model 4.0 that also accounts for algorithms, can build up this trust, and it is the only possible way to enable the fourth generation of industrial environments to its full extent. This chapter gives an overview of the work that has been done in the field of reliability evaluations. These are the basics on which modern evaluation will be grounded. In addition, this chapter gives the reader an overview of the potentials and of the way to include this powerful evaluation tool in their industrial context. This chapter is not a recipe; instead, it should motivate a discussion on modern POD evaluation. It can also aid in thinking about how the Industry 4.0 technologies may help in filling the gaps for a better NDE 4.0 reliability. The complexity of the physical behavior of testing evaluations, the variety of testing objects, and the interaction with other testing systems – structural health monitoring, condition monitoring systems, or other testing methods – create a plurality of testing environments, which should also be reflected in the evaluation. This should be noted for every offered holistic evaluation of a testing system: not only by human and artificial intelligence but also through the capable use of modeling and simulation software. Only the understanding and the knowledge of technical processes can lead to confidence in the technical systems to which we entrust our lives day after day.
Cross-References ▶ Applied Artificial Intelligence in NDE ▶ Best Practices for NDE 4.0 Adoption ▶ Industrial Internet of Things, Digital Twins, and Cyber-Physical Loops for NDE 4.0 ▶ Introduction to NDE 4.0 ▶ NDE 4.0: New Paradigm for the NDE Inspection Personnel
References

1. Segan S. Teardown reveals iPhone X parts cost $370. PCMag. 2017. https://uk.pcmag.com/apple-iphone-2/91910/teardown-reveals-iphone-x-parts-cost-370. Accessed 29 Dec 2020.
2. Scott O. Over 13 million vehicles recalled year-to-date globally. FinBold. 2020. https://finbold.com/over-13-million-vehicles-recalled-year-to-date-globally/. Accessed 29 Dec 2020.
3. Reuters. Boeing posts surprise loss, 737 MAX costs climb to $19 billion. Voanews. 2020. https://www.voanews.com/economy-business/boeing-posts-surprise-loss-737-max-costs-climb-19-billion. Accessed 29 Dec 2020.
4. FAA, DOT/FAA/AR-01/96. A methodology for the assessment of the capability of inspection systems for detection of subsurface flaws in aircraft turbine engine components. 2002.
5. NASA, STD-5009. Nondestructive evaluation requirements for fracture critical metallic components. Washington, DC: National Aeronautics and Space Administration; 2009.
6. Førli O, Ronold K. Guidelines for NDE reliability determination and description. Nordtest NT TECHN report. 1998.
7. Hovey PW, Sproat WH, Schattle P. The test plan for the next AF NDI capability and reliability assessment program. Rev Prog Quant NDE. 1989;8B:2213–20.
8. N.N. Italy's former rail boss sentenced to jail over disaster that killed 29. The Local.it. 2017. https://www.thelocal.it/20170131/italys-former-railway-boss-sentenced-to-jail-over-disaster-that-killed-29. Accessed 29 Dec 2020.
9. Bauer P. United Airlines Flight 232 aviation disaster, Sioux City, Iowa, United States [1989]. Britannica. 2017. https://www.britannica.com/event/United-Airlines-Flight-232. Accessed 29 Dec 2020.
10. Vrana J, Zimmer A, Lohmann H-P, Heinrich W. Evolution of the ultrasonic inspection over the past decades on the example of heavy rotor forgings. 19th world conference on non-destructive testing. 2016.
11. Vosk T, Emery A. Forensic metrology – scientific measurement and inference for lawyers, judges and criminalists. CRC Press; 2015.
12. Grandt A. Fundamentals of structural integrity: damage tolerant design and nondestructive evaluation. New Delhi: Wiley; 2014.
13. Matzkanin G, Yolken T. Probability of detection (POD) for nondestructive evaluation (NDE). Austin: Nondestructive Testing Information Analysis Center; 2001.
14. Mueller C, Bertovic M, Kanzler D, Ronneteg U. Conclusions of the 6th European American workshop on reliability of NDE. AIP conference proceedings 1706:020006. 2016. https://doi.org/10.1063/1.4940452.
15. ASTM E3023-15. Standard practice for probability of detection analysis for â versus a data. West Conshohocken: ASTM International; 2015.
16. Wald A. Statistical decision functions. New York: Wiley; 1950.
17. Rentala VK, Mylavarapu P, Gautam JP. Issues in estimating probability of detection of NDT techniques – a model assisted approach. Ultrasonics. 2018;87:59–70. https://doi.org/10.1016/j.ultras.2018.02.012.
18. Rummel W, Todd P, Frecska S, Rathke R. The detection of fatigue cracks by nondestructive testing methods. Spring conference, American Society for Nondestructive Testing, Los Angeles, March 1972.
19. Berens A. NDE reliability data analysis. In: ASM handbook. ASM International; 1989. p. 689–701.
20. ASTM E2862-18. Standard practice for probability of detection analysis for hit/miss data. West Conshohocken: ASTM International; 2018.
21. Knopp J, Grandhi R, Zeng L, Aldrin J. Considerations for statistical analysis of nondestructive evaluation data: hit/miss analysis. E-J Adv Maint Jpn Soc Maintenol. 2012;4(3):105–15.
22. Department of Defense. MIL-HDBK-1823A, Nondestructive evaluation system reliability assessment. Handbook. 2009.
23. Gandossi L, Annis Ch. Probability of detection curves: statistical best-practices. ENIQ report No 41. ENIQ 24429 EN. 2010.
24. Li M, Spencer F, Meeker W. Quantile probability of detection: distinguishing between uncertainty and variability in nondestructive testing. Mater Eval. 2015;73:1.
25. Safizadeh MS, Forsyth DS, Fahr A. The effect of flaw size distribution on the estimation of POD. Insight. 2004;46(6):355.
26. Burkel R, Chiou P, Keyes K, Meeker W, Rose J. A methodology for the assessment of the capability of inspection systems for detection of subsurface flaws in aircraft turbine engine components. General Electric Co, Cincinnati. 2002.
27. Kanzler D, Müller C, Pitkänen J. Probability of defect detection of Posiva's electron beam weld. POSIVA-WR-13-70. 2013.
28. Pavlovic M, Takahashi K, Müller Ch, Boehm R, Ronneteg U. NDT reliability – final report: reliability in non-destructive testing (NDT) of the canister components. SKB R-08-129. 2008.
29. Vrana J, Kai K, Christian A. Smart data analysis of the results of ultrasonic inspections for probabilistic fracture mechanics. 43rd MPA-seminar, October 2017, Stuttgart. https://www.researchgate.net/publication/327579067. Last accessed 19 Apr 2021.
30. Gaal M. Trial design for testing and evaluation in humanitarian mine clearance. BTU Cottbus. 2007.
31. Zoëga A, Kurz J, Oelschlägel T, Rohrschneider A, Müller C, Pavlovic M, Hintze H, Kanzler D. Investigations to introduce the probability of detection method for ultrasonic inspection of hollow axles at Deutsche Bahn. 19th world conference on non-destructive testing. 2016.
32. Singh R. Three decades of reliability assessment. Karta-3510-99-01. Report. San Antonio: Karta; 2000.
33. Bertovic M. A human factors perspective on the use of automated aids in the evaluation of NDT data. In: AIP conference proceedings. AIP Publishing; 2016. p. 1706.
34. Holstein R, Bertovic M, Kanzler D, Müller C. NDT reliability in the organizational context of service inspection companies. Mater Test. 2014;56(7–8):607–10.
35. Wall M. Evaluating POD in real situations and the 'delta' factor. 5th European-American workshop on reliability of NDE. 2013.
36. Vrana J. NDE 4.0 – a design thinking perspective. J Nondestruct Eval. 2021;40:8.
37. Singh R, Vrana J. The NDE 4.0 – an ecosystem perspective. International virtual conference on NDE 4.0, presentation, April 2021. https://www.researchgate.net/publication/350877399. Last accessed 19 Apr 2021.
38. Kanzler D, Müller C, Pitkänen J, Ewert U. Bayesian approach for the evaluation of the reliability of non-destructive testing methods: combination of data from artificial and real defects. 18th world conference on nondestructive testing, Durban, South Africa, 16–20 April 2012.
10 NDE 4.0: New Paradigm for the NDE Inspection Personnel

Marija Bertovic and Iikka Virkkunen
Contents
Introduction 240
Defining the New NDE 4.0 Paradigm 242
The NDE Paradigm Change 242
The Changing Role of Human Inspectors in NDE 4.0 252
Addressing the Challenges of the New Paradigm 259
Summary 263
Cross-References 264
References 264
Abstract
Nondestructive evaluation (NDE) is entering the era of the fourth industrial revolution and will undergo a major transformation. NDE is a vital part of industry, and a successful move to NDE 4.0 will require not just developing and embracing new technologies but also developing and adopting new ways of working and becoming an integral part of the overall Industry 4.0. This will pose new challenges for the inspection personnel. To ensure the expected benefits from NDE 4.0, inspectors need to stay in charge of the changing inspections. The promised autonomy and interconnectedness of NDE 4.0 will supersede the majority of traditional inspector tasks and will in turn require a different set of skills and raise different demands and challenges for the inspection personnel, thus conflicting with the current "procedure-following," "Level I–III" paradigm.
M. Bertovic (*), Federal Institute for Materials Research and Testing, Berlin, Germany, e-mail: [email protected]
I. Virkkunen, Aalto University, Espoo, Finland, e-mail: iikka.virkkunen@aalto.fi
The new Industry 4.0 technologies can be integrated into the current framework, but exploiting their full potential requires changes in the role of the inspectors. The inspectors will be relieved from the tedious and error-prone aspects of the current system. At the same time, they will need to take responsibility for increasingly complex automated systems and work in closer collaboration with other experts. We propose that the traditional inspector roles will be transformed into those of the system developer, caretaker, and problem solver, each requiring a specific set of skills and assuming different responsibilities. For full NDE 4.0, NDE must abandon its traditional role as a self-contained entity with well-defined boundaries and take its role in the wider system that is Industry 4.0.

Keywords
Non-destructive evaluation · NDE 4.0 · Industry 4.0 · Inspection personnel · Human-centered approach · Human machine interaction · Paradigm change
Introduction
The rapid development in the field of information and computer technologies has laid the ground for the fourth industrial revolution (often referred to as Industry 4.0). Technologies such as the Internet of things (IoT) and cyber-physical systems (CPS) enable the integration of the virtual and physical worlds and are able to operate more autonomously. This "smart automation" has initiated a paradigm shift in industry, especially in the manufacturing sector (e.g., [1]). Humans play a vital role in production systems. They are the ones in control of these systems, the ones solving problems and providing flexible solutions when needed. Industry 4.0 is a socio-technical system, characterized by an interplay of both humans and technology [2, 3]. Next to smart products and smart machines, the third pillar of the factories of the future are the so-called augmented operators, the most flexible part of the system, taking charge of various responsibilities and complementing the highly automated system [4]. Romero et al. [5] introduced the term Operator 4.0, defined as "a smart and skilled operator who performs not only cooperative work with robots but also work aided by machines as and if needed by means of human cyber-physical systems, advanced human-machine interaction technologies and adaptive automation towards achieving human-automation symbiosis work systems" (p. 1). The promise of the fourth industrial revolution is that people will be relieved of taxing work and, instead of being replaced by complex autonomous systems, will become in charge of them and assume the role of strategic decision-makers and flexible problem-solvers [6]. Thereby it is expected to make use of people's flexibility and adaptability at tasks in which they are superior to any machine, while at the same time making the systems more efficient, reliable, and safe. Yet this promise seems to be at odds with the practical experience that goes with increasing automation. The increase in automation does improve efficiency and
repeatability in many cases, but it also increases the complexity of the overall system to the point of fragility, and thus the increased task efficiency does not necessarily transfer to the overall system. The requirements for human operators seem to increase with increasing automation: non-automatable and highly complex activities are transferred to humans, who are expected to monitor technical systems and eliminate errors; thus, the higher the reliability of the automated system, the more training is needed. These are known as the ironies of automation [7]. The move toward Industry 4.0 leverages new "smart" technologies that may solve some of these issues and exacerbate others. Research on human-automation interaction [8, 9] shows that more automation does yield better human-system performance, however only when all is well. This is because more automation increases automation dependence, which has been shown to be counterproductive when things go wrong. These and other challenges in the interaction with automated systems have made the notion of human-machine interaction and the role of the operator in cyber-physical systems synonymous with Industry 4.0 [3, 6, 10–12]. As technical systems become smarter, relieving people from difficult manual or demanding cognitive work, attention shifts to people as designers, decision makers, and problem solvers. This is a revolution in itself. Nondestructive evaluation (NDE) is a vital part of manufacturing and will, therefore, undergo a similar revolution: within itself and within the greater context of the industry it serves. In recent years, several taxonomies have emerged, defining the key principles and key enabling technologies of NDE throughout the industrial revolutions. NDE 1.0 was characterized by the use of the senses and made possible by the introduction of procedures. In NDE 2.0 it became possible to look into the components, facilitated by the development of electric energy. The move to NDE 3.0 was enabled by the development of digital technologies, resulting in automation. The vast development in computer technology and artificial intelligence is paving the way to interconnected autonomous cyber-physical NDE systems, enabling the shift to NDE 4.0 [13–15]. Although the use of cyber-physical systems shows great promise to generally improve NDE reliability, it is still the people we will rely on in unexpected situations, as they are more flexible and possess better judgment [16]. Consequently, a successful move to NDE 4.0 will require not just embracing new technologies, but also developing and adopting new ways of working and a division of labor that make better use of the new technologies and effectively address their shortcomings. Taken together, these changes in tools, technologies, and ways of working redefine the way NDE is conducted. Together, they constitute a change in the NDE paradigm. In this chapter we outline the developments that give rise to this paradigm change: how adopting new technologies necessitates changes in the organization of work and how these changes in turn enable the adaptation and integration of further Industry 4.0 tools and interfaces. We discuss the potential challenges of this transition and highlight the key features of these stages, with a focus on the changing role and responsibilities of the inspection personnel.
Defining the New NDE 4.0 Paradigm

The NDE Paradigm Change
Paradigms define the concepts with which we use, discuss, and describe things and are thus so pervasive as to become invisible [17]. To define a change in paradigm, it is useful to start from the current paradigm.
Traditional NDE Paradigm
The operation and application of NDE is highly regulated and documented. Almost all aspects of NDE are standardized, and the standards are further concretized in written procedures and inspection instructions. Thus, there should be very little ambiguity in what we mean by NDE. An NDE inspection system consists of the equipment, the procedure, and the inspectors. The paradigmatic system is described by an inspector using the equipment to inspect a defined inspection target to determine if it meets set criteria. The inspection (data acquisition and the analysis of the results) is carried out following a detailed and, ideally, unambiguous procedure, which acts as a mediating object that connects the phases together and is equipment-specific (Fig. 1). The analysis of data in NDE is typically focused on determining the presence or absence of certain flaws deemed unacceptable. In a typical inspection, the presence of an unacceptable flaw is a rare encounter. The roles and responsibilities of the inspectors are defined in standards (e.g., [18]). The qualified inspectors are divided, according to experience and training requirements, into three distinct levels with a corresponding increase in responsibilities. Selected responsibilities at each level are shown in Fig. 2. In simple terms, inspectors (Level 1) set up the equipment, perform the inspection, and report the results strictly according to written instructions. These instructions are adapted from
Fig. 1 NDE inspection system
Fig. 2 Selected responsibilities of the NDE personnel at various levels (according to ISO 9712:2012). With each level, the experience and training requirements increase
procedures by more experienced (Level 2) inspectors. The procedures, in turn, are adapted from standards by more senior (Level 3) inspectors. The main objective of the inspection is to determine if the inspection target meets the set criteria. In a comparative approach, the technique is set up to make measurements in a specific manner and to compare results to a clearly defined reference standard (e.g., a hole or a notch). Such standards are typically quite detailed and provide information both on the technique (e.g., [19]) and on the assessment of the results (e.g., [20]). The results may include qualitative characterization (e.g., [21]). This setup is often quite straightforward and works for manufacturing inspections, in which the objective is to quickly mark defective parts or welds for repair. However, the results can be difficult to interpret. Especially when the objective is to find service-induced cracks, the comparative signals are not easily relatable to the actual state of the component. Even for manufacturing inspections of critical components, such as fatigue-loaded components, the comparative information is not necessarily sufficient to confirm a serviceable component. To improve on the comparative standards, the NDE procedures can be assessed against performance-based criteria (e.g., [22, 23]). In contrast to the comparative measurements, the stated requirement is to confirm the absence of flaws greater than a certain threshold size. The procedure may leave more room for inspector judgment. The flaw size is more directly relatable to the expected service life of the component and thus provides a more directly usable result, albeit still rather simplified [24] and in many cases too conservative. Even then, it is fundamentally a difficult task to assess the expected service life of a component, and the full assessment requires expertise from multiple disciplines (see, e.g., [25]). Central to the current NDE paradigm is the distinction between defining an NDE procedure (or instruction set) and performing the test. Ideally – although not always in practice – the procedures are unambiguous and thus provide comparable results independent of the individual inspector performing the test. For comparative inspections, the procedures are faithful adaptations of the respective standards and provide results independent of a particular application. For performance-based inspections, the procedures are qualified and thus expected to provide the set performance
regardless of the inspector performing the test. In both cases, the inspections are expected to be essentially equivalent, as long as they follow the procedure, regardless of time, local conditions, individual inspector, or inspection vendor providing the test. The performance is known and can be used as a basis for design without knowledge of the details of the inspection. This paradigm has been very successful, indeed. Sophisticated inspections are routinely performed around the world with international inspection crews and comparable results. However, as any paradigm, it is necessarily a simplification and thus has its limitations. These simplifications must be worked out in practical application. The idea of unambiguous and comprehensive procedures that are followed to obtain repeatable results is well known to be an idealization. ISO 9712 [18] clearly states that "effectiveness of any application of non-destructive testing (NDT) depends upon the capabilities of the persons who perform [. . .] the test" (p. vi). This unattainable ideal of unambiguous procedures is not unique to NDE but is found in many walks of life. In NDE it creates the peculiar contradiction that inspectors performing the test are expected to blindly follow the procedure and act for all essential purposes as a programmed machine. Yet procedures are not always unambiguous (e.g., [26, 27]), people are known to be fallible [28], and entirely replacing humans with machines has proven difficult. Procedures often leave significant ambiguity to the final judgment, especially in more demanding inspections, in which the procedures are developed to meet performance-based criteria. They have been shown to not always be equally understood by the inspectors and to be written in ways that make following the procedure difficult (e.g., procedure too long, procedural steps not clearly distinguished), thus leaving room for missed steps or errors in interpreting the procedure [26, 27]. Expert inspectors can very well be capable of consistently finding cracks while at the same time being unable to articulate unambiguous criteria as to how it is done. Consequently, as the procedures are pushed from simple comparative manufacturing inspection toward more demanding and performance-based inspections, this requirement of unambiguous procedures and criteria becomes unwieldy. Another paradox of the current paradigm is the expectation that if the inspection is carried out according to the procedure, it will provide a reliable result. The issue is that despite well-written and unambiguous procedures and a system that can – in theory – find all the needed flaws, critical flaws are nevertheless sometimes missed. This is due to various human factors that, next to equipment and application factors, can affect the reliability of NDE [29]. Studies have shown that inspectors sometimes fail to follow the procedure [26, 27, 30]. Even when flaws are reported by an inspector, they are often explained away on further inspection [31]. It is also widely known that humans are susceptible to errors [28, 32].
Human error, defined as a failure of an action to achieve its intended outcome [28], is responsible for the performance variation between inspectors and is often wrongly assigned to the "human factor," that is, "a reckless inspector not following the procedure." Thereby, it is overlooked that human factors relate not only to individual capabilities or differences but rather to a myriad of individual, technical, organizational, and social factors [30, 33–35], all having an effect on
human performance. This is in direct contradiction with the notion that inspectors will predictably follow procedures and obtain consistent and repeatable results regardless of the circumstances. The current view tends to be that this is a problem with the humans, who fail to live up to expectations and thus need to be better controlled. It may, however, be more productive to consider these issues to be problems with the current system, which expects humans to function in ways we know to be unachievable.
The Changing Role of Automation
Close to the limits of any method, top performance and simple, articulable decision criteria are mutually exclusive. This is clearly indicated by the lack of automation in the most demanding inspections, until very recently made possible by machine learning models, which can effectively work with decision criteria too complex to be articulated (in addition to the more rule-based standard requirements). The promise of Industry 4.0 and NDE 4.0 is to provide "smart" automation. Instead of machines being directly operated and monitored by inspectors, these machines are capable of carrying out more autonomous tasks without direct human operator involvement [36, 37]. The automated smart systems can now interact with human operators on a higher level, perform subtasks autonomously, and report results or request human interaction when needed. They can also interact with other automated systems and make use of available data. The strict procedures have forced people to work in a machine-like fashion, which is not where humans excel. New smart machines can do the tedious work and relieve humans to do tasks they are more suited to do. The smart machines are not nearly as smart as humans and cannot replace them fully. Nor can they work within the traditional system that assumes traditional "dumb" automata. (By "dumb" automata, we mean automated systems that require constant interaction with and guidance from the human operator.) Many of the tasks of the current Level 1 and Level 2 inspectors could technically have been automated for quite a while. This includes processes such as the calibration of the system, parts of report writing, and checking data quality. This has not been done, partially because partial automation would not relieve the inspectors much: according to the current assignment of responsibilities, they would still need to control the processes. Thus, automating these tasks would just induce idle waiting while the partly automated system does its work. Technologies, in general, develop as interconnected ecosystems, and connecting technologies may inhibit or accelerate the pickup of new techniques and technologies [38]. The one task that has long defied automation is data analysis. Especially for the performance-based inspections, the data analysis often involves a significant element of "inspector judgment" to make the final decision. This has been difficult or impossible to encode with traditional automation. However, techniques based on machine learning have recently shown human-level performance in complex flaw detection tasks [39–41]. Consequently, the automated analysis of NDE data may form a "tipping point" that would enable full automation of the application of the NDE procedure, change the work of the inspector, and enable NDE 4.0.
The shift to NDE 4.0 offers substantial potential for NDE itself, but it must also be viewed in the context of the wider trend toward Industry 4.0. The new Industry 4.0 systems will demand tighter integration and pose new demands on nondestructive inspections as well. Reliability – and thus NDE – is essential for Industry 4.0 [42].
Paradigm Shift: Transitional Systems Pave the Way to NDE 4.0

The fourth industrial revolution introduces various new technologies: the Internet of Things, artificial intelligence, robots, blockchain, and the use of augmented and virtual reality, to name a few. With machine learning, the data analysis of even challenging performance-based inspections can now be automated. This allows all the tedious and repetitive tasks of the inspection to finally be fully automated. It also allows significant portions of current procedures to be implemented as automated software processes, making them more repeatable. This development offers several significant benefits: the traditional human error issues related to attention loss and fatigue are significantly reduced, humans are involved dynamically where they are needed, and the procedures followed by the humans become simpler and easier to follow.

The first natural application of this technology is to inspections in which the data acquisition is already mechanized. There, automated data analysis using machine learning can be added to the existing mechanized data acquisition, making the process faster, more reliable, and easier for the analyst (note that for the purpose of this chapter, automated systems with manual data evaluation are denoted as “mechanized,” to separate them from fully automated systems in which the data evaluation is also done mostly automatically). The analyst does not have to check all the data, but only the rare locations where the automated system indicated a flaw. The automated analysis improves the efficiency of the entire system, and systems that were previously not worthwhile to mechanize can now be made significantly more efficient with automation.

The increased automation also leads to increasing differentiation of roles within the system and tends toward a greater division of responsibilities, which enables further increases in efficiency. Furthermore, the single integrated NDE equipment develops toward a more modular set of tools, each optimized for a different task. This development of differentiated tools is illustrated in Fig. 3 (please note that the image is schematic and intended to be viewed together with Figs. 4 and 5).

Further technologies that have already found, or will likely soon find, applications in NDE are assistive technologies such as communication devices (e.g., smartphones and tablets), augmented and virtual reality (AR and VR), drones, etc. These technologies will bring information to the inspectors’ fingertips anytime and anywhere and allow them to perform their role at an increasing distance from the actual inspection.

The increased digital automation greatly improves inspection efficiency and automates away many of the most tedious and error-prone tasks. However, the fundamental setup of the inspection stays the same. The objective is still to evaluate whether the target contains unacceptable flaws, and the inspection results and the interaction with the wider technical system remain the same as ever. It is NDE 3.0 using the Industry 4.0 tool set.
Fig. 3 Transitional system. Increased automation fosters increased differentiation between different tasks associated with inspection and increased specialization in the corresponding tools used. In addition to the traditional roles (Levels I–III), the inspectors acquire new roles in taking care of the equipment and solving problems with the system as they arise
Fig. 4 Anatomy of an NDE 4.0 system. The increased specialization allows integration with outside systems (both human and automated). Normal data flow after development is through machine interfaces (solid arrows). Optional or as-needed interaction is shown in dashed lines
Thus, we define these technologies in the current NDE paradigm as transitional systems that pave the way to NDE 4.0, that is, “NDE 3.5.” A similar need for a transitional “3.5” system has been suggested in the context of Industry 4.0 as well [43, 44]. Vrana and Singh [45] refer to this transition as the “horizontal implementation” of NDE 4.0, or “Industry 4.0 for NDE.” This transitional system is a necessary step toward NDE 4.0.
Fig. 5 New roles and responsibilities in an NDE 4.0 system. The emerging inspector roles will interact with smart automation technologies over different interfaces. The NDE Systems Developer receives input from various disciplines (only examples are listed) and directs the development and design of the systems (UX Design)
The new technologies, most importantly the use of machine learning for automated analysis, are still quite new, and to maintain the high reliability standards characteristic of the NDE industry, we need a way to adopt them incrementally. This allows building trust in these systems and addressing potential issues or limitations as they become apparent [16]. The increased efficiency and differentiation brought by the transitional system are a significant improvement over classic NDE 3.0. Thus, we may see transitional systems become stable, and in some cases there is no need or benefit in going to full NDE 4.0.

At the same time, the increased use of various modern automation tools (machine learning, augmented reality, etc.) increases the complexity of the inspection system. While these technologies can greatly improve the specific tasks where they are used, they make the system as a whole more complex. It becomes infeasible for the inspector to be an expert in all the various technologies now needed to design and implement a modern NDE system. The traditional role of the inspector is also burdened with new responsibilities: taking care of the sophisticated automation, adapting and solving problems with the equipment as they arise, and acting as a fallback in case the automation fails. This is the key paradox of the transitional system: making use of the new labor-saving technologies simplifies each task but complicates the full system to a point where the requirements on the inspectors become untenable. A new division of responsibilities then becomes a prerequisite for the effective use of these complex systems. Building an NDE 4.0 system that uses multiple enabling technologies requires a design team with multidisciplinary expertise.
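As an illustration of the transitional workflow described above – the analyst reviewing only the locations flagged by the automated analysis – the following is a minimal sketch under assumed data structures; the Indication type, the threshold, and the field names are not taken from any particular product.

from dataclasses import dataclass

@dataclass
class Indication:
    location: tuple          # (x, y) position on the component
    flaw_probability: float  # output of the automated detector

def triage(indications, review_threshold=0.5):
    # Split detector output into a human review queue and auto-cleared locations.
    human_queue = [i for i in indications if i.flaw_probability >= review_threshold]
    auto_cleared = [i for i in indications if i.flaw_probability < review_threshold]
    return human_queue, auto_cleared

scan = [Indication((10, 2), 0.04), Indication((55, 7), 0.93)]
queue, cleared = triage(scan)
print(f"{len(queue)} indication(s) for inspector review, "
      f"{len(cleared)} auto-cleared")

The choice of the review threshold is itself a reliability decision: it trades inspector workload against the risk of the automation clearing a true flaw, and would have to be justified in qualification.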
The Full NDE 4.0 System

The key feature of an industrial revolution is that the object of the work changes. Often the old paradigm is retained in some form as part of the new system, but the new system encompasses a wider scope that enables emergent concepts not reducible to their constituent parts.
For the fourth industrial revolution and Industry 4.0, the defining change is the move from individual digitalized datasets to interconnected and interactive data [4, 46, 47]. The previously necessary simplifications that made managing copious data possible are giving way to modern computational techniques that can handle rich data from various sources directly. Self-contained individual subsystems are giving way to interconnected and interactive subsystems that can use information from other systems and provide their input to the functioning of the whole. Digital twins enable combined analysis, visualization, and interaction with all the data pertaining to a certain component, regardless of its source or technical nature. Separate installations can gather usage and diagnostic information that enables further development of the systems. This development mirrors the more general move to Industry 4.0, which shares these defining features [48].

For an NDE system to shift to a full NDE 4.0 system, it must abandon its traditional role as a self-contained entity with well-defined boundaries and take its place in the wider system that is Industry 4.0. The objective of NDE can no longer be to evaluate a component; it must be to provide pertinent information that enables (together with other data) assessment of structural integrity and serviceability. Thus, the move from NDE 3.0 to NDE 4.0 is a move from an NDE-centric system to an integrated, interconnected, service-oriented system. This development is both enabled and necessitated by the wider transition to Industry 4.0 systems [49, 50]. While Industry 4.0 systems are mostly developed in the context of manufacturing, for NDE the scope naturally extends to in-service maintenance and asset life cycle management. Vrana and Singh [45] denote this as the “vertical implementation” of NDE 4.0, or “NDE for Industry 4.0.”

The Problem That NDE 4.0 Solves

While the increased use of automation and assistive technologies as transitional systems within the NDE 3.0 paradigm solves a myriad of problems within the NDE system itself, it does not address the existing issues between NDE and the wider technical system. As is often the case, harnessing the potential of new technologies requires organizational and operational changes in addition to technical ones [51, 52]. The results will still be difficult to interpret, and significant information will be left out (see, e.g., survey responses in [45]). To make an informed decision about component serviceability, the result of an individual inspection needs to be combined with a host of other information sources. Firstly, there may be previous inspections of the same component, and the evolution of differences may provide additional insight [53]. The previous inspection data is typically available today, but the comparison has to be done manually and thus cannot be fully exploited. Secondly, there are often variations in, for example, the detection reliability of the NDE method even within a single inspection, and these differences in uncertainty should be considered when interpreting the results. This is already implemented in some cases [54, 55], but is not in general use. Thirdly, the flaw criticality often varies greatly within the inspection volume.
Thus, superficially similar NDE results may have very different significance depending on the location or orientation of the flaw. Again, this information is already integrated into some procedures as loading heat maps, but it requires manual interpretation by the inspector or structural integrity engineer. For NDE 4.0, the integration would be at the system level (e.g., [56]). Finally, the flaws we inspect against are not the result of random occurrences but are produced by typically well-known material degradation mechanisms or manufacturing errors and insufficiencies. Thus, the result of an inspection is more meaningful when integrated with the known features of possible failure mechanisms. Flaw information is commonly used as input to procedure development and to define things like the detection target or inspection volume (e.g., [23]). However, it is typically infeasible to make use of information about the degradation mechanism at the level of individual inspections. Huber et al. [57] provide an example of an integrated system for manufacturing errors.

This separation of information was a historical necessity. The information sources outlined above span multiple separate disciplines, and it is infeasible for the NDE Level III to be a seasoned expert in all of them. Thus, the solution has been twofold: first, in simple cases, the information was simplified into clear and generic boundaries (detection targets, required probability of detection values) for the inspectors. These are necessarily simplistic but enable efficient inspection with limited information exchange. Second, in critical cases, the implications of the inspection results are analyzed by a multidisciplinary team of experts. This enables better use of the available information from different disciplines but is too expensive and time-consuming to be used for all inspections. NDE 4.0 offers a third option: the data from different disciplines can be integrated into the procedure to form a more informative result at a higher level of interpretation.
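The following sketch illustrates, with loudly hypothetical inputs, what such an integration could look like at the level of a single indication: the reported size is interpreted together with the previous inspection of the same location, the location-dependent probability of detection, and a local criticality limit. The scaling rule and all numbers are assumptions for illustration only.

def interpret(size_mm, prev_size_mm, pod_at_location, critical_size_mm):
    # Growth since the previous inspection of the same location, if available.
    growth = size_mm - prev_size_mm if prev_size_mm is not None else 0.0
    # A low local POD means the reported size is uncertain; widen it (assumed rule).
    effective_size = size_mm / max(pod_at_location, 0.1)
    if effective_size >= critical_size_mm:
        return "reject: effective size exceeds local criticality limit"
    if growth > 0:
        return "monitor: indication has grown since the last inspection"
    return "accept"

print(interpret(size_mm=2.5, prev_size_mm=2.0,
                pod_at_location=0.8, critical_size_mm=4.0))

In today's practice each of these inputs exists, but in separate places (inspection archive, POD study, load analysis); the point of NDE 4.0 is that the combination happens at the system level rather than in the head of the inspector.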
Full NDE 4.0 System Defined

Thus, we define a full NDE 4.0 system to be an NDE system which, through smart automation and information interchange, makes integrated use of information from different sources and disciplines and provides a nondestructive evaluation result at a high level of interpretation, directly related to the purpose of the inspection. In addition, we define an NDE 4.0 subsystem to be an NDE system which provides results to be used as part of a similar integrated (Industry 4.0) system, contributing to system state information at a high level of interpretation directly related to the purpose or serviceability of the system. This definition is in line with current definitions of Industry 4.0 [4, 46, 47] but is stated irrespective of the underlying enabling technologies; it is framed instead in terms of the functioning of the system and the difference in outcome and integration. This allows a more direct conceptualization of the purpose of each of the enabling technologies within the framework and of the main benefits of the new system.
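One way to read this definition in engineering terms – an illustrative assumption, not a standardized interface – is that an NDE 4.0 subsystem exchanges high-level, serviceability-oriented results over a machine interface, while the raw data remains available on request:

from dataclasses import dataclass, field

@dataclass
class NDE40Result:
    component_id: str
    serviceable: bool                 # high level of interpretation
    remaining_life_estimate_h: float  # integrated with load/degradation data
    findings: list = field(default_factory=list)  # traceable evidence
    raw_data_uri: str = ""            # raw data stays retrievable for experts

result = NDE40Result("forging-0042", serviceable=True,
                     remaining_life_estimate_h=12000.0,
                     raw_data_uri="archive://nde/scans/forging-0042")
print(result.serviceable, result.remaining_life_estimate_h)

The defining property lies in the fields, not the technology: the subsystem answers “is this component fit for service, and for how long?” rather than “what echoes were recorded?”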
Anatomy of an NDE 4.0 System

Whereas the development of transitional systems (algorithms and assistive technologies exploited in the context of the traditional NDE system) is characterized by greater differentiation within the NDE system, the full NDE 4.0 system is characterized by greater integration of NDE into the entire manufacturing system. The integration takes place not within the NDE system but within the surrounding socio-technical system. The result of the automated analysis software is usually directly usable by the end user, the inspector is consulted when needed, and the role of the procedure as a mediating object is smaller. The previous distinction between inspector levels decreases in significance, since each task requires specialized skills. The changed system is illustrated in Fig. 4 (to be compared with Fig. 3).

This integration with outside expertise and with additional information (both past inspections and other available sensory data) enables richer data for NDE evaluation as part of integrated lifing. In addition, the NDE results can be interpreted in terms that are significant for the end use. For example, flaw criticality and reporting limits can be set based on flaw size, orientation, location, and classification, and the assessment can be integrated into the automated evaluation system. This development relates to what has recently been described as “integrity engineering” [58] and contains both technical and social changes [2].

At the data acquisition level, the NDE 4.0 equipment may integrate data from other systems and sensors and may, in turn, provide data to other systems. At the data analysis level, the NDE 4.0 equipment will provide automated analysis and alert inspectors and end users to significant findings, who can then take the indications and decide on further actions. The increased automation takes over most of the repetitive and tedious tasks from the inspector. Instead, the inspectors spend most of their time reacting to unexpected events or disruptions in the normal flow of inspection. When something unexpected is found, the procedure may need to be adapted, and such adaptations will require human intervention with NDE knowledge.

NDE 4.0 will shift the traditional Level III from a designated “responsible experienced inspector” to part of a tightly integrated group that develops NDE systems as parts of wider integrated systems. In many ways, Level IIIs already serve this function, but with NDE 4.0 it will increase in importance, and the collaboration will extend to the design of the inspection system to take advantage of information from different disciplines. The current tasks of the Level I and Level II inspectors will change from hands-on following of an inspection procedure toward application and caretaking of an automated system. In summary, the roles of the inspectors will undergo significant changes, and some of the new responsibilities may even conflict with current practices. This does not mean that traditional manual inspection will become obsolete, but rather that it will be somewhat overshadowed by the more common automated and integrated inspections. In this work, we focus only on the latter.
The Changing Role of Human Inspectors in NDE 4.0

The proposed paradigm change is about to alter the nature of the NDE inspectors’ task substantially. The shift to NDE 4.0 will be accompanied by the introduction of new tools, a higher degree of automation of processes, and, gradually, the fully automated incorporation of NDE processes into production systems. With the introduction of new technologies and the adoption of new roles, new ways of interacting with technology, new task demands, and new responsibilities will call for different cognitive capacities, different competencies, and, thus, new strategies to cope with the challenges this shift will place on people.

In the current NDE paradigm, automation was often used to replace the subjective, inconsistent, sometimes biased manual inspections (and thereby inspectors) and to decrease the likelihood of human failure [59, 60] by automating those tasks in which people do not excel (monotonous, physically and mentally demanding tasks) and by relying on human skills in tasks that could not yet be automated (e.g., complex decision making and problem solving). The aim of the new industrial revolution is to develop and design systems that will coexist with human operators, extend their capabilities, and assist them to be more efficient and effective [61]. This can be achieved by developing, extending, and supporting:

a) Physical capabilities (by means of robots, exoskeletons, teleoperated devices)
b) Sensorial capabilities (creation of new augmented senses, transformation from one signal to another, IoT sensors, posture sensors)
c) Cognitive capabilities (by means of artificial intelligence, virtual reality, cloud computing)
d) Interaction capabilities (by means of augmented reality, “intelligent” human-machine interfaces, mobile devices) [62, 63]

This will result in a myriad of complex technologies with which the operators will have to interact differently. Human-machine interaction and user-centered design will play a crucial role in assuring the quality of these interactions (e.g., [12, 64]). The role of people is likely to change from operating and using systems to designing and monitoring them, overriding them if necessary, and solving complex problems that automation is not yet mature enough to solve (human as a supervisor vs. human as an operator). Thus, automation gaps will likely be handed over to people, giving rise to the ironies of automation [7].

Digital technologies will increase the complexity of the task. With the increasing complexity of production and a high degree of automation, the cognitive demands on human resources are likely to rise: monitoring tasks will consume more attentional resources, the emphasis of the personnel will shift to problem solving and decision making, and the pressure to keep processes agile will require making decisions in less time. As a result, the importance of the human role will decrease in the task itself but will increase in equipment (hardware, software) design and task preparation.
This will pose different demands on people and require different competences and measures to deal with possible errors. Let us consider several scenarios – based on our first-hand experience and on our assumptions about how Industry 4.0 will change the NDE task – of how the shift to NDE 4.0 will change the extent and nature of human involvement, the task, and the responsibilities of the inspectors in the context of an increasing level of automation.

Scenario 1: Industrial heavy manufacturing inspection system

Company X manufactures heavy industrial equipment. One particular component is subjected to fatigue loading and may be susceptible to fatigue failures during service life if the raw forged billet contains impurities or inclusions of sufficient size that find their way to the highly loaded areas of the final component. The part is critical to the functioning of the equipment, and should such a failure take place during use, it would result in substantial financial loss and potentially put nearby people at risk.

NDE 3.0 (current system): To exclude manufacturing flaws, the component is inspected using mechanized ultrasonic inspection. The work pieces and UT probes are moved by industrial robots during inspection, and the data acquisition is fully automated. The inspector receives the scanned data for each component and analyzes the data to exclude any unacceptable flaws. If a flaw is found, the inspector assesses its severity by combining information from the UT scan and a precomputed load map that shows the areas critical for the use of the component.

Transitional system: The manual, time-consuming, and tedious task of detecting flaws in the noisy data is automated with a machine-learning-based system that can find flaws with human-level performance. When the automated system finds something, the inspector looks at the data and confirms the indication. He/she then uses the precomputed load map to make the final judgment on the acceptability of the found flaw.

NDE 4.0: The automated flaw detection has been integrated into the load computation. The system not only shows found flaws but also evaluates them considering the location, the orientation, and the load data. When the system finds something deemed unacceptable, the operator is alerted and can review the system recommendations and decide the best course of action.

Scenario 2: Composite wing inspection using augmented reality

Composite wing structures in the aerospace sector are inspected using robotic ultrasonic scanning.

NDE 3.0: The data is evaluated mostly with the aid of conventional projection images. The construction is rather complex, and thus it is sometimes difficult to pinpoint the locations of the ultrasonic echoes to geometric features or potential flaw sites on the component. This is made easier by 3D-graphics visualization, but due to the size and level of detail of the component, this, too, is somewhat complicated.

Transitional system: Augmented reality allows visualization of the complex inspection data along with the actual component. The inspector can get an overview of the component in real three-dimensional space and then go to the physical location of the indications for more detailed results. This makes the data visualization more intuitive and less prone to errors resulting from missing some locations on the computer screen.
NDE 4.0: The automated analysis systems create a higher-level interpretation of the data, which is related to the serviceability of the component and readable without detailed knowledge of the ultrasonic source. When something significant is found, an expert inspector can be consulted for more detailed diagnostics. To aid this discussion, the same augmented reality can be used to visualize raw inspection data as well as production data and other external data that help interpret the indication.
The presented scenarios show a gradual shift of NDE systems toward higher autonomy and a changed role and responsibility for the inspection personnel. In the following, this shift will be discussed in detail.
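To make the NDE 4.0 step of Scenario 1 more tangible, the following is a hedged sketch of evaluating a found flaw against a precomputed load map, considering its location and orientation. The load map, the severity rule, and the acceptance limit are invented for illustration.

import math

LOAD_MAP = {(0, 0): 120.0, (0, 1): 310.0}   # assumed peak stress (MPa) per region

def evaluate_flaw(region, size_mm, orientation_deg, allowable=200.0):
    stress = LOAD_MAP.get(region, 0.0)
    # Assumed rule: flaws oriented normal to the principal stress are most severe.
    severity = stress * abs(math.sin(math.radians(orientation_deg))) * size_mm
    return "alert operator" if severity > allowable else "accept"

print(evaluate_flaw(region=(0, 1), size_mm=1.2, orientation_deg=80))

In the transitional system, this combination of indication and load map is performed by the inspector; in the NDE 4.0 system it runs inside the evaluation pipeline, and the operator is alerted only to the unacceptable cases.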
The Role of People in the Traditional NDE Paradigm

In the current “NDE 2.0–3.0” paradigm, the inspectors are trained, qualified, and experienced NDE professionals whose responsibility is to set up and calibrate the equipment, to detect and interpret signals received from the equipment according to a well-defined inspection procedure, and to report their findings. The task demands vary depending on whether the inspections are carried out manually or with the aid of a mechanized system. In manual inspections, inspectors typically go on site with a hand-held device, collect and analyze data often under difficult working conditions (e.g., high noise, vibration, high temperatures and humidity, poor lighting, restricted working space, sometimes even radiation), and compare the data with previous inspections and decision criteria. It is a complex signal detection, information processing, and decision-making task that requires high vigilance and sensory, perceptual, cognitive, and motor skills [30, 65, 66]. Even when the inspection is partly automated (i.e., mechanized), the inspectors are highly involved in the task: they prepare the equipment, oversee what the automated system is doing, and interpret or control the results. The data analysis requires a higher level of expertise and experience, as well as high attentional resources, which can be additionally demanding in cases in which the occurrence of flaws is rare. In some cases, data analysis is aided by automated aids, that is, flaw detection and sizing software, requiring control checks by the inspectors.

In her studies on mechanized NDE and automated aids in NDE, Bertovic [33, 67, 68] discussed different risks and challenges associated with automation in NDE: even though the inspection is partly automated, risks associated with individual differences, technology (e.g., image quality, software, and hardware), and the physical and organizational working environment can still occur. Furthermore, interaction with automated aids is another source of risk, due to the potential adverse effects of overtrust in automation, manifesting as automation-induced complacency (relying on the correct functioning of the system without checking) and automation bias (failure to notice problems because an automated aid failed to direct attention to them, or inappropriate following of the aid’s directives) (e.g., [67, 69–71]). Inspection reliability is affected by the equipment, the application factors, and the human and organizational factors [29]. The results are thus not always reliable, as technology shortcomings, organizational influences, working conditions, individual differences, and interaction with other people and with the automation itself can give rise to human error (e.g., [30, 66]).

In summary, the key features of the current NDE paradigm are well-defined inspection procedures, high physical and cognitive demands, high influence of the environment and the organization, and responsibility in the hands of the inspector at the sharp end. The automation, used to increase efficiency and reliability and in turn reduce the likelihood of human error, can backfire due to inappropriate trust in automation and a lack of controlling behavior.
The Transitional System: New Challenges with New Technologies

In a transitional system, data analysis becomes automated, and the inspection task is aided by digital technologies that enable a higher grade of data processing, visualization, and decision support. Examples of these technologies are automated data acquisition systems accompanied by machine learning algorithms and assistive technologies such as augmented and virtual reality, tablets, smartphones, etc. Though these technologies count as Industry 4.0 enabling technologies, they are likely to be implemented gradually, as support within the existing working practices and the Level I-II-III qualification paradigm, and, as such, they alone do not constitute a full NDE 4.0 system.

At this stage, the inspectors’ physical workload will decrease. There will be less need to inspect manually under difficult working conditions, and many inspections will be carried out remotely, with decisions being made outside the plant in which the inspections take place. Data analysis will be automated, and inspectors’ attention will thus be called upon only when there are obstacles that the technology cannot overcome. In the case of assistive technologies, the inspector will go on site or observe the processes remotely, with decision making aided by, for example, visualizations, data from previous inspections, and incorporated workflows. This, in turn, will raise the cognitive demands and change the nature of the task significantly. The use of robots, drones, and assistive technologies such as smartphones, tablets, and especially AR, VR, or mixed reality (MR) will engage different senses – for example, voice control, gesture control, and the ability to synthesize data coming from different sources. This will require even more attentional and data processing skills and will redefine the ways people interact with systems, creating a physical and mental challenge.

In the case of machine learning algorithms, inspectors will be distanced from the data acquisition and analysis processes but will be expected to supervise, control, and intervene in cases of unforeseen events, errors, or situations in which the application of this technology has its limitations. Machine-learning-based systems are more opaque than traditional mechanical automation [11], and the inspectors will thus need to learn to cooperate with them and compensate for their shortcomings. This will require not only high attentional resources and a tolerance for ambiguity, but also the retention of data analysis skills that will be needed only occasionally.

The traditional role of the inspector is compounded by the roles of responsible supervisor and problem solver. The systems are likely to function very reliably, but the inspectors will still be expected to supervise the equipment, become aware of possible failures of the system, and intervene. When unused over longer periods of time, people’s data analysis and problem-solving skills will slowly degrade, making it difficult to complete these tasks with the same expertise and skill as earlier, when the tasks were carried out solely by them.
This paradox is known as the out-of-the-loop phenomenon: the diminished ability of people to detect system errors and subsequently perform tasks manually in the face of automation failures, due to loss of skills and loss of situation awareness [72] – a consequence of being kept out of the loop, that is, losing oversight of the individual steps of the inspection. Constant monitoring of the equipment will require substantial attentional resources and could be counterproductive, leading to loss of situation awareness. With an increase in reliability, the automation is likely to become trusted, which could lead to complacent behavior (a decreased level of checking when aided by automation, as opposed to manual work) and automation bias (failure to notice problems of automated aids as a result of reduced cognitive effort) – issues already encountered when dealing with automated systems [67, 69–71].

The inspectors – used to the current processes and trained according to current principles – may find new technologies demanding or may distrust them for reasons of unknown reliability, loss of control, lack of technical understanding, or fear for the future retention of their jobs. As more tasks become allocated to machines, the inspectors may struggle with assuming responsibility for the end result. Whereas some inspectors may greet new technologies with curiosity and enthusiasm, others may struggle with a lack of understanding of their functioning or with inadequately designed user interfaces. Usability, the design of the human-machine interface, and the interaction between the two will become some of the most important determinants of the acceptance of these new technologies [73] and of inspection reliability, though inspectors will still be affected by the same human factors (interaction with technology, team, organization, and the environment) as in the current paradigm.

In summary, the transitional system is associated with the use of NDE 4.0 enabling technologies within the current inspector-device-procedure paradigm. Through the opaqueness of the automation and the additional load on attentional and cognitive resources – while still relying on the same NDE paradigm of controlling what the automation is doing and acting according to its recommendations – the task of the NDE inspectors will become more diverse and demanding and will involve more supporting staff. The major problem the transitional system will face is that new technologies will be developed and applied while “old” practices are still in action (training, qualification, procedures, technology-oriented design), and the two may not be easy to integrate with each other or with the old processes. Integrating a single new technology, such as automated analysis using machine learning, will improve the system significantly, at the cost of a substantial increase in complexity. Adopting several new technologies, however, will require a full redesign of the work and the associated roles and responsibilities, new approaches to design, and increased specialization.
NDE 4.0: Definition of New Roles and Responsibilities

A full NDE 4.0 system will revolutionize the way inspections are made. The unsustainable increase in the demands on the inspector can be resolved by replacing the traditional inspector’s role – as the primary data gatherer, analyst, and decision maker – with a set of complementary roles.
We propose three new NDE-related roles: the NDE Systems Developer, the Caretaker, and the Problem Solver. In addition, we propose a non-NDE UX Design role. This change will require a complete revolution of the training and qualification practices, a different allocation of functions within the system, and a different set of skills, and it will give rightful importance to user-centered design. With people at the center of the design of new technologies and new processes, they will assume a vital role in the correct functioning of the system. Figure 5 shows the different NDE roles in the system and explains their relationships (compare with Fig. 4). Please note that the roles do not necessarily refer to individual people; rather, they indicate distinct responsibilities that need to be fulfilled within the system, sometimes by teams of people.

The role of the NDE Systems Developer will be to dictate the strategy, develop the underlying NDE system, and integrate it with information from other sources and automation systems. This role is a natural extension of the traditional Level III role but is now more reliant on experts from other fields (and requires basic knowledge of those fields). It is the responsibility of the NDE Systems Developer to build the NDE solutions needed for the inspections and to combine them with the needs and expertise of other fields, the NDE personnel who are going to use them, and the automation and other integrated Industry 4.0 interfaces. The NDE Systems Developer decides which systems are needed and directs the UX Design team (engineers, data scientists, designers, usability experts), defines the performance metrics for the system, and is responsible for the reliability of the system as it pertains to NDE.

The Caretaker will oversee the functioning of the developed system, notice when and if it fails, and undertake measures to repair or adapt the system. In contrast to the old paradigm, the inspection system will be a more autonomous agent and will not require constant monitoring. The system will be able to inform the Caretaker about a potential problem, which will to some extent relieve the Caretaker’s attentional resources; still, monitoring and situation awareness will be required. System complexity will rise, as monitoring and controlling tasks will originate from a multitude of different data sources. The Caretaker will be responsible for the day-to-day deployment and operation of the NDE system and will need the necessary knowledge and understanding of NDE to judge whether everything is going well. The Caretaker will also be responsible for adapting the system to varying locations when this can be done without substantial changes to the underlying NDE system.

Finally, the Problem Solver will have the NDE know-how to diagnose more substantial problems in the system’s use or to offer further explanation of the results and their meaning. If the automated system reports unexpected results, or the results need to be augmented or further explained to the client in the context of deciding proper actions, the Problem Solver will have sufficient knowledge of the underlying NDE system to do this. The Problem Solver can also implement more substantial adaptations for special locations or interrogate the system to confirm its proper functioning. This will require high flexibility and adaptability to continuously changing conditions. In problematic cases, this role will act as a final fallback, able to diagnose and/or confirm the output of the system.
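The division of labor between these roles can be pictured as an event-routing problem; the sketch below is a deliberately simplified assumption (the role names come from this chapter, the event taxonomy does not).

def route_event(event_type: str) -> str:
    routes = {
        "sensor_drift": "Caretaker: recalibrate or adapt the system",
        "hardware_fault": "Caretaker: repair or schedule maintenance",
        "unexpected_indication": "Problem Solver: diagnose and explain the result",
        "systematic_performance_drop": "NDE Systems Developer: redesign required",
    }
    # The Problem Solver acts as the final fallback for unclassified events.
    return routes.get(event_type, "Problem Solver: unclassified event")

print(route_event("sensor_drift"))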
The development of an NDE 4.0 system will be carried out by a multidisciplinary team that we define as the “UX Design” role. The main task of this role is to develop and design the physical system that mediates between the people and the vast amount of information available from different sources and technologies, as well as the interface between the system and its users (the human-machine interface). The NDE Systems Developer directs which systems need to be developed, and the UX Design team executes this. To do so, different competencies and expertise will have to be combined: that of the engineer, with knowledge of physical principles; that of the data scientist, with distinct IT skills; and that of the user experience expert, with knowledge of human-computer interaction, usability, and design. The resulting systems will have a major impact on inspection reliability as well as on acceptance by the users. In practice, the roles will overlap and depend on the current organizational setting.

All the new roles will require the inspectors to deal with complexity and problem solving, as well as continuous flexibility to adapt to new working environments [74]. In the new paradigm, people will be better integrated in the process, and the roles and responsibilities will be clear. People will be trained for the roles they assume, and it will be clear which inspection or supervision tasks and responsibilities are fully human, which tasks are carried out solely by the machines, and how and when people are supposed to intervene, make decisions, or solve problems. That will relieve the personnel of some of the problems and paradoxes of the old paradigm, in which automated systems took over tasks that used to be human but people were still expected to control the automation. With the NDE system users at the center of design, people will interact with these technologies in a more intuitive way, and thus some of the cognitive, sensory, and physical load will be reduced. Complex problem solving and decision making will require a high level of expertise and state-of-the-art knowledge, which will suffer from phenomena similar to those of the shift to automation (the out-of-the-loop phenomenon). Thus, training and qualification will have to be reinvented.

Furthermore, full NDE 4.0 will allow many tasks to be carried out remotely, diminishing the negative influence of the environment. That is, the interconnectedness of technical systems will allow high flexibility in time and space in carrying out inspections and problem solving, with expert advice always at hand. While the physical effort will decrease significantly, cognitive performance and expert knowledge will become more vital for the inspections. Information will be available at all times, and visualizations of this information will be intuitive and helpful for solving complex problems, relieving some of the cognitive demand. On the other hand, the technology and its possible problems will become more difficult to understand and handle, which will pose new demands on people and may, in turn, become a source of failure. People will be taken out of the loop to an even greater extent, as automated, interconnected, self-learning systems will share information and require human attention only in exceptional cases. The issues related to the monitoring of automated systems, trust in automation, and complacent behavior will likely remain but will be a focus of development, design, and training.
Next to technical and problem-solving skills, social interaction will become another area in which people will have to excel even more, as people with different skills and backgrounds will have to communicate to make decisions appropriate for the entire system of which NDE is only a part. Instead of specific persons solving specific problems, teams from multiple disciplines will now solve problems jointly. That is, the intensity of interactions with other people will shift from interactions between the inspector and the supervisor (simplified) to interactions of individuals with other team members, interactions with people from other disciplines, and remote interaction, for which communication and networking skills, among others, will be a requirement. Different analyses identify the following core competencies that will be required [5, 6, 74, 75]: technical (i.e., state-of-the-art knowledge, technical skills, process understanding, media and coding skills, and understanding of IT security); methodological (i.e., creativity, entrepreneurial thinking, problem and conflict solving, decision making, analytical and research skills, and efficiency orientation); social (i.e., intercultural, language, communication, and networking skills, the ability to work in a team, to compromise and cooperate, and to transfer knowledge, and leadership skills); and personal (i.e., flexibility, ambiguity tolerance, motivation to learn, ability to work under pressure, a sustainable mindset, and compliance). The new demands of the task and the necessary skills will require a different approach to the training and qualification of personnel.

In summary, NDE 4.0 will change the nature of NDE inspections radically. With new technologies and new processes, new roles and new design and training practices will have to be adopted. With the user at the center of the design of systems and processes, the challenges the transitional system was facing (e.g., continuous controlling of automated systems, lack of acceptance, high physical load, high cognitive demand) will be overcome. The high diversity and complexity of the inspectors’ work will be addressed by increased specialization, which, in turn, will facilitate further integration of outside information. Technology will take over the tasks in which computers excel; people will do the tasks in which they excel. Though this function allocation uses the strengths of all parties involved and shows promise for the functionality and success of the new human-machine system, it also presents challenges. Those challenges are summarized in the following section.
Addressing the Challenges of the New Paradigm

Industry 4.0 assumes that people will continue to play a vital role in production systems. For NDE 4.0, that means that the inspectors will remain in charge of the inspections. In the traditional systems, they are in charge hands-on, with continuous monitoring and supervision of the whole inspection. In the transitional systems, machine learning and other Industry 4.0 tools will ease the burden of routine inspections, but the human inspectors will still be directly in charge of the overall system and ready to stand in for the automated system if needed.
In the 4.0 system, the inspection will be completed as part of a more connected system, and the inspectors will thus be in charge of the “NDE part” of the integrated industrial system. The systems and procedures used should be developed with this emphasis in mind, that is, developed to allow the inspectors to take charge of bigger and more complicated systems by working at a higher level of abstraction and in tighter collaboration with experts from other fields.

From the perspective of the inspection personnel, the changes associated with NDE 4.0 will, at first, be challenging. New ways of interacting with technical systems, increased complexity, lack of insight into what the system is doing, gradual loss of manual skills, the need for new skills, and trust in the system are some of the challenges that will gain in significance. In the following, some ways of addressing those challenges are discussed.

An important prerequisite for a successful NDE 4.0 will be the shift from a technology-centered to a human-centered approach to NDE. The technology orientation has in the past led to attention being directed to engineering and designing technical systems and working environments to which the inspectors needed to adapt. Adapting to autonomous and highly complex automated systems would require a vast amount of knowledge and skill and be significantly physically (e.g., wearable technologies) and cognitively (complex decision making) demanding. In contrast, Industry 4.0 puts people at the center of design; that is, the systems are developed to collaborate with humans and augment them by taking over the tasks in which technology excels. This is done to aid people in making higher-level decisions and solving problems – something that technology is not yet mature enough to do. Human centricity is the most often cited prerequisite for the factories of the future: “humans should never be subservient to machines and automation, but machines and automation should be subservient to humans” [76].

Human-centered design is defined and continuously revised in the ISO 9241 standard “Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems” [77]. According to the standard, it is an “approach to interactive systems development that aims to make systems usable and useful by focusing on the users, their needs and requirements, and by applying human factors/ergonomics, and usability knowledge and techniques” (p. 6). Designing systems using human-centered methods will increase productivity and efficiency, reduce discomfort and stress, provide a competitive advantage, and reduce costs (systems that are easier to use require less training and support) [77]. In terms of NDE, this approach means developing and designing automated systems, software, user interfaces, wearable technologies, integrated workflows, etc. together with the people who are going to work with the systems. The key is to pay attention to users’ needs, understanding, habits, and expectations, and to design systems in which humans interact with machines with ease and in an intuitive way. Ergonomically designed and usable interfaces are expected to enable maximum user productivity, user acceptance, and user satisfaction [6]. Only intuitive and usable user interfaces will allow the tasks to be carried out correctly, efficiently, and with user satisfaction [78]. Thus, usability engineering and its associated methods (cf. [79]) are going to become instrumental in the design and use of the new systems.
Appropriate function allocation and automation reliability are considered the main determinants of adequate automation support [80]. The work of the future is considered a symbiosis of people and cyber-physical systems, which are expected to dynamically adapt to each other and cooperate to achieve common goals [62]. Therefore, as a first step, it is important to appropriately divide functions between people and systems, paying attention to their respective advantages, not just to “replacing the error-prone inspectors.” That is, a careful analysis of physical and cognitive tasks should be carried out, playing to the strengths of both. Pursuing a goal of automation does not necessarily mean that everything should be automated. Instead, inspection reliability will profit more if the people who are left in charge are given tasks they are superior in and find more engaging. In fact, research has shown that designing adaptive automated systems – that is, systems that allow a dynamic allocation of control of system functions to either the human operator or the automation – is more likely to optimize the overall system performance [81]. In contrast, static function allocation, in which tasks are assigned to either the human or the machine based on expected performance, the possibility to automate, or cost-benefit considerations, is associated with problems such as a lack of flexibility, the risk of assigning people uninteresting functions that could lead to boredom or demotivation, and technology-oriented design, in which the designers decide which functions are allocated to humans without the users having a say in it [82].

Furthermore, automation reliability is a major prerequisite for trust, and therefore for the system’s effective and appropriate use [69, 83]. If the automated system or AI is unreliable and mistrusted, the system will be underutilized, that is, disused. Overtrust, in turn, will lead to uncritical reliance on the correct functioning of the system (i.e., misuse), missing its possible failures [70]. One’s own experience with the system [84], as well as individual differences, additionally affects the tendency to trust automation. Therefore, attention needs to be directed toward enabling appropriate automation use. This can be achieved if the users are informed about the actual reliability of the system and have sufficient experience with it [69].

Human tasks in NDE 4.0 are expected to remain physically and cognitively demanding, even if differently from the current NDE paradigm. To address the cognitive demands, it is necessary to develop aids for problem solving for specific tasks in cooperation with the users. For that purpose, cognitive demands need to be identified using appropriate tools – for example, a cognitive task analysis or similar methods (cf. [85]) – and carefully addressed. Interaction with a user interface can be especially demanding if the interface is designed inappropriately; interfaces therefore need to be designed with a user-centered approach. To address the physical demands, the physical interaction with wearable technologies, tablets, and automated aids – in terms of voice control, haptics, or ergonomics – has to be developed together with a group of actual users and adapted to their special needs. Individualization, flexibilization, and usability of those systems are pertinent. Moreover, to counteract the current problems with human factors, the physical environment will need to be specially designed with consideration of possible negative influences [30, 66].
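A minimal sketch, assuming scalar measures of operator workload and automation confidence, of the adaptive (dynamic) function allocation contrasted with static allocation above: control over a given analysis task shifts between automation and inspector at run time rather than being fixed at design time. The thresholds and the three-way split are illustrative assumptions.

def allocate(operator_workload: float, automation_confidence: float) -> str:
    # Both inputs assumed normalized to 0..1; thresholds are illustrative.
    if automation_confidence >= 0.9:
        return "automation"   # routine case, human kept informed
    if operator_workload < 0.7:
        return "human"        # ambiguous case and the human has capacity
    return "shared"           # human decides, supported by automation aids

print(allocate(operator_workload=0.4, automation_confidence=0.6))  # -> human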
The high complexity and autonomy of the systems will furthermore push people out of the loop: people will monitor systems and intervene only when necessary or when directed to do so by the automation. Though this is a natural development, one must consider that, by being kept out of the loop, it becomes difficult for people to understand what the system is doing and to act in times of need – at least not to the extent possible in the current NDE paradigm. This is due to skill loss. Though inspectors will be trained for different tasks and the traditional NDE skills might become obsolete, even monitoring and problem-solving skills could degrade if the automated systems are highly reliable and those skills are not frequently used. The problem of skill loss could be counteracted by designing adaptive automated systems, in which the inspector decides when to be involved based on, for example, current workload. Furthermore, Gorecky et al. [6] suggest developing adaptive learning assistance systems, based on continuously upgraded knowledge, that support the operators as well as possible in those rare unforeseen situations and dynamically adapt to the context of the situation and the operator’s actions. Interaction with wearable technologies and personal assistants in the form of smartphones, tablets, AR, and VR will keep people engaged and immersed in the autonomous environment. As people will remain the governing agents and flexible problem solvers of the system, it is important to find measures for engagement, support, and development of the skills pertinent to the new nature of the task.

Although NDE knowledge – as currently taught – will remain a vital part of the curriculum, the proposed NDE roles of the NDE Systems Developer, the Caretaker, and the Problem Solver (UX Design being considered a non-NDE role) will require an additional set of skills beyond those in the current paradigm. Therefore, the training and qualification processes will experience a revolution. New programs will have to be developed to train the skills needed for the new roles. Skills such as problem solving, caretaking, adapting systems, and remote social interaction will have to be carefully defined and trained. Aldrin [16] adds that even those designing and training the algorithms may need training and qualification. Furthermore, social skills and new ways of interacting and communicating in virtual teams will gain importance. Selection of personnel might become more common, as some skills (analytical skills, social skills, communication, etc.) may become more important in predicting the success of the task.

Human-centered design and usable interfaces are also the main determinants of the acceptance of new technologies. According to the Technology Acceptance Model (TAM) [86, 87], technologies are more likely to be accepted if users are included in their design. Perception of the system’s output quality and of one’s own job relevance contributes to acceptance [88], suggesting the importance of personal involvement and of the reliability of the system. Another possible obstacle to change and acceptance is the division of responsibilities between people and NDE 4.0 systems. People are not likely to take responsibility for the actions of a system they do not understand or control. Regulations and standards establish the rules and responsibilities, quality control, and safety.
Since divided responsibility is likely to prevent acceptance of new technologies, it will probably be one of the greatest challenges to face. Keeping the users in the loop of these processes will be crucial for establishing trust in the technology, in its use, and in the organization that sets the rules. Finally, organizations will have a major impact on how the personnel are prepared for the upcoming changes and challenges. New technologies are often met with fear by the users: people’s natural resistance to change and fears of being left out. Chakraborty and McGovern [89] suggest implementing changes incrementally, as gradual changes are less palpable than abrupt ones. Empowering people and giving them a feeling of ownership (an understanding of the design concepts behind the change) are also expected to aid the acceptance of new technologies [89, 90]. The process of transformation and the preparation of the personnel will have a great impact on how the changes are accepted: communication, information, and participation of the people are some of the key strategies to counteract resistance to change [91]. Gradual change, through open communication and involvement, is advisable to assure that people are well prepared for the challenges of NDE 4.0.
Summary

The emerging technologies and the transformation of work caused by the fourth industrial revolution will drastically change the way NDE is conducted. The technological development is expected to offer significant improvements over current practices in terms of efficiency and reliability. The automation will also take over much of the burden of the repetitive, tedious, and error-prone tasks of current practices and let people focus on the higher-level tasks of decision making and problem solving. At the same time, the new technologies may not integrate easily into current qualification and procedure-following practices, and taking advantage of their full potential will require changes in the organization of NDE work and in the various roles that make up the whole inspection. This requires a new paradigm.

The first step toward NDE 4.0 will be the implementation of Industry 4.0 tools – in particular machine learning and augmented and virtual reality – into current NDE inspection practices. We call these transitional systems (“NDE 3.5”). This in itself is expected to increase the reliability and efficiency of current practices and to allow increased specialization of the inspectors’ work. Though integrating these tools will not change the current “procedure-following” paradigm, the extended use of multiple new technologies is likely to increase the cognitive and physical demands and the overall complexity of the inspector’s work, which in turn could result in a lack of acceptance, insufficient use, and a decrease in productivity.

Adopting an increasing number of these new technologies and, in particular, integrating with the wider Industry 4.0 systems will increase the complexity of NDE processes beyond the capacity of the current way of working. Expertise from various disciplines will be needed to integrate information from the different surrounding systems and to develop the full NDE 4.0 system that makes use of multiple Industry 4.0 technologies.
Industry 4.0 technologies. To manage this increased complexity, a new division of responsibilities is needed that allows inspectors to focus on different types of activities in the entire process and to excel in them. The traditional inspectors’ roles are expected to transform into the NDE Systems Developer, the Caretaker, and the Problem Solver, each of which will work as part of a greater team. New expertise will have to be added to the traditional NDE roles, that is, that of a UX design team (engineers, data analysts, usability experts). Thus, a new paradigm.
In this new paradigm, inspectors’ tasks are not supposed to be replaced by superior machines, but rather aided by them, in activities in which this is deemed to be most productive and reliable. The inspectors are expected to still be in charge of and responsible for the inspections, but now on a more abstract level. This process of transformation will require keeping the inspectors and their evolving roles at the center of this development.
Cross-References

▶ Artificial Intelligence and NDE Competencies
▶ Introduction to NDE 4.0
▶ Reliability Evaluation of Testing Systems and Their Connection to NDE 4.0
▶ The Human-Machine Interface (HMI) with NDE 4.0 Systems
▶ Training and Workforce Re-orientation
References

1. Xu LD, Xu EL, Li L. Industry 4.0: state of the art and future trends. Int J Prod Res. 2018;56:2941–62. Taylor and Francis Ltd. https://doi.org/10.1080/00207543.2018.1444806.
2. Sony M, Naik S. Industry 4.0 integration with socio-technical systems theory: a systematic review and proposed theoretical model. Technol Soc. 2020;61:101248. https://doi.org/10.1016/j.techsoc.2020.101248.
3. Romero D, Stahre J, Taisch M. The operator 4.0: towards socially sustainable factories of the future. Comput Ind Eng. 2020;139:106128. https://doi.org/10.1016/j.cie.2019.106128.
4. Weyer S, Schmitt M, Ohmer M, Gorecky D. Towards industry 4.0 – standardization as the crucial challenge for highly modular, multi-vendor production systems. In: IFAC-PapersOnLine. 2015. p. 579–84. https://doi.org/10.1016/j.ifacol.2015.06.143.
5. Romero D, Stahre J, Wuest T, Noran O, Bernus P, Fast-Berglund Å, et al. Towards an operator 4.0 typology: a human-centric perspective on the fourth industrial revolution technologies. In: Proceedings of the international conference on computer and industrial engineering (CIE46), Tianjin, China. 2016. p. 1–11. Available from: https://www.researchgate.net/publication/309609488
6. Gorecky D, Schmitt M, Loskyll M, Zuhlke D. Human-machine-interaction in the industry 4.0 era. In: 2014 12th IEEE international conference on industrial informatics. IEEE; 2014. p. 289–94. https://doi.org/10.1109/INDIN.2014.6945523.
7. Bainbridge L. Ironies of automation. In: Rasmussen J, Duncan K, Leplat J, editors. New Technology and Human Error. Chichester, UK: Wiley; 1987. p. 271–83.
8. Onnasch L, Wickens CD, Li H, Manzey D. Human performance consequences of stages and levels of automation: an integrated meta-analysis. Hum Factors J Hum Factors Ergon Soc. 2014;56:476–88. https://doi.org/10.1177/0018720813501549.
9. Wickens CD, Li H, Santamaria A, Sebok A, Sarter NB. Stages and levels of automation: an integrated meta-analysis. Proc Hum Factors Ergon Soc Annu Meet. 2010;54:389–93. https://doi.org/10.1177/154193121005400425.
10. Fantini P, Pinzone M, Taisch M. Placing the operator at the centre of industry 4.0 design: modelling and assessing human activities within cyber-physical systems. Comput Ind Eng. 2020;139:105058. https://doi.org/10.1016/j.cie.2018.01.025.
11. Hannon D, Rantanen E, Sawyer B, Ptucha R, Hughes A, Darveau K, et al. A human factors engineering education perspective on data science, machine learning and automation. Proc Hum Factors Ergon Soc Annu Meet. 2019;63:488–92. https://doi.org/10.1177/1071181319631248.
12. Krupitzer C, Müller S, Lesch V, Züfle M, Edinger J, Lemken A, et al. A survey on human machine interaction in industry 4.0. 2020;45:1–45. Available from: http://arxiv.org/abs/2002.01025
13. Vrana J. NDE perception and emerging reality: NDE 4.0 value extraction. Mater Eval. 2020;78:835–51. https://doi.org/10.32548/2020.me-04131.
14. Singh R. NDE 4.0 the next revolution in nondestructive testing and evaluation: what and how? Mater Eval. 2019;77:45–50.
15. Meyendorf NG, Heilmann P, Bond LJ. NDE 4.0 in manufacturing: challenges and opportunities for NDE in the 21st century. Mater Eval. 2020;78:1–9.
16. Aldrin JC. Intelligence augmentation and human-machine interface best practices for NDT 4.0 reliability. Mater Eval. 2020;78:1–9.
17. Kuhn T. The structure of scientific revolutions. 4th ed. Chicago: The University of Chicago Press; 2012.
18. ISO 9712. Non-destructive testing – qualification and certification of NDT personnel. Geneva: International Organization for Standardization (ISO); 2012.
19. ISO 17640. Non-destructive testing of welds. Ultrasonic testing. Techniques, testing levels, and assessment. Geneva: International Organization for Standardization (ISO); 2018.
20. ISO 11666. Non-destructive testing of welds. Ultrasonic testing. Acceptance levels. Geneva: International Organization for Standardization (ISO); 2018.
21. ISO 23279. Non-destructive testing of welds. Ultrasonic testing. Characterization of discontinuities in welds. Geneva: International Organization for Standardization (ISO); 2017.
22. ASTM E2862-12. Standard practice for probability of detection analysis for hit/miss data. West Conshohocken, PA: ASTM International; 2012. Available from: www.astm.org
23. ENIQ. European methodology for qualification of non-destructive testing. ENIQ Rep. No. 61, Issue 4. Nugenia, Technical Area 8, European Network for Inspection & Qualification; 2019.
24. Rummel WD. Nondestructive evaluation – a critical part of structural integrity. Procedia Eng. 2014;86:375–83. https://doi.org/10.1016/j.proeng.2014.11.051.
25. Vrana J, Kadau K, Amann C. Smart data analysis of the results of ultrasonic inspections for probabilistic fracture mechanics. In: 43rd MPA-Seminar, Stuttgart; 2017.
26. McGrath B. Programme for the assessment of NDT in industry, PANI 3. Health and Safety Executive, UK; 2008. p. 199. Available from: http://www.hse.gov.uk/research/rrpdf/rr617.pdf
27. Bertovic M, Ronneteg U. User-centred approach to the development of NDT instructions [SKB report R-14-06]. Oskarshamn: Svensk Kärnbränslehantering AB; 2014. Available from: http://www.skb.se/upload/publications/pdf/R-14-06.pdf
28. Reason J. Human error. New York: Cambridge University Press; 1990.
29. Müller C, Bertovic M, Pavlovic M, Kanzler D, Ewert U, Pitkänen J, et al. Paradigm shift in the holistic evaluation of the reliability of NDE systems. Mater Test. 2013;55:261–9. https://doi.org/10.3139/120.110433.
30. Bertovic M. Human factors in non-destructive testing (NDT): risks and challenges of mechanised NDT. Doctoral dissertation, Technische Universität Berlin, Berlin. BAM-Dissertationsreihe Band 145. Bundesanstalt für Materialforschung und -prüfung (BAM); 2016. Available from: https://opus4.kobv.de/opus4-bam/frontdoor/index/index/docId/36090
31. NEA. Operating experience insights into pressure boundary component reliability and integrity management. Topical report by the component operational experience, degradation and ageing programme (CODAP) group [NEA/CSNI/R(2017)3]. OECD Nuclear Energy Agency; 2017.
32. Reason J, Hobbs A. Managing maintenance error: a practical guide. Aldershot: Ashgate; 2003.
33. Bertovic M. Assessing and treating risks in mechanised NDT: a human factors study. ZfP-Zeitung. 2018;161:52–62. Available from: https://d-nb.info/1170388477/34
34. HSE. Reducing error and influencing behaviour (HSG48). 2nd ed. Health and Safety Executive, HSE Books; 1999. Available from: http://www.hse.gov.uk/pubns/priced/hsg48.pdf
35. Badke-Schaub P, Hofinger G, Lauche K. Human factors. In: Badke-Schaub P, Hofinger G, Lauche K, editors. Human Factors: Psychologie sicheren Handelns in Risikobranchen. 2. Auflage. Berlin/Heidelberg: Springer; 2012. p. 3–20.
36. Wang S, Wan J, Zhang D, Li D, Zhang C. Towards smart factory for industry 4.0: a self-organized multi-agent system with big data based feedback and coordination. Comput Netw. 2016;101:158–68.
37. Frank AG, Dalenogare LS, Ayala NF. Industry 4.0 technologies: implementation patterns in manufacturing companies. Int J Prod Econ. 2019;210:15–26. https://doi.org/10.1016/j.ijpe.2019.01.004.
38. Arthur B. The nature of technology: what it is and how it evolves. New York: Free Press; 2009.
39. Virkkunen I, Koskinen T, Jessen-Juhler O, Rinta-Aho J. Augmented ultrasonic data for machine learning. arXiv:1903.11399v1. 2019. https://doi.org/10.1007/s10921-020-00739-5.
40. Fuchs P, Kröger T, Garbe CS. Self-supervised learning for pore detection in CT-scans of cast aluminum parts. In: International symposium on digital industrial radiology and computed tomography; 2019. p. 1–10. Available from: https://www.ndt.net/search/docs.php3?id=24750.
41. Fuchs P, Kröger T, Dierig T, Garbe CS. Generating meaningful synthetic ground truth for pore detection in cast aluminum parts. In: 9th Conference on Industrial Computed Tomography (iCT 2019), 13–15 Feb 2019, Padova, Italy. p. 1–10. Available from: https://www.ndt.net/search/docs.php3?id=23730.
42. Hoffmann Souza ML, da Costa CA, de Oliveira Ramos G, da Rosa Righi R. A survey on decision-making based on system reliability in the context of industry 4.0. J Manuf Syst. 2020;56:133–56. https://doi.org/10.1016/j.jmsy.2020.05.016.
43. Chien CF, Hong T-y, Guo HZ. A conceptual framework for “industry 3.5” to empower intelligent manufacturing and case studies. Procedia Manuf. 2017;11:2009–17. https://doi.org/10.1016/j.promfg.2017.07.352.
44. Ozkan-Ozen YD, Kazancoglu Y, Kumar Mangla S. Synchronized barriers for circular supply chains in industry 3.5/industry 4.0 transition for sustainable resource management. Resour Conserv Recycl. 2020;161:104986. https://doi.org/10.1016/j.resconrec.2020.104986.
45. Vrana J, Singh R. NDE 4.0 – a design thinking perspective. J Nondestruct Eval. 2021;40:8. https://doi.org/10.1007/s10921-020-00735-9.
46. Philbeck T, Davis N. The fourth industrial revolution: shaping a new era. J Int Aff. 2019;72:17–22. https://www.jstor.org/stable/26588339. https://doi.org/10.2307/26588339.
47. Culot G, Nassimbeni G, Orzes G, Sartor M. Behind the definition of industry 4.0: analysis and open questions. Int J Prod Econ. 2020;226:107617. https://doi.org/10.1016/j.ijpe.2020.107617.
48. Beier G, Ullrich A, Niehoff S, Reißig M, Habich M. Industry 4.0: how it is defined from a sociotechnical perspective and how much sustainability it includes – a literature review. J Clean Prod. 2020;259. https://doi.org/10.1016/j.jclepro.2020.120856.
49. Alcácer V, Cruz-Machado V. Scanning the industry 4.0: a literature review on technologies for manufacturing systems. Eng Sci Technol Int J. 2019;22:899–919. https://doi.org/10.1016/j.jestch.2019.01.006.
50. Tesch da Silva FS, da Costa CA, Paredes Crovato CD, da Rosa Righi R. Looking at energy through the lens of industry 4.0: a systematic literature review of concerns and challenges. Comput Ind Eng. 2020;143:106426. https://doi.org/10.1016/j.cie.2020.106426.
51. Perez C. Technological revolutions and techno-economic paradigms. Camb J Econ. 2009;34:185–202. https://doi.org/10.1093/cje/bep051.
52. Venkatraman N. IT-enabled business transformation: from automation to business scope redefinition. Sloan Manag Rev. 1994;35:73–87.
53. Yeagley B, Madden M. Leveraging previous inline inspection assessment results. Pipeline Gas J. 2014;241:42–8.
54. Pavlovic M, Zoëga A, Zanotelli C, Kurz JH. Investigations to introduce the probability of detection method for ultrasonic inspection of hollow axles at Deutsche Bahn. Procedia Struct Integr. 2017;4:79–86. https://doi.org/10.1016/j.prostr.2017.07.002.
55. Chiachío J, Bochud N, Chiachío M, Cantero S, Rus G. A multilevel Bayesian method for ultrasound-based damage identification in composite laminates. Mech Syst Signal Process. 2017;88:462–77. https://doi.org/10.1016/j.ymssp.2016.09.035.
56. Leser PE, Warner JE, Leser WP, Bomarito GF, Newman JA, Hochhalter JD. A digital twin feasibility study (part II): non-deterministic predictions of fatigue life using in-situ diagnostics and prognostics. Eng Fract Mech. 2020;229:106903. https://doi.org/10.1016/j.engfracmech.2020.106903.
57. Huber A, Dutta S, Schuster A, Kupke M, Drechsler K. Automated NDT inspection based on high precision 3-D thermo-tomography model combined with engineering and manufacturing data. Procedia CIRP. 2020;85:321–8. https://doi.org/10.1016/j.procir.2019.10.002.
58. Trampus P, Krstelj V, Nardoni G. NDT integrity engineering – a new discipline. Procedia Struct Integr. 2019;17:262–7. https://doi.org/10.1016/j.prostr.2019.08.035.
59. Lingvall F, Stepinski T. Automatic detecting and classifying defects during eddy current inspection of riveted lap-joints. NDT E Int. 2000;33:47–55. https://doi.org/10.1016/S0963-8695(99)00007-9.
60. Liao TW, Li Y. An automated radiographic NDT system for weld inspection: part II – flaw detection. NDT E Int. 1998;31:183–92. https://doi.org/10.1016/S0963-8695(97)00042-X.
61. Tzafestas S. Concerning human-automation symbiosis in the society and the nature. Int J Factory Autom Robot Soft Comput. 2006;1:16–24. Available from: https://www.academia.edu/11883136/Concerning_human-automation_symbiosis_in_the_society_and_the_nature.
62. Romero D, Bernus P, Noran O, Stahre J, Fast-Berglund Å. The operator 4.0: human cyber-physical systems & adaptive automation towards human-automation symbiosis work systems. In: Nääs I, Vendrametto O, Mendes Reis J, Gonçalves RF, Silva MT, von Cieminski G, et al., editors. Advances in production management systems. IFIP international conference on advances in production management systems (APMS), Sep 2016, Iguassu Falls, Brazil. Cham: Springer International Publishing; 2016. p. 677–86. https://doi.org/10.1007/978-3-319-51133-7_80.
63. Gazzaneo L, Padovano A, Umbrello S. Designing smart operator 4.0 for human values: a value sensitive design approach. Procedia Manuf. 2020;42:219–26. https://doi.org/10.1016/j.promfg.2020.02.073.
64. Nelles J, Kuz S, Mertens A, Schlick CM. Human-centered design of assistance systems for production planning and control: the role of the human in Industry 4.0. In: 2016 IEEE international conference on industrial technology (ICIT). IEEE; 2016. p. 2099–104. https://doi.org/10.1109/ICIT.2016.7475093.
65. Enkvist J, Edland A, Svenson O. Human factors aspects of non-destructive testing in the nuclear power context. A review of research in the field [SKI report 99:8]. Stockholm: Swedish Nuclear Power Inspectorate (SKI); 1999.
66. D’Agostino A, Morrow S, Franklin C, Hughes N. Review of human factors research in nondestructive examination. Washington, DC: Office of Nuclear Reactor Regulation, U.S. Nuclear Regulatory Commission; 2017. Available from: https://www.nrc.gov/docs/ML1705/ML17059D745.pdf
67. Bertovic M. A human factors perspective on the use of automated aids in the evaluation of NDT data. In: 42nd annual review of progress in quantitative nondestructive evaluation. AIP Conference Proceedings. 2016;1706:020003 (1–16). https://doi.org/10.1063/1.4940449.
68. Bertovic M. Automation in non-destructive testing: new risks and risk sources. In: Proceedings of the 55th annual conference of the British Institute of Non-Destructive Testing, Nottingham, UK, 12–14 Sept 2016, CD-ROM; 2016. p. 1–11.
69. Parasuraman R, Riley V. Humans and automation: use, misuse, disuse, abuse. Hum Factors J Hum Factors Ergon Soc. 1997;39:230–53. https://doi.org/10.1518/001872097778543886.
70. Parasuraman R, Manzey D. Complacency and bias in human use of automation: an attentional integration. Hum Factors J Hum Factors Ergon Soc. 2010;52:381–410. https://doi.org/10.1177/0018720810376055.
71. Mosier KL, Skitka LJ. Human decision makers and automated decision aids: made for each other? In: Parasuraman R, Mouloua M, editors. Automation and human performance: theory and applications. Mahwah: Lawrence Erlbaum Associates Ltd.; 1996. p. 201–20.
72. Endsley MR, Kiris EO. The out-of-the-loop performance problem and level of control in automation. Hum Factors J Hum Factors Ergon Soc. 1995;37:381–94. https://doi.org/10.1518/001872095779064555.
73. Danielsson O, Syberfeldt A, Holm M, Wang L. Operators perspective on augmented reality as a support tool in engine assembly. Procedia CIRP. 2018;72:45–50. https://doi.org/10.1016/j.procir.2018.03.153.
74. Kazancoglu Y, Ozkan-Ozen YD. Analyzing workforce 4.0 in the fourth industrial revolution and proposing a road map from operations management perspective with fuzzy DEMATEL. J Enterp Inf Manag. 2018;31:891–907. https://doi.org/10.1108/JEIM-01-2017-0015.
75. Hecklau F, Galeitzke M, Flachs S, Kohl H. Holistic approach for human resource management in industry 4.0. Procedia CIRP. 2016;54:1–6. https://doi.org/10.1016/j.procir.2016.05.102.
76. Romero D, Noran O, Stahre J, Bernus P, Fast-Berglund Å. Towards a human-centred reference architecture for next generation balanced automation systems: human-automation symbiosis. In: Umeda S, Nakano M, Mizuyama H, Hibino H, Kiritsis D, von Cieminski G, editors. Advances in production management systems: innovative production management towards sustainable growth. APMS 2015, IFIP Advances in Information and Communication Technology. Cham: Springer; 2015. p. 556–66. https://doi.org/10.1007/978-3-319-22759-7_64.
77. DIN EN ISO 9241-210. Ergonomics of human-system interaction – part 210: human-centred design for interactive systems. English translation of DIN EN ISO 9241-210:2020-03. DIN Deutsches Institut für Normung e.V., Beuth Verlag GmbH; 2020.
78. ISO 9241-11. Ergonomics of human-system interaction – part 11: usability: definitions and concepts. Geneva: International Organization for Standardization (ISO); 2018.
79. Nielsen J. Usability engineering. London: AP Professional; 1993.
80. Onnasch L. Crossing the boundaries of automation – function allocation and reliability. Int J Hum Comput Stud. 2015;76:12–21. https://doi.org/10.1016/j.ijhcs.2014.12.004.
81. Hancock PA, Jagacinski RJ, Parasuraman R, Wickens CD, Wilson GF, Kaber DB. Human-automation interaction research: past, present, and future. Ergon Des Q Hum Factors Appl. 2013;21:9–14. https://doi.org/10.1177/1064804613477099.
82. Abbass HA. Social integration of artificial intelligence: functions, automation allocation logic and human-autonomy trust. Cogn Comput. 2019;11:159–71. https://doi.org/10.1007/s12559-018-9619-0.
83. Lee JD, See KA. Trust in automation: designing for appropriate reliance. Hum Factors J Hum Factors Ergon Soc. 2004;46:50–80. https://doi.org/10.1518/hfes.46.1.50_30392.
84. Manzey D, Reichenbach J, Onnasch L. Human performance consequences of automated decision aids: the impact of degree of automation and system experience. J Cogn Eng Decis Making. 2012;6:57–87. https://doi.org/10.1177/1555343411433844.
85. Stanton NA, Salmon P, Walker G, Baber C, Jenkins D. Human factors methods: a practical guide for engineering and design. Aldershot: Ashgate; 2013.
86. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989;13:319. https://doi.org/10.2307/249008.
87. Lin C-C. Exploring the relationship between technology acceptance model and usability test. Inf Technol Manag. 2013;14:243–55. https://doi.org/10.1007/s10799-013-0162-0.
88. Venkatesh V, Davis FD. A theoretical extension of the technology acceptance model: four longitudinal field studies. Manag Sci. 2000;46:186–204. https://doi.org/10.1287/mnsc.46.2.186.11926.
89. Chakraborty D, McGovern ME. NDE 4.0: smart NDE. In: 2019 IEEE international conference on prognostics and health management (ICPHM 2019); 2019. https://doi.org/10.1109/ICPHM.2019.8819429.
90. Kinzel H. Industry 4.0 – where does this leave the human factor? J Urban Cult Res. 2017;15:70–83. https://doi.org/10.14456/jucr.2017.14.
91. Gerdenitsch C, Korunka C. Digitale Transformation der Arbeitswelt (digital transformation of the working world). Berlin/Heidelberg: Springer; 2019.
“Moore’s Law” of NDE
11
Norbert Meyendorf
Contents
Introduction
Moore’s Law and its Consequences
Nanotechnology and NDE Requirements
A New Type of NDE Techniques for the “Nano-World”
  Barkhausen Noise Microscopy
  Nano-Raman Microscopy
  X-Ray Nano-Tomography: An Example for Extremely High Resolution 3D Imaging
  Acoustic Microscopy
  Modifications of the Atomic Force Microscopy Method
Detectability or Resolution of NDE Techniques on a Historic Scale
  The Resolution of an NDE Technique
Future Vision
Summary
References
Abstract
Smart Factories, the Internet of Things, big data, digital twins, and many other new concepts all require today’s computers, with unbelievable speed, memory, and computational power, all thanks to the enormous and still ongoing pace of progress in microelectronics. NDE methods have followed the trend of Moore’s law. This will be discussed from a historical perspective.

Keywords
Microelectronics · Structural size · Micro NDE · Nano NDE · Moore’s law · NDE resolution
Introduction

Smart Factories, the Internet of Things, big data, digital twins, and many other new concepts all require today’s computers, with unbelievable speed, memory, and computational power, all thanks to the enormous and still ongoing pace of progress in microelectronics.

After the invention of the first transistor in 1947, engineers realized that semiconductor technologies make it possible to combine several transistors on one chip. At that time, nobody was making capacitors or resistors out of semiconductors. If that could be done, then the entire circuit could be built out of a single crystal, making it smaller and much easier to produce. Jack Kilby had an idea for a solution to this problem that he called the monolithic idea. He listed all the electrical components that could be built from silicon: transistors, diodes, resistors, and capacitors [1]. However, on April 25, 1961, the patent office awarded the first patent for an integrated circuit to Robert Noyce (U.S. patent 2981877) [2]. Kilby’s application was filed 5 months earlier than Noyce’s, but it was still being reviewed, and the patent was only granted in June 1964 [3]. Today both scientists are seen as inventors of the integrated circuit (IC).

In 1965, Gordon Moore published the observation that the transistor density in integrated circuits doubles about every two years. Moore postulated a linear relationship, on a logarithmic scale over time, between device complexity, higher circuit density, and reduced costs [4]. With increasing complexity, integrated circuits became more powerful, and in 1971 Intel offered the first commercially produced microprocessor, the Intel 4004, based on a 4-bit central processing unit [5] (Fig. 1).

Moore’s law remained valid for at least 55 years. In the author’s opinion, everybody believed in the law, so every one of the big players in the microelectronics business knew in advance where the competitors expected to be in 1, 3, or 5 years and that one had to follow this trend. Only the first one on the market made the big money. That created enormous pressure, and trillions of dollars were invested in the development of semiconductor technologies.

Fig. 1 Intel 4004 microprocessor
The general trend throughout the years was to make smaller, faster, better, and more affordable components and devices.

At present, Moore’s law is encountering physical limits. The size of structures on chips is in the range of the size of atoms [6]. The transistors switch only a few electrons. The speed of electric pulses on chips limits the processor speed. Most forecasters, including Gordon Moore himself, expect Moore’s law will end by around 2025 [7]. In April 2005, Gordon Moore stated in an interview that the projection cannot be sustained indefinitely: “It can’t continue forever. The nature of exponentials is that you push them out and eventually disaster happens.” He also noted that transistors would eventually reach the limits of miniaturization at atomic levels. To advance further, new innovative concepts are required, for example, creating 3D structures and stacking chips, or making electronics faster by the use of strained silicon. In the semiconductor community, this is called “More than Moore.” But this technology is challenging, and a new kind of NDE, “Nano-NDE,” becomes a significant tool for technological development. This will be discussed later in this work.
Moore’s Law and its Consequences

As mentioned above, Gordon Moore stated that the transistor density in an integrated circuit doubles about every two years. In other words, we expect a linear trend if the number of transistors is plotted on a logarithmic scale over the years. This was valid for almost 55 years, with dramatic consequences [8]. High-power microprocessors and memory chips became possible and affordable. During the processing of one wafer, a large number of complex structures can be created simultaneously during one sequence of masking, etching, and doping processes. The structural dimensions had to decrease because of the increasing number of transistors and other electronic components and the limited size of the chips. This happened in combination with the increase in the size of wafers, making chips significantly cheaper (Fig. 2).
Fig. 2 Size of wafers on an annual scale
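As a simple worked illustration of the doubling rule (the formula is implied by, but not written out in, the chapter), the transistor count N after time t can be written as

\[ N(t) = N_0 \cdot 2^{(t - t_0)/T}, \qquad T \approx 2\ \text{years}. \]

Starting from the Intel 4004 of 1971 with roughly 2300 transistors, 50 years of doubling every 2 years gives N(2021) ≈ 2300 · 2^{25} ≈ 7.7 × 10^{10}, which matches the order of magnitude of the largest chips around 2021.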
These developments resulted in today’s ability to manufacture affordable terabyte and even exabyte memories, and microprocessors with extremely high speed (Fig. 3). It is anticipated that computers will soon have capabilities comparable to those of the human brain. In conjunction with the development of the electronics, packaging also became more complex. A 64-bit processor chip requires many more contacts in a smaller space than, for example, Intel’s 4-bit 4004 chip. NDE stands to benefit from these powerful electronics, a process that will culminate in NDE 4.0. However, electronic systems are also used in safety-relevant applications, for example, in cars or aircraft. This is also a challenge for a new type of NDE (Micro- and Nano-NDE). Due to the small structures and the high number of components, it is not realistic to perform a 100% inspection of such components.
Fig. 3 Trend of structural size, energy density, cost of DRAMs, and cost of computer power over the years between 1970 and 2015
However, NDE is an important tool to support technology development. This will be described below.
Nanotechnology and NDE Requirements

Making things smaller, faster, cheaper, and better is not only a trend in the semiconductor industry but a trend in all engineering branches. In the 1990s, we saw a boost in microtechnology, the production of technical instruments on the micrometer scale, such as micromechanical devices. Examples are micropumps for fluidic systems and micromirror arrays, devices that have multiple applications. Other examples are:
• Electrostatic bending actuators
• Pumps and valves for microfluidics
• MEMS-based micropositioning platforms
• MEMS-based headphones
• MEMS ultrasound transducers [9]
The Micro-Electro-Mechanical Systems (MEMS) technology makes it possible to eliminate the membrane used in conventional ultrasound transducers; instead, microscopically small bending actuators are used, which are made to deflect by a signal and vibrate in the chip plane. In order to generate sound, these bending actuators are arranged in sound chambers. The resulting sound exits the sound chambers via inlet and outlet slots [9].

During the last century, a typical measuring accuracy in classic engineering applications was 0.1 millimeter (or 100 micrometers). A typical NDE technique of that time had to be sensitive down to this size. Now, however, one micrometer or better resolution in design is required, and that also requires much better resolution or detectability in NDE. As a consequence, microfocus radioscopy and acoustic microscopy became the methods of choice. Figure 4 lists NDE methods and their applicability in various resolution ranges (Fig. 5).

At the beginning of the present century, nanotechnology appeared on the horizon. That had two aspects:

1. The creation of nanostructured materials with extraordinary properties. In the millennium issue of Acta Materialia, one of the fathers of nanotechnology, H. Gleiter, summarized the concepts [11]. Per its definition, a nanomaterial is a material with at least one of its structural dimensions smaller than 100 nm (very thin layers fall under this definition). Typical examples are carbon nanotubes, graphene layers, or composite materials with integrated nanostructures such as nanotubes or wires.
2. The manufacturing of nanometer-size manmade structures and devices down to the atomic level. An important aspect was the combination or integration with electronics and biological structures.
Fig. 4 Structural dimensions, type of material parameters or defects, and NDE methods [10]:
• ~1 mm – macroscopic defects (cracks, pores, inclusions, thickness loss): ultrasonic testing, interferometry, holography, thermography, X-ray imaging, eddy current
• ~1 µm – µ-defects (initial corrosion pits, µ-cracks, µ-pores) and microstructure (grains/subgrains, grain boundaries, precipitations, phases/texture, voids): microscopic thermography, thermal wave microscopy, WLIM (z direction 3 nm), acoustic microscopy, µ-radiography, µ-CT
• ~1 nm – crystalline lattice defects (dislocations, vacancies, microvoids, GP zones, interstitials/H): SEM, TEM, SPM, nonlinear acoustics, thermomechanics/acoustics, positron annihilation, acoustic reverberation
Fig. 5 Scale of structures and typical characterization techniques in the micro- and nano-world
A New Type of NDE Techniques for the “Nano-World”

Techniques that are typically only used in physics labs can now be considered as nondestructive testing methods for structures in the nanometer range. No cutting or sample preparation might be required. For example, a complete microchip can be examined in a scanning electron microscopy (SEM) test. The limits between physical
analytics and NDE are becoming blurred. If, for example, transmission electron microscopy (TEM) is compared to X-ray nano-radioscopy, both techniques have different characteristics, but both are important tools for Nano-NDE. The resolution of a TEM can be as low as 0.1 nm or better (less than an atom’s diameter). For X-ray microscopy, this is about 200 times higher (20 nm); however, the soft X-rays of such microscopes can penetrate specimens of approximately 10 μm thickness, whereas the TEM can only penetrate very thin layers up to 100 nm. In the following sections, some examples of high-resolution NDE methods applicable to nanomaterials and nanostructures are introduced:
Barkhausen Noise Microscopy

Barkhausen noise microscopy is a high-resolution imaging technique used to record local residual stresses in thin magnetic layers. Barkhausen noise can be detected by a coil if the hysteresis loop of a ferromagnetic material is cycled. The noise is generated by the stepwise change of the magnetic domain structure by “Barkhausen jumps” that create a wide spectrum of pulses in a detecting coil. Barkhausen noise is used to characterize the microstructure and residual stresses in ferromagnetic materials, because it is the result of the interaction of the magnetic domain structure with the microstructure and local stresses in the material (Fig. 6).

Barkhausen and Eddy Current Microscopy (BEMI) generates images of the magnetic properties and residual stress distributions in magnetic layers. That is of high importance for magnetic data recording and for magnetic sensors (Fig. 7).

Figure 8 shows the distribution of coercivity of a soft magnetic layer measured by an optical technique compared to Barkhausen microscopy. This property can change dramatically for very thin layers, because the walls between magnetic domains transform from Bloch walls, with a perpendicular (out-of-plane) rotation of the magnetization vector, to Neel walls, with an in-plane rotation of the magnetization vector (see Figs. 9 and 10). Therefore, it is of importance to characterize very thin layers. Barkhausen microscopy was capable of detecting noise signals for layers as thin as 10 nm.
Fig. 6 Magnetic hysteresis loops (left) and Barkhausen noise amplitude (right) for an as-deposited (black) and an annealed (red) thin ferromagnetic layer (Sendust) [12]
Fig. 7 Setup of Barkhausen and Eddy Current Microscopy (BEMI) for testing of magnetic layers
Fig. 8 Comparison of local coercivity of a thin ferromagnetic layer by Barkhausen microscopy HCM (left) and a magneto-optical technique Hc (right)
Fig. 9 Barkhausen Noise amplitude (compare to Fig. 6) for magnetic layers of different thickness
Fig. 10 Coercivity for thin magnetic layers by Barkhausen microscopy HCM compared to an optical technique HC
In this case, the signal is reduced to one pulse that might be created by a Neel wall moving below the sensor [13].
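To make the measurement principle tangible, the following minimal sketch shows how a Barkhausen noise profile and a coercivity estimate HCM could be extracted from a digitized pickup-coil signal while the tangential field H is cycled. This is not the actual BEMI processing chain; the synthetic data, the filter band, and all variable names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10e6                              # sampling rate [Hz] (assumed)
t = np.arange(0, 0.02, 1 / fs)         # one 50 Hz magnetization cycle
H = 20 * np.sin(2 * np.pi * 50 * t)    # tangential field H(t) [A/cm]

# Synthetic coil voltage: broadband noise bursts that peak where |H| passes
# the (assumed) coercivity of 2 A/cm on each branch of the hysteresis loop
rng = np.random.default_rng(0)
burst = np.exp(-((np.abs(H) - 2.0) ** 2) / 0.5)
coil = burst * rng.normal(0.0, 1.0, t.size) + 0.01 * rng.normal(0.0, 1.0, t.size)

# Band-pass to separate the Barkhausen noise from the slow magnetization signal
b, a = butter(4, [1e3, 1e6], btype="band", fs=fs)
bn = filtfilt(b, a, coil)

# Rectified, smoothed envelope M(H); the field at the envelope maximum serves
# as the Barkhausen coercivity estimate HCM (compare Figs. 6 and 10)
env = np.convolve(np.abs(bn), np.ones(2000) / 2000, mode="same")
H_CM = abs(H[np.argmax(env)])
print(f"estimated H_CM = {H_CM:.2f} A/cm")   # close to the assumed 2 A/cm
```

Plotting the envelope against H reproduces the bell-shaped noise profiles of Fig. 6; repeating the analysis at each scan position yields maps such as the HCM image in Fig. 8.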
Nano-Raman Microscopy

One method used to increase the speed of microprocessors was to create strained silicon by epitaxial growth of silicon on a silicon-germanium substrate. The resulting larger distance between lattice atoms can increase the speed of conduction electrons, and hence of electric pulses on the chip, so that up to 30% higher operating frequencies become possible. Development of this material required an NDE technique to characterize stress in silicon on the nanometer scale. Silicon is Raman active, so that stresses result in a frequency shift of the Raman bands of scattered laser light [14] (Fig. 11).

With conventional microscopy, the resolution is limited to about half the wavelength (Abbe diffraction limit). This is too large for today’s nanoelectronic structures. An attempt to overcome the Abbe diffraction limit is Nearfield Scanning Optical Microscopy (NSOM). An optical tip with a diameter much less than the wavelength is used. This tip is moved over the surface in close proximity. For this setup, the resolution depends on the tip size and the distance of the tip to the surface, which can be as low as a few nanometers. With a patented technique that uses fully metalized tips to generate plasmons, the theoretical resolution limit is only a few nanometers (based on modeling) [15]. The challenge is to develop a technology to manufacture extremely sharp optical fiber tips (Fig. 12).
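As a numerical anchor for the stress–shift relation mentioned above (the coefficient is not given in the chapter; it depends on crystal orientation and the assumed stress state, and the value used here is one frequently quoted in the literature for uniaxial stress in (100) silicon measured in backscattering):

\[ \sigma \approx -435\ \text{MPa} \cdot \Delta\omega\ [\text{cm}^{-1}], \]

where Δω is the shift of the ~520 cm⁻¹ silicon Raman band. A tensile stress of 1 GPa would thus shift the band by roughly −2.3 cm⁻¹.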
Fig. 11 Stress sensitivity of the Raman Stokes band in bent silicon [14]
Fig. 12 NSOM setup for Nano-Raman spectroscopy [15]
X-Ray Nano-Tomography: An Example for Extremely High Resolution 3D Imaging

In conventional X-ray imaging techniques, the image resolution is limited by the focal spot of the X-ray source. For standard applications in industry, spot sizes are in the range of millimeters. Conventional refractive lenses like those used in optics are not available for X-rays, because the refractive index is close to 1. Therefore, geometric magnification is used to image smaller objects. To get higher magnifications, X-ray sources with small spots of a few micrometers are used. However, reducing the spot size comes at a price: the applicable voltages are lower, and the tube current that determines the X-ray intensity has to be reduced. This is required in order to limit the very high energy concentration in the focal spot. Spot sizes significantly below 1 square micrometer are not possible, due to the scatter of the electrons in the target material, even with a much better focused electron beam. As an example, the imaging of solder ball grid array (BGA) structures with a resolution of 1 micrometer was used for the development of X-ray sensors with
50-micrometer pixel size (Fig. 13). This is the physical limit using conventional microfocus techniques.

However, by applying wave physics, X-ray lenses based on Fresnel optics can be created. A key problem is to manufacture nanometer-size structures with a high aspect ratio from a highly absorbing material. Zone plates are fabricated from materials with high atomic numbers (high-Z materials), for example, gold (Fig. 14). Techniques suitable for the production of such lenses are electron beam lithography, reactive ion etching, and electroplating. Focusing efficiencies of 10–30% are currently achievable (depending on the aspect ratio) [17, 18]. Such structures can be used to build X-ray lenses appropriate for X-ray microscopes and nano-3D X-ray computed tomography with resolutions of 30 nm or better (Fig. 15).

Fig. 13 Experimental BGA structure for an X-ray sensor imaged with X-ray microtomography. The resolution in this case is approximately 1 micrometer [16]
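For orientation, the microfocus limit quoted above follows from standard projection-radiography relations (added here for clarity, not from the chapter): with source-to-detector distance SDD and source-to-object distance SOD, the geometric magnification and the focal-spot-induced unsharpness are

\[ M = \frac{SDD}{SOD}, \qquad U_g = f\,(M - 1), \]

where f is the focal spot size. Referred back to the object plane, the blur is U_g/M = f(1 − 1/M) ≈ f for large M, which is why a roughly 1 µm focal spot limits conventional microfocus imaging to about 1 µm object resolution.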
Fig. 14 Zone plate for focusing of soft X-rays
Fig. 15 Setup for X-ray microscopy and tomography [18]
The limitation is that this system requires monochromatic radiation with a wavelength that matches the structures of the zone plates. Only very low energies, for example, characteristic radiation from Cu or Cr targets, can be used with lenses that can currently be manufactured. This limits the thickness of structures that can be penetrated, even for low-absorbing materials like silicon or polymers. However, this can be a very valuable tool for the development of technologies for today’s high-end microprocessors (“More than Moore”). Chips are stacked and connected by through-silicon vias (TSVs) to increase the power and speed of the electronic components. These vias must be free of voids and other nano-defects to ensure high reliability (see examples in Fig. 16) [17].
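A standard zone-plate relation (not stated in the chapter, added for orientation) links the achievable resolution to the fabrication challenge described above: the Rayleigh resolution is set by the width of the outermost zone,

\[ \delta \approx 1.22\,\Delta r_N, \]

so a resolution of 30 nm requires outermost zone widths of roughly 25 nm, fabricated from high-Z material at a high aspect ratio.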
Acoustic Microscopy

Conventional acoustic microscopy (Scanning Acoustic Microscopy, SAM) uses ultrasound transducers in immersion testing. Typical frequencies of the ultrasound are below 100 MHz; the corresponding wavelength in water is 15 micrometers and above [20]. High Frequency Scanning Acoustic Microscopy (HF SAM) can use frequencies up to 2 GHz. Coupling of the high-aperture acoustic lenses in front of the transducers to the material surface requires only a single water droplet. The absorption of ultrasound increases with frequency; for HF SAM, the absorption is so high that only the material surface and near-surface regions can be examined. The critical angles (for longitudinal waves) are usually included in the sound beam due to the high aperture of the lenses used. As a result, Rayleigh waves are generated by these lenses. The wavelength in water for such high frequencies (1 to 2 GHz) can range between 1500 and 700 nm. This is comparable to visible
Fig. 16 TSVs connect several chip layers (left) and void localization in the center of a TSV (size 0.6 μm) imaged with X-ray nano tomography (right) [19]
Fig. 17 Schematic of an HF SAM and types of interaction of Rayleigh waves with surface defects
light. According to the definition of resolution in optics (Abbe limit), the resolution can be roughly estimated to be half of the wavelength. This results in a resolution of several hundred nm. However, the detectability for surface cracks can be significantly better [21] (Fig. 17). The reason is that the main contribution to the image is the interaction of Rayleigh waves with surface defects. A microcrack with a gap of only a few nanometers will modify the propagation of the wave and create contrast in HF SAM.

In a detailed study on fatigue in stainless steel, it was possible to gather statistics about microcracks at the surface as a function of the fatigue life (in percent). The smallest detected cracks had a length of only 10 micrometers (see Fig. 18). Almost none of these cracks were visible in scanning electron microscopy (SEM) images when attempting to verify the HF SAM results with SEM. The explanation is
Fig. 18 Microcracks at the surface as a function of the fatigue life [21]
that the SEM averages the signal over a focal spot of several tens of nanometers, whereas the HF SAM signal detects the separation at the surface created by the cracks, which modifies the surface waves. This means the oscillation amplitude of the atoms at the free crack surfaces is important (see Fig. 17b). That might be in the range of 1 nm (at 1 GHz).
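The numbers quoted in this section can be checked with the elementary wavelength relation (standard acoustics, added for clarity):

\[ \lambda = \frac{c}{f}, \qquad c_{\text{water}} \approx 1480\ \text{m/s}, \]

which gives λ ≈ 15 µm at 100 MHz and λ ≈ 1.5–0.7 µm at 1–2 GHz. The Abbe-type estimate d ≈ λ/2 then yields the several hundred nanometers of resolution stated above, while the Rayleigh-wave mechanism explains detectability well below this limit.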
Modifications of the Atomic Force Microscopy Method

Atomic force microscopy (AFM) measures the deflection of a sharp tip with a typical radius of 10 to 40 nanometers while scanning across the surface of the test object. The resulting image displays the surface topography. Several modifications are published in the literature that display magnetic interactions (MFM) or electrostatic interactions (EFM), or that use a miniaturized thermocouple at the tip (SThM). These techniques use interactions with the material that are typical of those applied in NDE and are therefore considered to be Nano-NDE techniques.

AFM modifications that exploit the elastic interaction between the tip and the material surface are called Ultrasonic Force Microscopy (UFM) [22] or Atomic Force Acoustic Microscopy (AFAM) [23]. A typical UFM setup is shown in Fig. 19. The UFM creates simultaneously a topography image (tip deflection, as in classical AFM) and an image of the elastic interaction of the tip with the surface. For this, the material is stimulated with an acoustic excitation; the acoustic signal is created by a transducer below the specimen. Typical frequencies used are between several hundred kHz (UFM) and several MHz (AFAM). In both cases, the wavelengths are significantly longer than the tip diameter of, for example, 30 nm. The surface can be considered to be like a vibrating drum whereby the tip is forced to move up and down. The oscillation amplitude and phase shift depend upon the elasticity at the surface and in a thin layer below, due to the elastic stress field (Fig. 20).
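The origin of the elastic contrast can be sketched with a textbook Hertzian contact relation (added here for illustration; the chapter itself does not quantify the interaction): for a spherical tip of radius R pressed onto a flat surface with static force F, the contact stiffness is

\[ k^* = \sqrt[3]{6\,E^{*2} R\,F}, \qquad \frac{1}{E^*} = \frac{1 - \nu_t^2}{E_t} + \frac{1 - \nu_s^2}{E_s}, \]

with E and ν the Young’s moduli and Poisson ratios of tip (t) and sample (s). A locally stiffer region increases k* and thereby shifts the amplitude and phase of the forced tip vibration, which is the contrast mechanism exploited in UFM and AFAM.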
Fig. 19 Setup for UFM
Fig. 20 Tip–surface interaction for UFM or AFAM
In both the AFM and UFM modes, the images have a resolution determined by the cantilever. However, due to the effect explained above for HF SAM, the acoustic images show significantly more detail. The AFM image in Fig. 21 (left) shows a topographic height difference between the surfaces to the right and left of the crack. In the UFM image in Fig. 21 (right), this topography does not affect the image, and more details are visible: dislocations and the deformation fields that are created at the crack tip by increasing the load (stretch zone). These typically show an angle of 45 degrees relative to the crack.
Fig. 21 AFM (left) and UFM (right) image of a propagating crack in an aluminum alloy (AA2023) at a crack stopping point. (Scan size 20 × 20 μm, excitation frequency 146.3 kHz)
Fig. 22 AFM (left)/UFM (right) image of a crack in AA 2023 that is deflected by precipitations (scan size 800 × 800 nm, excitation frequency 146.3 kHz)
Figure 22 shows a crack at a higher magnification. Some larger precipitations that are typical for this aluminum alloy are visible in the UFM image (right). It can be clearly seen that the crack is deflected by the precipitations and even separates in front of the coarse precipitations. The crack propagation direction is from the top of the image to the bottom. The right crack wing is probably formed due to the stress field created by the precipitations and is then stopped by the smaller upper precipitation.
These are exciting results obtained from a material block under atmospheric conditions. Such high-resolution images showing the precipitations in aluminum alloys are usually only possible using transmission electron microscopy (TEM). But due to the very thin foils that can be imaged (about 100 nm thickness), such crack propagation experiments are impossible in the TEM.
Detectability or Resolution of NDE Techniques on a Historic Scale

As discussed above, Moore’s law has predicted an exponential decrease in structural dimensions in the semiconductor industry. But on a longer timescale, this trend to miniaturization is not limited to semiconductors and can also be seen in mechanical engineering. Smaller structural sizes and higher reliability requirements also required NDE techniques with better sensitivity or resolution. This encouraged the development of new NDE techniques. Figure 23 is an attempt to indicate how the structural sizes that can be detected by an NDE technique decreased with the time of the appearance of new techniques. This can only be a very rough comparison, because various NDE techniques have different definitions of resolution. Sometimes the detectability can be much smaller than the resolution, as explained above for ultrasonic testing. However, a clear trend toward higher resolution and better detectability of defects over time is obvious and can be compared to Moore’s law. It follows the industrial trend to higher precision and miniaturization.
Fig. 23 “Moore’s law of NDE”
The Resolution of an NDE Technique

The Modulation Transfer Function (MTF) is used to characterize the resolution of optical and X-ray imaging systems. The MTF is a plot of the signal modulation amplitude over the structural size (e.g., line pairs per millimeter) of a high-contrast modulation structure. Figure 24a shows a test star for an X-ray microscope; the structural size increases with the distance from the center. Some typical MTFs are illustrated in Fig. 24b. For optical techniques, the resolution is the structural size where the MTF takes off from the background. A similar definition is applied for X-ray microscopy using Fresnel lenses. This is useful because it indicates the smallest structure that can be visible in the image. In conventional radioscopy, the duplex wire is used to define the resolution. In this case, the resolution is considered to be the wire pair size where the contrast dip between the two wires drops to 20% of the maximum contrast of the two wires.

A completely different situation appears if the technique is used to measure a property of a material, such as the coercivity or local stresses in a magnetic layer with Barkhausen microscopy. In this case, the dimensions of the test object should not affect the measurement result. For eddy current measurements, a 3% signal drop is considered to be acceptable. That is why for Barkhausen microscopy the resolution is described as 20 micrometers in the literature. However, imaging of ferrite structures of less than 2 micrometers was possible with the instrument. This discussion shows that it is very difficult to compare the resolution of different NDE techniques. And, as discussed above, in some cases defects can be detected even if they are significantly smaller than the resolution.
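In formula form (standard imaging definitions, added for clarity), the modulation of a periodic pattern and the MTF at spatial frequency f are

\[ m = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}, \qquad \mathrm{MTF}(f) = \frac{m_{\text{image}}(f)}{m_{\text{object}}(f)}, \]

so the duplex-wire criterion above corresponds to reading off the spatial frequency at which the measured modulation between the wire images has fallen to 0.2 of the full wire contrast.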
Future Vision

As mentioned above, Moore’s law will not be valid forever, because the structures on the chips approach the atomic level. In parallel, NDE techniques also approach physical limits. In addition, higher resolutions might not be desired. Indirect techniques are available that do not image defects but measure material structures and properties by characterizing atomic defects. Most of these techniques are known from physics laboratories and are finding their way into the NDE community. An example is positron annihilation, where the author was able to demonstrate that this technique can detect lattice defects such as vacancy clusters or nano-precipitations (GP zones) of atomic size [24].

The major trend in recent NDE research was to develop techniques with higher resolution and sensitivity, to detect smaller defects and structures. But the miniaturization of electronics and MEMS also allows the production of smaller, lighter, and more powerful portable NDE devices (micro-NDE equipment), which can then be combined with small UAVs to fly through systems and assemblies: submarines, space stations, and other high-value assets.

NDE research has to follow the general trend in industry and in society. This will require making smart decisions about the lifecycle of unique objects, individually
Fig. 24 Resolution star (a) and two typical MTFs (b)
tailored to the consumer’s needs. This means a shift from statistics-based POD concepts toward individual decisions, similar to medical diagnosis (Fig. 25). The future trend will be the introduction of artificial intelligence to support human decisions and plan efficient NDE procedures. Figure 26 illustrates the author’s opinion about the technologies that were significant for the previous two centuries and for the present one. It is the author’s opinion that the present century will be characterized by capabilities that are enabled by various forms of big-data processing and artificial intelligence. These changes will be represented by:
Fig. 25 Learning from medicine [25]
Fig. 26 Technology in the last 300 years and important inventions relevant to this development
• Digitizing of all (or much) of our information so that it can be stored effectively, forever.
• Information networks that allow not only real-time telecommunication but also remote control of processes and activities everywhere in the world.
• Smart robots that can interact with humans, including much beyond keyboard control.
• Machines that have the ability to learn and make decisions on their own.
• Small handheld devices equipped with numerous sensor systems and the ability to communicate in real time with powerful data banks and data analysis systems. These tablets or cellphones can act as NDE devices and human-machine interfaces. Due to the availability of these devices to anybody, this can trigger new applications and business cases for NDE.
Summary

Due to the continuing increase in the miniaturization of electronic circuits, in the past successfully described by Moore’s law [4], together with a significant decrease in the price and energy consumption of circuits, it can be expected that within just a few years society will use computers that have the computational power of the human brain. Even at present, many traditionally human functions and decisions can be replaced by computers, but in the future advanced “smart” computers will be able to learn, to adapt, and to respond to new situations.

For NDE, this means that there is the potential, at least for some routine inspections, to have initial characterizations of parts and potentially initial data evaluations performed by smart inspection robots. Such smart robots can enable new applications, for example, operating in harsh environments. These capabilities can even be remote-controlled from anywhere in the world. New inspector training will be required to operate these systems, and new standards will be required, for example, for remote NDE.

New tools like the Internet of Things, big data, digital twins, and artificial intelligence will provide new opportunities to perform NDE in the lab, in the field, and also remotely. This will also raise new questions: how to integrate and execute NDE in production in a cyber-physical environment, and how to process, analyze, and protect NDE data. This will challenge the NDE staff, who can apply powerful tools for decision making, like advanced NDE modeling tools in conjunction with enhanced visual techniques and augmented reality.
References

1. Patent US3138743: Miniaturized electronic circuits. Published May 23, 1964. Inventor: Jack S. Kilby.
2. Patent US2981877: Semiconductor device and lead structure. Published April 25, 1961. Inventor: Robert N. Noyce.
3. https://en.wikipedia.org/wiki/Invention_of_the_integrated_circuit
4. Moore GE. Cramming more components onto integrated circuits. Electronics. 1965:114–7.
5. https://de.wikipedia.org/wiki/Intel_4004
6. Lee H, et al. Sub-5nm all-around gate FinFET for ultimate scaling. Symposium on VLSI Technology; 2006. p. 58–59. https://doi.org/10.1109/VLSIT.2006.1705215. ISBN 978-1-4244-0005-8.
7. https://uk.pcmag.com/processors/41844/gordon-moore-predicts-10-more-years-for-moores-law
8. Brock DC, editor. Understanding Moore’s law: four decades of innovation. Philadelphia: Chemical Heritage Foundation; 2006. ISBN 978-0941901413.
9. Fraunhofer Institute for Photonic Microsystems IPMS: Capacitive Micromachined Ultrasound Transducers (CMUT).
10. Meyendorf N, et al. Degradation of aircraft structures. In: Meyendorf NGH, Nagy PB, Rokhlin SI, editors. Nondestructive materials characterization – with applications to aerospace materials. Springer; 2003.
11. Gleiter H. Nanostructured materials: basic concepts and microstructure. Acta Materialia. 2000;48:1–29. Elsevier.
12. Altpeter I, Dobmann G, Faßbender S, Hoffmann J, Johnson J, Meyendorf N, Nichtl-Pecher W. Detection of residual stresses and nodular growth in thin ferromagnetic layers with Barkhausen and acoustic microscopy. In: Green Jr RE, editor. Nondestructive characterization of materials VIII. New York: Plenum Press; 1998.
13. Hoffmann J, Meyendorf N, Altpeter I. Characterization of soft magnetic thin layers using Barkhausen noise microscopy. MRS Online Proceedings, Volume 674 (Symposia T/U/V – Applications of Ferromagnetic and Optical Materials, Storage and Magnetoelectronics). Cambridge University Press; 2011.
14. Wolter K-J, Oppermann M, Heuer H, Köhler B, Schubert F, Netzelmann U, Krüger P, Zhan Q, Meyendorf N. Micro- and Nano-NDE for micro-electronics (back end). IV Conferencia Panamericana de END, Buenos Aires; 2007.
15. Zhan Q. Vectorial optical fields: fundamentals and applications. World Scientific Publishing; 2013. ISBN 978-9814449885.
16. Gluch ML, Meyendorf N, Oppermann M, Röllig M, Sättler P, Wolter KJ, Zschech E. Multiscale radiographic applications in microelectronic industry. AIP Conference Proceedings 1706, 020026; 2016. https://doi.org/10.1063/1.4940472.
17. Zschech E, Niese S, Gall M, Löffler M, Wolf MJ. 3D IC stack characterization using multi-scale X-ray tomography. Proceedings of the 20th PanPacific Microelectronics Symposium, Kolao/HI; 2015.
18. Zschech E, Yun W, Schneider G. Appl Phys A. 2008;92:423–9.
19. Gelb J, Xradia Inc., and Krueger P, IZFP-D; personal communication.
20. Hoffmann J, Sathish S, Shell EB, Fassbender S, Meyendorf N. Acoustic imaging techniques for characterization of corrosion, corrosion protective coatings, and surface cracks. In: Meyendorf NGH, Nagy PB, Rokhlin SI, editors. Nondestructive materials characterization – with applications to aerospace materials. Springer; 2003.
21. Fassbender SU, Karpen W, Sourkov A, Sathish S, Roesner H, Meyendorf N. NDE of fatigue in metals: thermography, acoustic microscopy, and positron annihilation method. Proceedings of the 15th WCNDT, Roma; 2000.
22. Druffner C, Schumacker E, Sathish S, Frankel GS, Leblanc P. Scanning probe microscopy: ultrasonic force and scanning Kelvin probe force microscopy. In: Meyendorf NGH, Nagy PB, Rokhlin SI, editors. Nondestructive materials characterization – with applications to aerospace materials. Springer; 2003.
23. Rabe U, Arnold W. Appl Phys Lett. 1994;64:1493. https://doi.org/10.1063/1.111869.
24. Dlubek G, Meyendorf N. Positron annihilation spectroscopy (PAS). In: Meyendorf NGH, Nagy PB, Rokhlin SI, editors. Nondestructive materials characterization – with applications to aerospace materials. Springer; 2003.
25. Meyendorf N, Heilmann P, Bond LJ. NDE 4.0 in manufacturing: challenges and opportunities for NDE in the 21st century. Materials Evaluation, Special Issue NDE 4.0, Volume 78; 2020.
Part II Technical Disciplines
12 Industrial Internet of Things, Digital Twins, and Cyber-Physical Loops for NDE 4.0

Johannes Vrana
Contents

Introduction
Digitization, Digitalization, and Digital Transformation
Industrial Internet of Things (IIoT)
Automation Stack
The Idea of the IIoT
How to Integrate NDE into the IIoT: Basic Ideas
Semantic Interoperability
Interfaces for IIoT
How to Integrate NDE into the IIoT: How to Proceed
Digital Twin
Digital Twin of a Person
Basic Concepts
Nesting
Digital Twins of Personnel
Digital Thread
Digital Twin Type, Instance, and Aggregate
Digital Twin Interrelation
Reference Architectural Model Industry 4.0 (RAMI 4.0)
The Cyber-Physical Loop
Discussion and Outlook
Cross-References
References
Abstract
As with the previous revolutions, the goal of the fourth revolution is to make manufacturing, design, logistics, maintenance, and other related fields faster, more efficient, and more customer centric. This holds for classical industries,
for civil engineering, and for NDE, and goes along with new business opportunities and models. Core components enabling these fourth revolutions are semantic interoperability, which converts data into information; the Industrial Internet of Things (IIoT), which offers every device, asset, or thing the possibility to communicate with every other using standard open interfaces; and the digital twin, which converts all the available information into knowledge and closes the cyber-physical loop. For NDE, this concept can be used (1) to design, improve, and tailor the inspection system hardware and software, (2) to choose and adapt the best inspection solution for the customer, (3) to enhance the inspection performance, and (4) to enable remote NDE interfaces and instrumentation – in summary, enabling better quality, speed, and cost at the same time. On a broader view, the integration of NDE into the IIoT and the digital twin is the chance for the NDE industry to make the overdue change from a cost center to a value center. In most cases, data gathered by NDE is used for a quality assurance assessment resulting in a binary decision. But the information content of NDE data goes much deeper and is of major interest to additional groups: engineering and management. Some of those groups might currently not be aware of the benefits of NDE data, and the NDE industry makes access unnecessarily difficult through proprietary interfaces and data formats. Both of these challenges need to be taken on now by the NDE industry. The big IT players are not waiting: if not available on the market, they will develop and offer additional data sources, including ultrasonics, X-ray, or eddy current. This chapter is based on content from “The Core of the Fourth Revolutions: Industrial Internet of Things, Digital Twin, and Cyber-Physical Loops” [2].

Keywords
NDE 4.0 · Use cases · Value proposition · Design thinking · Advanced NDE · Future of NDE · Automation · NDT 4.0 · Industry 4.0 · Industrie 4.0 · NDE challenges · Digital twin · IIoT · OPC UA · Ontology · Semantic interoperability · Industrial Revolution
Introduction

The cyber-physical ecosystem introduced by Industry 4.0 and NDE 4.0 [1–7] is based on digitization, digitalization, and digital transformation [8]. Its core component is the cyber-physical loop, processing digitized data that represents one or multiple physical properties, such as financial data, design data, data from production, data from operation, data from (NDE) sensors, or data from classical NDE inspections. The accumulated data is used for some form of data processing, such as feedback, trending, or predictive maintenance. The results are visualized to gain knowledge, which can eventually be used to invoke the necessary actions for process improvements [2, 3, 9].
Fig. 1 Challenges of cyber-physical loops [3]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
Cyber-physical loops or feedback loops have already been utilized for some decades by implementing the various interfaces to the various data sources plus some data processing and visualization (mostly manual, computer-assisted data processing and visualization using, e.g., Matlab or R). Most of those loops (see Fig. 1) use proprietary, isolated solutions, leading to multiple challenges, in particular if reused or repurposed later for different applications. Cyber-physical loops discussed in conjunction with the fourth revolutions are different: completely digitally transformed cyber-physical loops seamlessly integrating data sources and emerging technologies [1]. This publication focuses on how to build such cyber-physical loops, discusses the core elements, and shows how to connect the emerging technologies, all with a focus on NDE. This “big picture” ecosystem and the number of points discussed in the following might sound a bit overwhelming. However, if taken on step by step, and if accompanied and tailored to one's needs, it can be achieved.
Digitization, Digitalization, and Digital Transformation The ecosystem presented in the following is based on all three: digitization, digitalization, and digital transformation. Unfortunately, the ambiguous use of those three terms in public can be quite confusing. Moreover, most languages, such as German, Spanish, and Japanese, do not even differentiate between digitization and
digitalization, even though digitization and digitalization activities have little in common. The only commonality between the two terms (besides the similarity in notation) is that digitalization requires digitization. In simple terms, digitization is the transition from analog to digital, and digitalization is the process of using digitized information to simplify specific operations [10]. Digital transformation uses digital infrastructure and applications to exploit new business models and value-added chains (automated communication between different apps of different companies) and therefore requires a change of thought process. Digital transformation requires collaboration to improve the customer's digital experience. There is one more term here – informatization, which is the process by which information technologies, such as the World Wide Web and other communication technologies, have transformed economic and social relations to such an extent that cultural and economic barriers are minimized [11]. Informatization is the path from analog via digitization and digitalization to digital transformation [8].
Industrial Internet of Things (IIoT)

The first step within the cyber-physical loop is the conversion of one or many physical properties into digital data and the combination or fusion of that data. For most Industry 3.0 applications, this process was implemented in a proprietary fashion. A typical example is the so-called automation stack. This section will describe the way from the automation stack to the Industrial Internet of Things – from proprietary interfaces to the digital transformation of communication. The next section will focus on data processing.
Automation Stack

In a digitized industrial production environment (“Industry 3.0”), the techniques and systems in process control are classified using the automation stack, which is also called the automation pyramid or 5-layer model (see Fig. 2). The automation stack represents the different levels in industrial production. Each level has its own task in production, with fluid boundaries depending on the operational situation. This model helps to identify the potential systems/levels for Industry 4.0 and NDE 4.0 interaction. However, the validity of this model needs to be discussed with regard to Industry 4.0 and NDE 4.0.

Level 0 (process level) is the sensor and actuator level for simple and fast data collection. The field level is the interface to the production process using input and output signals. The control level uses systems like programmable logic controllers (PLC) for controlling the equipment. Supervisory control and data acquisition (SCADA) of all the equipment in a shop happens at the shop floor level. SCADA systems usually also provide some dashboard functionality to monitor production on the shop floor level. Manufacturing execution systems (MES) are usually used for collecting all production data and production planning on the plant level. Finally,
enterprise resource planning (ERP) systems control operations planning and procurement for a company. Systems for product lifecycle management (PLM) or design are usually not included in the automation pyramid (as the pyramid visualizes automation during production, not during the lifecycle of a product) but should be connected to both the MES and the ERP systems. NDE equipment, if connected at all, is usually on the control or field level.

Fig. 2 The Automation Stack [3]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)

The information flow for the planning of production comes from the ERP system and is broken down to the field/process level (meaning the communication starts at the top level of the pyramid and is communicated to the bottom layer). Once production is running, the data is collected at the field/process level, is condensed in several steps (in the different levels), and finally the key performance indicators (KPIs) are stored in the ERP system (meaning the communication starts at the bottom levels of the pyramid and is communicated to the top level). For this information flow in both directions, interfaces need to be implemented between the levels. Depending on the number of systems or devices in a level, the number of interfaces to be implemented can be exhausting. This is in particular true between the control and the shop floor level. A typical industrial shop floor usually contains several thousand PLCs. Each PLC is manufactured by an OEM, and most of them have their own proprietary interfaces
or APIs (application programming interfaces), which all have to be implemented individually to allow integration into SCADA. Moreover, in many production environments, no digital interfaces are implemented between MES and SCADA. This is why in a lot of production environments, analog (paper-based) or non-machine-readable digital (email or PDF) solutions are still used – like paper-based routing sheets. However, such solutions require human action, are highly error prone (like entering the ten-digit serial number of a certain component), and hinder the flow of information from production to the ERP system.

Thinking about one of the main goals of Industry 4.0 – the improvement of industrial production by analyzing data – this works best if data from operations planning, financial planning, procurement, and sales is combined with the data from production. But this is not working as #1 a lot of devices from the control and field level are not integrated and #2 paper-based routing sheets break the digital chain, meaning this system requires a major revision.

Another major idea/goal toward Industry 4.0 is the smart factory, which requires that every device and system (including all NDE equipment) is able to communicate with every other device and system. All this is independent of the level of the automation pyramid. Therefore, not only interfaces between two adjacent levels would be necessary but also interfaces between all devices and systems at all levels. As up to this date most interfaces are proprietary, the implementation effort for n assets scales quadratically: n(n−1)/2 point-to-point interfaces. In a realistic production environment with a couple of hundred or thousand devices, this leads to unmanageable implementation costs, preventing the smart factory and in essence also Industry 4.0. This is one of the reasons why standardized, open, and machine-readable interfaces become key for Industry 4.0, and this is why companies will have to shift from proprietary to standard interfaces if they want to survive the ongoing fourth industrial revolution. Looking at the member lists of the ongoing standardization efforts shows that most of the big players (for example, SAP, Microsoft, and Siemens) are beginning to understand this. Unfortunately, many small and medium-sized companies are still ignoring this development.

In other areas of industry (e.g., civil engineering, owner-operator, and recycling), similar automation stacks can be identified with similar issues.
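As a back-of-the-envelope check of this scaling argument, the sketch below (illustrative only, not from the original text) compares the n(n−1)/2 point-to-point links needed when every asset speaks a proprietary dialect with the n adapters needed when every asset implements one shared, open interface.

```python
def point_to_point_interfaces(n: int) -> int:
    """Pairwise links if every asset needs its own interface to every other asset."""
    return n * (n - 1) // 2


def standardized_interfaces(n: int) -> int:
    """Adapters if every asset implements one shared, open interface."""
    return n


for n in (10, 100, 1000):
    print(f"{n:4d} assets: {point_to_point_interfaces(n):6d} proprietary links "
          f"vs. {standardized_interfaces(n):4d} standard adapters")
# 1000 assets: 499500 proprietary links vs. 1000 standard adapters
```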
The Idea of the IIoT

The idea of the Industrial Internet of Things (IIoT), as shown in Fig. 3, and of similar concepts in other areas is to overcome the challenges of the automation stack by eliminating all interfaces between the levels of the stack and integrating all (relevant) data seamlessly.

The automation stack is focused on production. In contrast, the IIoT is a holistic approach. It starts with the initial idea, including design and lifing considerations (for example, by integration of CAD, fracture mechanics, or other design and lifing systems or software), and covers production, operation, and maintenance. It even includes EOL
(end of life) and enables a circular economy (repurpose, reuse, and recycle). The idea is to integrate all the data before, during, and after the lifetime of the product: to include data captured during production in the supply chain, during product lifecycle management, from service, from structural health monitoring, and from a variety of sensors, including the integration of cloud services and cloud storage, or database systems. Also, data from public information sources or purchased data (like market research data) should be integrated. All this integration requires standardized, open, and machine-readable interfaces.

Fig. 3 Industrial Internet of Things and how NDE is currently integrated [3]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)

NDE inspections are performed during manufacturing in the supply chain, in production, in service, and in maintenance. Structural health monitoring and sensors in general also provide nondestructive evaluation results. NDE inspections are performed manually, automated, or automatically and are evaluated, and a report containing the key performance indicators (KPIs) of the inspection is generated. Currently, most of those reports are printed, signed, and archived, either paper based or as scanned PDFs, all to eventually provide a decision for quality assurance: a GO/NO GO decision. This is why NDE is currently seen, by some customers, as a cost center [1, 3].

With the current way of archiving NDE reports (in paper format or as a scanned PDF in some database), the data stored on the report – the KPIs written down on it – can only be read by computers with huge effort. So, in most cases, this information is lost for further processing (short of manually retyping all information into database systems). Similarly, archiving the raw or processed data and metadata of inspections in proprietary formats of the manufacturer of the system means that the data can only be accessed by other systems #1 if the manufacturer allows and
enables the access and #2 after implementation of a converter. In case a manufacturer decides to terminate certain products or data formats, or the manufacturer goes out of business, or the software is no longer supported by modern computers, the data will be lost.

This current situation is tragic. NDE data should not be limited to use for quality assurance. NDE data provides much more value. NDE data can be used, for example, to make lifing calculations more accurate. NDE data can be used to calculate more exactly when the next maintenance must be performed. NDE data can enable the shift, e.g., from schedule-based and condition-based maintenance to predictive and prescriptive maintenance. But this would require that engineering, in particular engineering using statistical methods, gains access to the results, to the data. This will convert NDE from a cost to a value center and requires integrating NDE into the IIoT (see Fig. 4).

Fig. 4 Industrial Internet of Things and how NDE should be integrated [3]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)

NDE 4.0 is the chance for NDE to free itself from being considered a pure cost center and to free itself from the niche it is currently restrained in. Compared with other fields, NDE has the benefit that a lot of inspection equipment is already installed, meaning the hardware investment for customers to integrate NDE into the IIoT is negligible; mainly, the software needs to be enabled. Such an integration requires a change in the thought processes of the inspectors. Inspectors are trained to provide the information critical for quality assurance, but they do not necessarily understand the needs of engineering. The authors have heard comments such as “I don’t inspect chips,” meaning the inspectors were not willing to inspect areas which will, in later manufacturing steps, be removed. In the context of Industry 4.0, all information is important. Test results from areas that will later be
machined also contain valuable information that can be used, for example, to improve lifing models.
How to Integrate NDE into the IIoT: Basic Ideas

NDE, as an integral part of the product development process, industrial production, and industrial operation, provides the quality assurance means needed by industry. As Fig. 5 shows, NDE is typically performed during initial production (at the asset OEM and/or in the supply chain), at certain intervals in operation, and after the EOL of an asset. In addition, nondestructive sensor technology is used for monitoring and evaluation during production and assembly as well as structural health monitoring (SHM) or condition monitoring (CM) during operation.

Fig. 5 NDE during the lifetime of a product [3]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)

In the following, it will first be discussed how NDE is integrated into the product development process; second, in more detail, how NDE is integrated into serial production and maintenance; and third, which interfaces need to be implemented for a holistic integration of NDE into the IIoT.

During the product development process (see Fig. 6), the specifications for production and inspection are created through the cooperation of experts from design, material sciences, production, and NDE. Those specifications are field tested to optimize design and inspections.

Fig. 6 Typical product development process [3] (Fig. 7 provides a more detailed description of the situation during inspection in serial production and operation). (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)

The value of NDE can already be seen here, as
NDE offers a look into the prototypes and can therefore make a significant contribution to improving design and production. This requires interfaces for the statistical evaluation of the data (together with the process data from the inspections). The data that can be obtained during the subsequent serial production and service give an even better picture of the components produced and their joints. This allows further improvements in design and production. In addition, it allows the next generation of products to be optimized (feedforward).

Figure 7 shows a closer look at serial production and the inspections in the supply chain and during operation – starting with material suppliers, who already carry out inspections on the raw material, through inspections at the component suppliers, to the inspections at the OEMs, who assemble the final product. Finally, the user is responsible for commissioning and service inspections after certain operating times until the asset reaches its end of life, where NDE supports reuse, repurposing, or recycling. All these inspections provide results that could be integrated into an Industry 4.0 world through appropriate interfaces and could thus, as described above, contribute to improving production, design, and maintenance.

Figure 8 shows the interfaces of each individual inspection step. The input interfaces (marked in green in Fig. 8) supply the order data, provide the inspector with information on the component, and serve to correctly set up the devices, the inspection, the mechanics, and the evaluation, and to document the results in accordance with the specifications. Digital transformation of these input interfaces will help to support the inspectors in their work, to avoid errors in the inspection, to optimize the inspection, and to ensure a clear, revision-safe assignment of the results by digital machine identification of a component.

On the output side, the inspection system status information and the inspection results are generated. The inspection system status information could be used for maintenance and to improve the inspection system itself.
Fig. 7 Typical supply chain with inspection steps in serial production and operation [3]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)

Fig. 8 Typical sequence of an (automated) inspection in serial production or during operation [3] (can in principle also be used for manual testing). (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
The inspection results consist of the actual test data – the raw and processed data and the metadata (meaning the framework parameters of the inspection and evaluation) – and finally the reported values. As mentioned above, the reported values represent the key performance indicators (KPIs) of the inspection. For industry, interpreted data are the easiest to evaluate; therefore, the reported values are currently the most relevant data of the inspection. Consideration should be given to whether the currently reported values are sufficient for NDE 4.0 purposes or whether the results to be reported should be extended for statistical purposes and thus for greater benefit to the customer.
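To make the input and output interfaces of Fig. 8 more tangible, the following is a minimal sketch of how one inspection step could be represented in software; all field names are hypothetical illustrations, not taken from any standard or from the original text.

```python
from dataclasses import dataclass, field


@dataclass
class InspectionRecord:
    # Input interfaces (marked in green in Fig. 8)
    component_id: str              # machine-readable component identification
    procedure: str                 # governing procedure/specification
    inspector_certification: str   # certification of the inspector
    equipment_settings: dict = field(default_factory=dict)

    # Output interfaces
    raw_data_uri: str = ""         # reference to the archived raw data file
    metadata: dict = field(default_factory=dict)  # framework parameters of inspection/evaluation
    kpis: dict = field(default_factory=dict)      # reported values, e.g., indications, sensitivities
    system_status: dict = field(default_factory=dict)  # inspection system status information


record = InspectionRecord(
    component_id="SN-0000000042",
    procedure="UT-PROC-007",
    inspector_certification="UT Level 2",
    equipment_settings={"gain_dB": 42.0, "probe": "B4S"},
)
record.kpis["decision"] = "GO"  # the classical binary quality assurance result
```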
Semantic Interoperability

If a human sees some data, some information, like a certain number, they are in some cases able to directly understand its meaning – take the number 42. Most readers will immediately know that 42 is the answer to the ultimate question of life, the universe, and everything according to the novel “The Hitchhiker’s Guide to the Galaxy” by Douglas Adams [12]. If the number 42 is entered into a computer, it will be converted into a binary number. However, the computer will not know how to interpret such a number. A computer could conduct a search: the current Wikipedia article on the number 42 provides 18 mathematical, 6 scientific, 5 technological, 5 astronomical, 10 religious, and multiple other meanings in popular culture. The number 42 could also be (see Fig. 9) the length of a truck, the gain of a UT instrument, the 42nd day of the year, or the author’s weight. Even knowing that the number 42 represents the gain of a UT instrument, questions arise: Was the gain established before or after calibration, on which day, with which probe, on which component, in which unit of measurement, with which instrument, according to which specification, etc.?

Fig. 9 What could be the meaning of the number “42”? [3]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)

For a computer to identify a certain piece of information without doubt, to exchange data between computer systems with a unique, common meaning, and to enable machine
readability, semantic interoperability is needed. This is achieved by adding metadata to the data which is to be stored and linking each data element to a controlled, shared vocabulary. The meaning of the data is transmitted with the data itself in a self-describing “information package” that is independent of any information system. It is this shared vocabulary, and the associated links to an ontology, which form the basis and capability for machine interpretation, inference, and logic. Semantic interoperability converts data into information: it allows combining information from different sources within a so-called namespace, it allows data fusion, it allows creating data formats and database structures, and it allows the direct use of the information in digital twins and for further data processing using, for example, AI.
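A minimal sketch of such a self-describing information package, returning to the number 42: the value travels together with metadata that links it to a controlled, shared vocabulary. The keys and the ontology URI below are invented for illustration.

```python
import json

# A bare datum - a computer cannot interpret this on its own.
bare = 42

# A self-describing information package: the value plus its semantics.
package = {
    "value": 42.0,
    "unit": "dB",
    "quantity": "ultrasonic instrument gain",
    "context": {
        "calibration_state": "after calibration",
        "probe": "B4S",
        "component_id": "SN-0000000042",
        "timestamp": "2021-02-11T09:30:00Z",
    },
    # Link of the data element into a controlled, shared vocabulary
    # (hypothetical ontology URI).
    "semantics": "https://example.org/nde-ontology#Gain",
}

print(json.dumps(package, indent=2))
```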
Interfaces for IIoT

The need for standardized, vendor-independent interfaces was discussed before. But what are the interfaces in this context? Does the question concern the physical interface – USB, WiFi, or 5G – or protocols such as TCP/IP, HTTP, XML, or OPC UA? Before further discussion, the term interface must be specified in more detail.
OSI Level

The OSI model (see Fig. 10) gives an overview of the different abstraction layers of digital interfaces and helps to select the interfaces that are decisive for NDE 4.0. The lowest level represents the physical connection, i.e., the cable or the radio connection; this is where WiFi, the various 5G variants, or low-power wide-area networks (LPWAN) can play important roles. The first OSI layer, the transmission of the individual bits, runs via this connection. The information to be transmitted is combined with transmitter and receiver addresses and other information in the data link layer to form frames. Information packets are “tied” in the network layer and combined into segments in the transport layer. The layers above are the so-called host layers. The session layer is responsible for process communication. The presentation layer is responsible for converting the data from a system-independent to a system-dependent format and thus enables syntactically correct data exchange between different systems. Tasks such as encryption and compression also fall into this layer. Finally, the application layer provides functions for applications, for example, with application programming interfaces (APIs).

Fig. 10 The OSI layers – a model for visualizing the degree of abstraction of interfaces [3]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)

The application layer is the communication layer that is decisive for Industry 4.0 and NDE 4.0. However, semantic interoperability (not to be confused with syntactic interoperability) needs to be added on top for an appropriate Industry 4.0 communication. The physical connection (USB, WLAN, 5G, . . .) is more or less irrelevant.

An example of an application layer protocol is HL7 (Health Level 7). HL7 is the protocol used in healthcare to ensure interoperability between different information systems. HL7 (besides DICOM – see below) should therefore be one of the interfaces for Medicine 4.0, and the communication can run over various physical connections. Other protocols such as OPC UA, Data Distribution Service (DDS), or oneM2M are gaining ground in the industrial world (for those who might ask: the authors do not see an application of HL7 for NDE or industry).
IIC Core Connectivity Standards

The Industrial Internet Consortium (IIC) addresses the Industrial Internet of Things in its specifications. In volume G5 [13], Industry 4.0 interfaces are discussed. Those discussions are based on the Industrial Internet Connectivity Stack Model, which is similar to the OSI model; however, compared to the OSI model, it combines the three host layers into one so-called framework layer. Based on this model, it compares the interface protocols OPC UA, DDS, and oneM2M with web services (see Table 1). Every interface protocol is considered a connectivity core standard, and the need for core gateways between the connectivity core standards is emphasized. This brings the benefit that every connectivity standard can be used and the information combined using the gateways between the standards.

DDS is managed by the Object Management Group (OMG) and focuses on low-latency, low-jitter peer-to-peer communication with a high quality of service (QoS). It is data-centric and does not implement semantic interoperability. oneM2M is a connectivity standard mainly for mobile applications with intermittent connections and low demands regarding latency and jitter. Semantic interoperability implementation is ongoing.
Table 1 The IIC core connectivity standards [3]

|  | DDS | oneM2M | WebServices | OPC UA |
| --- | --- | --- | --- | --- |
| Managed by | Object Management Group | oneM2M Partners | World Wide Web Consortium (W3C) | OPC Foundation |
| Origin | N/A | Telecommunications | Internet | Manufacturing |
| Focus | Data-centric, peer-to-peer, low-latency, low-jitter, high-QoS | Mobile applications, intermittent connections | Human user interaction interfaces | Object oriented, client/server and pub/sub, simple device-centered programming |
| Semantic interoperability | N/A | oneM2M Base Ontology | Web Ontology Language (OWL) | Companion specifications |
| Usable for NDE 4.0 | Limited | Mobile devices | Human-computer interfaces | Computer-computer interfaces |
Web services use the Hypertext Transfer Protocol (HTTP) known from the Internet. It is primarily for human user interaction interfaces, but implementations such as REST (representational state transfer) or SOAP (Simple Object Access Protocol) allow the use of web services for computer-computer communication. Semantic interoperability can be reached using the Web Ontology Language (OWL). OPC UA, discussed in detail below, is mainly used in the manufacturing industry. In contrast to DDS, it is object oriented and provides semantic interoperability. For NDE applications, oneM2M could be of benefit for mobile devices. Web services are ideal for human-computer interaction and could be used for operator interfaces to store and read information regarding the component to be inspected. Low-latency and low-jitter communication is not necessary for typical NDE equipment; therefore, DDS will not be considered further. OPC UA, being the standard protocol for manufacturing and due to the included semantic interoperability, seems like the ideal interface for NDE 4.0.
OPC UA

The high-level communication protocol/framework that is currently established in the manufacturing Industry 4.0 world is OPC UA [14, 15]. OPC UA has its origin in the Component Object Model (COM) and the Object Linking and Embedding (OLE) protocol. OLE was developed by Microsoft to enable users to link or embed objects created with other programs into programs and is used extensively within Microsoft Office. COM is a technique developed by Microsoft for interprocess communication under Windows (introduced in 1992 with Windows 3.1). This standardized COM interface allows programs to communicate with one another without having to define a separate interface. With the Distributed Component Object Model (DCOM), the possibility was created for COM to communicate via computer networks as well. Based on these interfaces, a standardized software interface, OLE for Process Control (OPC), was created in 1996, which enabled operating-system-independent data exchange (i.e., also with systems without Windows) in automation technology between applications from different manufacturers. Shortly after the publication of the first OPC specification, the OPC Foundation was founded, which is responsible for the further development of this standard. The first version of the OPC Unified Architecture (OPC UA) was finally released in 2006. OPC UA differs from OPC in its ability not only to transport machine data but also to describe it semantically in a machine-readable way. At the same time, the abbreviation OPC was redefined as Open Platform Communications.

OPC UA uses either TCP/IP for the binary protocol (OSI layer 4) or SOAP for web services (OSI layer 7). Both client-server and pub-sub architectures are supported by the OPC UA communication framework (see Fig. 11). Based on this, OPC UA implements a security layer with authentication and authorization, encryption, and data integrity through signing. APIs (application programming interfaces) are offered to easily implement OPC UA in programs. In the .NET framework, OPC UA is even an integrated component. This means that the users do not have to worry
about how the information is transmitted – this is done completely in the OPC UA framework. The only thing that matters is what information is transmitted.

Fig. 11 OPC UA Architecture [3] (© Vrana GmbH, based on [18]). (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)

As Fig. 11 shows, the OPC information model already defines some basic core information models that are required in many applications. In addition, companion specifications exist for product classes such as field devices (FDI), robots, or scales. These companion specifications provide semantic interoperability and are therefore the basis for Industry 4.0, the basis for smooth I4.0 interfaces and communication, and result in any OPC UA-enabled device being able to interpret data from others. In addition, there may also be manufacturer-specific specifications for the exchange of data between the devices of one manufacturer.

OPC UA Pub/Sub enables one-to-many and many-to-many communications. Moreover, OPC UA TSN (time-sensitive networking) will make it possible to transfer data in real time and to extend OPC UA to the field level. The OPC UA specifications are also currently being converted into national Chinese and Korean standards. Moreover, it is planned to start the development of an NDE companion specification for OPC UA in a joint project between DGZfP, VDMA, and the OPC Foundation.

OPC UA is, like HL7 in healthcare, the standard for an interface to the manufacturing Industry 4.0 world. As in medical diagnostics, large amounts of data are in some cases generated with NDE (in OPC UA, larger files are split into smaller packages; e.g., the OPC UA C++ Toolkit has a maximum size of 16 MB). Computed tomography (CT), automated ultrasonic testing, and eddy
current testing can easily result in several GB per day that need to be archived long term. In the healthcare sector, those large data files resulted in the development of DICOM (Digital Imaging and Communications in Medicine) alongside HL7.
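As a sketch of what the OPC UA side of an NDE workflow might look like in practice, the following uses the open-source python-opcua library to read an inspection order and write back a reported value; the endpoint URL and node identifiers are purely hypothetical and would, in a real system, be defined by the server's information model (ideally an NDE companion specification).

```python
from opcua import Client  # open-source python-opcua package (FreeOpcUa project)

client = Client("opc.tcp://localhost:4840/nde/server/")  # hypothetical endpoint
client.connect()
try:
    # Read the inspection order (node IDs are made up for illustration;
    # a companion specification would standardize them).
    order_node = client.get_node("ns=2;s=Inspection.OrderNumber")
    print("Inspection order:", order_node.get_value())

    # Write back a reported value (KPI) after the inspection.
    result_node = client.get_node("ns=2;s=Inspection.Result.Decision")
    result_node.set_value("GO")
finally:
    client.disconnect()
```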
DICOM

DICOM is an open standard with semantic interoperability for the storage and communication of documents, images, videos, and signal data and the associated metadata, as well as for order and status communication with the corresponding devices. This enables interoperability between systems from different vendors – exactly what Industry 4.0 is striving for. In health management, this leads to the necessity of interfaces between HL7 and DICOM (see Fig. 12). This interface is usually found in the PACS (picture archiving and communication system) server. In the process, patient and job data are translated from HL7 to DICOM for communication to the imaging devices. Information about the order status, about provided services (e.g., “X-ray image of the lung”), as well as written findings and storage locations of the associated images are communicated back. The returned data, texts, and references would usually be referred to in industry as KPIs (key performance indicators). The central system for the “process logic” in hospitals is the HIS (hospital information system, comparable to an ERP system in industry), which communicates with all other systems via HL7. All image, video, and signal data are stored in DICOM format in the PACS, which is designed to handle large amounts of data and is the central system for archiving and communicating the data.
Fig. 12 Interaction between HL7 and DICOM (© DIMATE GmbH, Germany)
Digital Workflow in NDE with OPC UA and DICONDE

For the NDE world, this system can be transferred from HL7 and DICOM as follows (see Fig. 13): The Industry 4.0 world consists of ERP (enterprise resource planning) or MES (manufacturing execution system) servers for production planning or as a production control system, and assets supply data via OPC UA. A transmission of order data for inspections as well as a return transmission of notifications and inspection results (KPIs for storage in the MES) can be mapped via OPC UA. An integration of maintenance and calibration data of NDE equipment via OPC UA is also conceivable.

With few exceptions, however, the raw data generated during tests are too large to be communicated reasonably via OPC UA. Like the HIS in a hospital, ERP and MES are not designed for the administration, communication, and archiving of large amounts of image, video, or signal data, such as is generated in radiography, computed tomography, automated ultrasound and eddy current testing, or SAFT/TFM. Therefore, it makes sense to store the raw data outside the OPC UA world in a revision-proof way. The DICONDE standard offers itself as a protocol and data format providing semantic interoperability. DICONDE is based on DICOM and has been adapted by ASTM to the requirements of the various NDE inspection methods [17–22]. In radiography, the DICONDE standard fits the requirements of the users very well. There are already many manufacturers who store their data in the DICONDE format and have implemented the DICONDE communication interfaces, for example, for the digital query of inspection orders, whose IDs are then automatically stored in the
metadata of the DICONDE files and thus ensure structural integrity between NDE raw data and ERP/MES. DICONDE is also currently becoming established as the standard in the field of computed tomography. Similar to healthcare, an entity that “translates” order data and reported values between OPC UA and DICONDE makes sense.

Fig. 13 Proposed interaction between OPC UA and DICONDE (© DIMATE GmbH, Germany)

In ultrasonic and eddy current testing, however, the medical requirements are further removed from the requirements of NDE. Although the DICONDE standard strives to define suitable data formats [17–22], these are currently not supported by device manufacturers. It is necessary to clarify at which points the manufacturers still see a need for action. On the other hand, DICONDE can easily be implemented for the connection of visual inspections, e.g., photos in the field of dye-penetrant and magnetic particle inspection and videos in the field of endoscopic and boroscopic tests.
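Because DICONDE files are DICOM files with NDE-specific semantics (ASTM maps, for example, the patient-level modules onto components), they can be opened with standard DICOM tooling; here is a minimal sketch using the pydicom package with a hypothetical file name.

```python
import pydicom

ds = pydicom.dcmread("weld_SN42_rt_image.dcm")  # hypothetical DICONDE file

# Generic DICOM attributes work as usual.
print("Modality:", ds.Modality)

# DICONDE reuses patient-level tags for component information, so tag
# (0010,0010) carries the component designation rather than a patient name.
print("Component:", ds[0x0010, 0x0010].value)
```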
Data Security, Data Sovereignty, Connected World, and Data Markets

The basis for all interfaces is trust. This is why data security and data sovereignty are key, in particular considering a connected world, where every company is connected with every other company and where data will be a commodity.

Data security [23] is a means for protecting data (for example, in files, emails, clouds, databases, or on servers) from unwanted actions of unauthorized users or from destructive forces. Data security is usually implemented by creating decentralized backups (to protect from destructive forces) and by using data encryption (to protect from unwanted actions). Data encryption is based on mathematical algorithms which encrypt and decrypt data using encryption keys. If the correct key is known, encryption and decryption can be accomplished in a short time, but if the key is not known, decryption becomes very challenging for current-day computers (several months or years of calculation time), and the data is therefore secured from unwanted access. However, with computers becoming increasingly powerful over time, encryption keys and algorithms need to become stronger over time, and data encrypted with old algorithms or too-short keys needs to be re-encrypted after some time to keep it safe. The only measure ensuring data encryption over time is to use keys which have the same length as the data to be encrypted and which are purely random (a one-time pad). One of the few methods to create such keys is quantum cryptography, which is still quite expensive in installation.

Where data security is the necessary basis, data sovereignty goes one step further in protecting data [24]. Data sovereignty guarantees the sovereignty of data for its creator or owner. Data itself, if not artistic, is legally not protected by any copyright. Therefore, if a dataset is currently submitted to somebody else, only individual contracts hinder the receiver from forwarding or selling the data (even if it was submitted using data encryption). Therefore, two measures have to be implemented to guarantee data sovereignty: #1 legal documents need to be prepared, and #2 software and interfaces need to be implemented to restrict the use on the receiving side to the rules of the submitting side.

The International Data Spaces Association (IDSA) is working on both. IDSA develops standards and de jure standards based on the requirements of IDSA members, works on the standardization of semantics for data exchange protocols, and provides sample code to ensure easy implementation.
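To make the encryption building block described above concrete, here is a minimal sketch using the Fernet recipe of the Python cryptography package (symmetric encryption); key management, key rotation, and the re-encryption over time discussed above are deliberately left out.

```python
from cryptography.fernet import Fernet

# Generate a random symmetric key; whoever holds it can decrypt.
key = Fernet.generate_key()
f = Fernet(key)

report = b"UT report SN-0000000042: no recordable indications"
token = f.encrypt(report)  # encrypted data, safe to store or transmit
print(f.decrypt(token))    # decryption is only possible with the correct key
```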
One of the key elements of IDSA is the implementation of the so-called IDS connectors (International Data Spaces Association 2019), which guarantee data sovereignty (see Fig. 14). Both the data source and the data sink have certified connectors. The data provider defines data use restrictions, and the data consumer connector guarantees that the restrictions are followed: for example, if the data provider defines that the data consumer is allowed to view the data once, the data will be deleted by the consumer connector after it has been viewed. This also enables the producer of the data to decide which customer can use the data in which form – as an economic good, for statistical evaluation, or similar.

Fig. 14 IDSA: connected Industrie 4.0 world [27]

Data sovereignty will enable secure digital communication between companies within the connected world. This information represents a value in itself. Data becomes an asset, a commodity. There is a market for information, and it is important to use it. The way to this market is through the interfaces discussed in this publication.
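The enforcement idea behind such connectors can be illustrated with a toy sketch (this is not IDSA reference code): the provider attaches a usage policy, and the consumer-side connector enforces it, deleting the data once the allowed use is exhausted.

```python
from dataclasses import dataclass


@dataclass
class UsagePolicy:
    max_views: int                 # e.g., the provider allows viewing the data once
    allow_forwarding: bool = False


class ConsumerConnector:
    """Toy stand-in for a certified consumer connector enforcing provider rules."""

    def __init__(self, data: bytes, policy: UsagePolicy):
        self._data, self._policy, self._views = data, policy, 0

    def view(self) -> bytes:
        if self._views >= self._policy.max_views:
            raise PermissionError("usage constraint exhausted - data deleted")
        self._views += 1
        payload = self._data
        if self._views == self._policy.max_views:
            self._data = b""  # delete after the allowed number of views
        return payload


connector = ConsumerConnector(b"NDE dataset", UsagePolicy(max_views=1))
print(connector.view())  # first view succeeds
# connector.view()       # a second view would raise PermissionError
```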
How to Integrate NDE into the IIoT: How to Proceed

With IIoT, OPC UA, WebServices, DICONDE, and IDSA, protocols and interfaces have already been created in industry to implement “NDE for Industry 4.0.” In order to make NDE an integral part of the Industry 4.0 world, cooperation is required – for example, by establishing semantic interoperability using OPC UA companion specifications and the WebServices Web Ontology Language.

With DICOM/DICONDE, an advanced interface and a well-developed open data format are available. DICOM/DICONDE already offers semantic interoperability, and its standardized and open ontology can be used as a base for the NDE ontologies for the standard Industry 4.0 interfaces mentioned in the paragraph above. For NDE technologies with large data volumes, DICONDE is an ideal addition to the industrial interfaces (similar to the combination of HL7 and DICOM). This means that interfaces/mappings from DICONDE to the Industry 4.0 world (OPC UA) are needed. For NDE technologies with small data volumes, it is necessary to decide, depending on the application, whether a direct interface is created using OPC UA or whether the results are first stored in the DICONDE world and then transferred to the OPC UA world, in order to summarize all test results in one place. In addition, it is necessary to check which steps are required to be able to use DICONDE for ultrasound and eddy current testing.

Figure 15 (based on Fig. 8) shows an idea for the integration of an NDE system into an Industry 4.0 landscape, using OPC UA for most input and output files and DICONDE for archiving the actual raw and processed data files including the connected metadata. In general, revision-safe and secure storage must always be ensured; the retrievability, integrity, and sovereignty of the data are key. Most of those requirements are already implemented in DICONDE and OPC UA. Other open data formats for NDE data, like HDF5, can be seen as alternatives to DICONDE. However, for most inspection situations, the standardized open information models of DICONDE, which enable machine-readable data using semantic interoperability, surpass the information models of the other data formats. Also, revision-safe and secure data storage would need to be implemented in addition. In order to ensure the interests of NDE in the Industry 4.0 world and for the development of the necessary ontologies, cooperation with Industry 4.0 must be strengthened.

Fig. 15 Idea for the integration of an NDE system into an Industry 4.0 landscape [3]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)

With the integration of NDE into the IIoT and with the incorporation of semantic interoperability models (ontologies), NDE data will be transformed into information
(see Fig. 16). This allows data fusion between different NDE methods, between classical NDE and NDE sensors, between NDE and other material tests, and between NDE and all other data. This is the path from proprietary interfaces to the digital transformation of communication, which can now be used for the digital transformation of data processing: digital twins.

Fig. 16 The digital transformation of data communication to establish the cyber-physical loop [3]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
Digital Twin

According to MedicineNet, a twin is one of two children produced in the same pregnancy. Twins who develop from a single ovum are called monozygotic or identical twins. They have identical genomes. This is why twin studies are a key tool in behavioral research.

An ideal digital twin is a virtual representation of an asset, and like a monozygotic twin it shows the same behavior and development as the asset. An asset can be anything from a manufacturing device, sensor, component, product, system, process, service, operation, plant, business, enterprise, or software to a person, operator, or engineer. A digital twin connects the physical world with the cyber world. Digital twins can be used for behavioral or development studies of the asset represented. The first idea for this concept was introduced in 2002 by Michael Grieves [26].
Digital Twin of a Person

To create a digital twin of a person (see Fig. 17), information about the person must be collected. This includes information about the type (in the case of the author: mammal, human, and male), about ancestors and relatives, the physiology of the body, psychology, and information documenting status, development, and behavior (like financial, occupational, social, family, friendship, partnership, sexual, leisure, or professionally related information). The physiological information will contain information inherited from the type (like the general constitution of a human body, including the skeleton, the organs, etc.), the peculiarities of the instance, the as-is condition of the body, and some real-time information. One way to determine the peculiarities and the as-is condition of the body is chemical, physical, and radiologic testing (NDE!). Real-time sensors will provide data regarding heart rate, sleeping patterns, etc. (SHM!) and will support the information from the radiologic testing.

All this data, all this information, can then be used to predict the behavior of the person or, respectively, developments regarding the person. For example, it could be used to predict the financial success of a person, buying patterns, risk propensity, movement profiles, and work attitude, or to gain information about the personality. This can be used, for example, to automatically tailor ads or to calculate insurance rates, credit ratings, health ratings, etc. The data could also be used to support
doctors in providing reasonable diagnostics and treatments, to predict future health issues, or even to calculate the expected lifetime. This shows the value of the data once it is processed by an appropriate digital twin. This is why all of the big players in the IT industry are collecting data about everybody: the more data they have, the better the predictions – similarly for insurance companies or government agencies.

Fig. 17 The digital twin of a person with the core components of every digital twin [3]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
Basic Concepts

An ideal or complete digital twin of a person would incorporate all data and would be able to answer all questions regarding the person. With the data collected about a person, social media platforms, purchasing platforms, insurance companies, and governments have already implemented partial digital twins. Some of those digital twins are or will be unbelievably valuable to us, and some of them extremely scary or even harmful. This is one of the reasons why protecting personal data is so important.

How to create value out of information? The most straightforward way is to visualize data and its cross-correlations. This can be supported by statistical evaluation tools, by algorithms, and by simulation tools for data processing. Those algorithms can be implemented either deterministically, based on knowledge about physical or other correlations, heuristically, or statistically. Deterministic solutions are ideal because they will provide accurate results even if the data basis is small and even outside of the available data spectrum. Filters and reconstruction algorithms (such as computed tomography, SAFT, or TFM) are considered deterministic algorithms. However, in some cases, the correlations are unknown or so complex that deterministic implementations are not possible or too expensive. Once a sufficient data basis exists, heuristic or statistical algorithms, such as artificial intelligence, machine, or deep learning, can become handy. This means:

1. If the correlations are known and not too complex, deterministic algorithms should be used.
2. If the correlations are not known or too complex and if a sufficient data basis exists, heuristic or statistical algorithms should be used.
3. If the correlations are not known or too complex and if the data basis is not sufficient, more data should be collected, simulations performed, and/or the correlations worked on.

Simulations are computer programs for the recreation of specific realities for various purposes, such as training, entertainment, or the analysis of systems too complex for theoretical or formulaic treatment. Simulations are virtual experiments, and like real-life experiments they can be used to gain knowledge about certain processes or system behavior. Simulations are based on input data, and with more data they will provide more accurate results. The data obtained by simulations can be
used to enhance the database obtained by experiments. However, simulations need to be validated to ensure appropriate results. Simulations are usually used to solve "forward problems": assuming a certain set of parameters and a simulation model and observing the output. For digital twins, usually the opposite is of interest, solving "inverse problems": taking the result, applying the model backward, and obtaining the parameters. Simulations can also be used to solve inverse problems; however, this usually takes tremendous computing power. For example, simulations can be performed for thousands of parameter sets, which are afterward used to develop algorithms, for example, by training a convolutional neural network with the results of the simulations (a minimal sketch of this idea is given at the end of this subsection).

Figure 17 visualizes the digital twin of a person. The digital twin is represented by the blue box and contains the key elements of every digital twin:
1. Information (data with semantic interoperability)
2. Data processing (algorithms, statistical evaluation, and simulation tools)
3. Visualization and action (action is either generated manually by visually analyzing the data and the results of the data processing, or it is generated automatically)
4. The cyber-physical loop between the person and the person's digital twin
Like the digital twin of a person, digital twins can be created for every asset. As mentioned above, an asset can be anything from a component, product, or process to a system. And digital twins scale from components, processes, operations, plants, businesses, and enterprises to governments and countries.
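The following sketch illustrates the simulation-based approach to inverse problems described above. It is a toy example of our own, not from the chapter: the forward simulation is a stand-in for a real physics simulation, and a nearest-neighbor lookup replaces the convolutional neural network; all names and values are illustrative assumptions.

```python
# Hypothetical sketch: solving an inverse problem via many forward simulations.
import numpy as np

rng = np.random.default_rng(0)

def forward_simulation(params):
    """Toy forward model: maps hidden parameters to an observable signal."""
    depth, size = params
    t = np.linspace(0.0, 10.0, 200)
    # echo delayed proportionally to depth, scaled by defect size
    return size * np.exp(-(t - depth) ** 2 / 0.1)

# 1. Run the simulation for many parameter sets (the "thousands of
#    parameter sets" mentioned in the text).
params = rng.uniform([1.0, 0.1], [9.0, 1.0], size=(5000, 2))
signals = np.array([forward_simulation(p) for p in params])

# 2. Fit an inverse model; here a simple nearest-neighbor lookup stands in
#    for training a convolutional neural network, to keep the sketch
#    dependency-free.
def inverse_model(measured_signal):
    idx = np.argmin(np.linalg.norm(signals - measured_signal, axis=1))
    return params[idx]

# 3. Solve the inverse problem for a new measurement.
true_params = np.array([4.2, 0.7])
estimate = inverse_model(forward_simulation(true_params))
print(true_params, estimate)  # estimate should be close to true_params
```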
Nesting
Looking at industrial manufacturing, one can imagine a digital twin for the complete enterprise, digital twins at plant level, digital twins at shop floor level, and digital twins for every single device. The digital twin of the enterprise will contain all the digital twins at plant level. A digital twin at plant level will contain all the digital twins at shop floor level, and a digital twin at shop floor level contains all the digital twins of the devices. This is called nesting (vertical axis in Fig. 18) and follows the automation stack shown in Fig. 2. Every digital twin at a lower level will inherit the properties of the digital twins at higher levels. A similar nested structure can be found for every product in operation. For example, the digital twin of a civil airline company will contain the digital twins of all the airplanes. The digital twin of an airplane contains the digital twins of wings, cockpit, fuselage, and engines. And the digital twin of an engine will contain all the components which build the engine. For NDE, this concept of nesting can be taken further. The digital twin of an NDE system will be part of the digital twin of the shop floor.
Fig. 18 Landscape of digital twins. On the vertical axis, nesting of digital twins is shown, and on the horizontal axis the digital thread [3]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
The digital twin of the NDE system will contain the digital twins for the mechanical automation, for the detectors and sensors, for the evaluation software, and for the operator. A minimal sketch of such a nested containment structure is given below.
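The following sketch is our own illustration, not from the chapter: it models nesting as containment, where each digital twin holds the twins of its sub-assets and a query can recurse through the hierarchy. All class and field names are assumptions.

```python
# Illustrative sketch of nested digital twins (composite structure).
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    name: str
    information: dict = field(default_factory=dict)       # data with semantics
    children: list["DigitalTwin"] = field(default_factory=list)

    def add(self, child: "DigitalTwin") -> None:
        self.children.append(child)

    def collect(self, key: str) -> list:
        """Gather a piece of information from this twin and all nested twins."""
        found = [self.information[key]] if key in self.information else []
        for child in self.children:
            found.extend(child.collect(key))
        return found

# Example: enterprise -> plant -> shop floor -> NDE system
enterprise = DigitalTwin("enterprise")
plant = DigitalTwin("plant 1")
floor = DigitalTwin("shop floor A")
nde = DigitalTwin("UT system", information={"status": "calibrated"})
enterprise.add(plant)
plant.add(floor)
floor.add(nde)
print(enterprise.collect("status"))  # ['calibrated']
```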
Digital Twins of Personnel
Personnel, i.e., operators or inspectors, are also represented by digital twins. For example, there may be a digital twin for a level 3 ultrasonic inspector specializing in the inspection of castings. This inspector receives tasks via a tablet or an augmented reality platform, and the results are stored digitally by the inspector. This shows that digitalization, digital transformation, Industry 4.0, and digital twins are not striving for a deserted factory. For Industry 4.0, networking is crucial, and the results must be available digitally; it does not require automation. For some work steps, especially repetitive tasks, it makes more sense to use automated solutions. In other work steps, the human being is more effective.
Digital Thread
Another viewing angle into industrial manufacturing is from the viewpoint of the product (see also Fig. 5). First, the idea for a new product is born; then the product is designed, raw material is produced, and individual components are manufactured and assembled into a product, which is operated until it reaches its end of life (EOL). After its end of life, the product is disassembled, and the material of the components gets recycled. Each of those steps can be represented by a digital twin. And all the digital twins over the lifetime of a product are connected by the digital thread, as shown in
Fig. 18 (horizontal axis). The digital twins along the lifetime relate to various companies: raw material and components will usually be produced by suppliers, assembly will be performed by an OEM, operation by an owner-operator, and the activities after EOL by specialized companies. This means the digital thread needs to be handed over from one company to the next during the lifetime of an asset. Such digital threads can be created for every asset. Every manufacturing device should have its digital thread, as should every process, every software, and even every company. The digital thread for an enterprise will start with the initial idea for the company. The company will grow, will acquire other companies, and will eventually go out of business.
Digital Twin Type, Instance, and Aggregate
A digital twin type or prototype (DTP) is a digital twin for an asset before its production/creation starts. Such digital twins usually incorporate the initial idea, the design requirements, drawings, CAD models, results of destructive tests, material properties, the bill of materials, etc. A digital twin instance (DTI) is a digital twin of a certain instance of an asset and will contain the information from the DTP. Such digital twins usually incorporate the as-is geometry (metrological information from the components and the assembled product), the peculiarities and internal structure of the components and product (NDE), sensor data from manufacturing and operation, the bill of processes, service records, operational data, etc. A digital twin aggregate (DTA) is the aggregation of all DTIs. Considering an airline company, it could be an aggregate of all airplanes, of all airplanes of a certain type, of all engines, or of all seats in all airplanes. A minimal sketch of these three notions is given below.
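The following sketch is our own illustration of the DTP/DTI/DTA relation; all class and field names are assumptions, not from the chapter. Each instance carries its type's information, and the aggregate is computed over a set of instances.

```python
# Illustrative sketch of digital twin type, instance, and aggregate.
from dataclasses import dataclass, field

@dataclass
class DigitalTwinPrototype:          # DTP: pre-production information
    design_requirements: str
    cad_model: str
    bill_of_materials: list

@dataclass
class DigitalTwinInstance:           # DTI: one concrete asset
    serial_number: str
    prototype: DigitalTwinPrototype  # contains the information from the DTP
    nde_results: dict = field(default_factory=dict)
    operational_hours: float = 0.0

def aggregate_hours(instances):      # DTA: aggregation over all DTIs
    return sum(i.operational_hours for i in instances) / len(instances)

dtp = DigitalTwinPrototype("rev B", "engine.step", ["blade", "disk"])
fleet = [DigitalTwinInstance(f"SN-{n}", dtp, operational_hours=100.0 * n)
         for n in range(1, 4)]
print(aggregate_hours(fleet))        # mean operating hours across the fleet
```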
Digital Twin Interrelation
Figure 18 shows on the horizontal axis the digital thread of an asset (in this example, a product in production). A machine used for a certain production or inspection operation is applied at a certain point in time along the digital thread. Therefore, the nested structure of such a machine is shown vertically and crosses the digital thread at a certain point in time. At that point, the data from the nested production digital twin becomes part of the digital thread. However, there is not just one horizontal and one vertical axis; there are multiple, and all of them are interrelated. To summarize:
1. Every asset has a digital thread.
2. Every asset is part of a nested structure.
This leads to a 2D net of digital twins.
Moreover, the different instances of a type constitute a third dimension orthogonal to the paper plane. As the same machine is not necessarily used for the production of every instance, the digital twins of the various instances will interact with different branches of the nested production-related digital twins. This creates a 3D net of digital twins.
Reference Architectural Model Industry 4.0 (RAMI 4.0)
As discussed above, a digital twin can be created for every asset: either a complete digital twin or multiple partial digital twins. Figure 18 shows horizontally the life cycle and value stream of an asset, which is described by its digital thread, containing multiple digital twins, starting with DTPs for the type and continuing with DTIs for the instance. Moreover, Fig. 18 showed that every asset is contained in a hierarchical structure. This leads to a nesting structure for the digital twins. There is a third dimension which has not been discussed so far: the abstraction layers, i.e., the architecture. It starts with the asset and continues with the integration, which represents the connection between the physical and the cyber world, the communication of the data, the conversion of data into information, the actions gained from the information, and finally how all of this influences the business. This structure, with its three dimensions, was already identified and visualized by the Plattform Industrie 4.0 in 2015 (see Fig. 19) and was given the name Reference Architectural Model Industry 4.0 (RAMI 4.0). RAMI 4.0 is described in detail in DIN SPEC 91345 [27].
Fig. 19 Slightly modified RAMI 4.0 [3]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
The life cycle and value stream axis (IEC 62890 [28]) of RAMI 4.0 represents the value chain and the life cycle of an asset, starting with the development and usage of a new type, through the production of the instance, to the usage of the instance. Compared to RAMI 4.0 as detailed in DIN SPEC 91345, Fig. 19 additionally contains the initial idea, EOL, and potential production within the supply chain, just like Fig. 5. The digital implementation of the life cycle and value stream is the digital thread as shown in Fig. 18. The term "type" is used to identify a new asset type, such as a new X-ray inspection system; "instance" refers to the test facilities that have been built. The hierarchy levels (IEC 62264 & IEC 61512) correspond to the layers of the automation stack (refer to Fig. 2), apart from the top level "Connected World," and to the nesting structure of digital twins (Fig. 18). On the architecture axis (layers), the lowest layer (asset) represents the physical object. The "integration layer" is the transition layer between the physical and the information world. "Communication," "information," and "functional layer" are abstraction layers for the communication, and the "business layer" describes the business perspective. RAMI 4.0 provides a nearly complete picture of the cyber-physical landscape and an excellent means of locating interfaces, digital twins, cyber-physical loops, etc. within the Industry 4.0 landscape. However, RAMI 4.0 is asset centric, or to be more exact, it considers single instances of assets. Therefore, it considers neither the interaction with other assets nor aggregation. This is where the digital twin interrelation, as proposed by the author, takes RAMI 4.0 to the next level.
General Concept of a Digital Twin
As already indicated above, every digital twin consists of three main elements, shown in Fig. 20.
Fig. 20 Illustration of the concept of a digital twin [3]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
Information is generated out of data by establishing semantic interoperability. Moreover, the data needs to carry reliability information. This does not only apply to data from NDE; financial data, too, is only accurate to a certain degree. This reliability information is needed so that the data can be used appropriately in data processing. For NDE, this means that any data which is supposed to be used in a digital twin should contain reliability information. This is why the importance of PoD (probability of detection) will drastically increase with NDE 4.0. The information used in the digital twin will most likely not be stored within the digital twin itself. More likely, the digital twin will have access to database systems, to hard drives, to clouds, and to the IIoT, which are combined using their semantic interoperability. This is the big data input for the digital twin.

Data processing uses, as mentioned above, algorithms, statistical evaluation, and simulation tools for the big data processing. For such calculations, conventional computers, AI, ML, DL, or even quantum computers can be used. Typical data processing approaches are the following: feedback, trending, predictive and prescriptive maintenance, probabilistic lifing, behavioral analytics, risk modeling, and reliability engineering.

Visualization using extended reality, dashboards, or other aids to visualize the information, the cross-correlations of the data, and the results of data processing will lead to a gain of knowledge. This knowledge can finally be implemented to improve production, maintenance, and design. This conversion from knowledge into action can happen either manually, after interpretation of the visualization, or automatically. The ideal scenario for a digital twin is to perform all those actions in real time. Digital twins are living, learning models.

The difference between a digital twin and the data processing implementations of the last decades is the step from digitalization to digital transformation, from proprietary implementations to a data-processing ecosystem: an ecosystem allowing the integration of all data and information sources, the use of various available applications and visualization tools or the implementation of new ones, and the automatic creation of action. And it needs to be scalable so that it can be used both for the "low-hanging" use cases using partial digital twins and for the more challenging, more complete digital twins.
The Cyber-Physical Loop
The digital twin is the core element of the cyber-physical loop, as shown in Fig. 21, and closes the loop: The data is collected and digitized, converted into information by semantic interoperability, combined with other information in the IIoT, and processed by the digital twin to create knowledge, which finally leads back to actions in the physical world. As mentioned before, those actions can be triggered manually, after gaining knowledge through visual interpretation, or automatically. A minimal sketch of such a loop is given below.
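The following sketch is our own illustration of the loop, not from the chapter; the sensor, risk model, and threshold are stand-ins. It shows the chain data, information, knowledge, action closed back to the physical asset.

```python
# Illustrative sketch of a cyber-physical loop around a digital twin.
class Sensor:
    def read(self):
        return {"vibration_rms": 0.8}        # data from the physical asset

class Actuator:
    def schedule_maintenance(self):
        print("maintenance scheduled")       # action in the physical world

class Twin:
    def add_semantics(self, raw):
        # attach units/meaning so the data becomes information
        return {"vibration_rms_mm_s": raw["vibration_rms"]}

    def process(self, info):
        # toy risk model standing in for the twin's data processing
        return {"risk": min(1.0, info["vibration_rms_mm_s"] / 1.0)}

def loop(sensor, twin, actuator, risk_threshold=0.7):
    raw = sensor.read()                      # data
    info = twin.add_semantics(raw)           # data -> information
    knowledge = twin.process(info)           # information -> knowledge
    if knowledge["risk"] > risk_threshold:
        actuator.schedule_maintenance()      # knowledge -> action (automatic)

loop(Sensor(), Twin(), Actuator())           # prints "maintenance scheduled"
```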
Fig. 21 The digitally transformed cyber-physical loop [3]. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
A digital twin is the living, learning key component of the cyber-physical loops. Digital twins are connected in a 3D net (perhaps further dimensions will be identified in the future). The current digital twins have to be regarded as partial digital twins, as not all data is incorporated and as the data-processing capabilities are not designed for all possible purposes. Comments like "CAD is the digital twin" are partially correct: CAD can be seen as a very simple digital twin, which encapsulates some dimensional data and provides some data processing and some visualization capabilities. A complete digital twin would use all data, including all NDE data. This shows that statements like "NDE is the digital triplet" must be identified as what they are: marketing. Just like Industry 5.0 or NDE 5.0.
Discussion and Outlook
The cyber-physical loop, including its core components IIoT, semantic interoperability, and digital twin, is the core of the digital transformation, the core of the technology behind the various fourth revolutions, in industry and in NDE. The IIoT allows every device to be seamlessly connected with every other device: within the company, between companies, with the cloud, and within the connected world. This requires data transparency (see Fig. 22), enabled by standardized data formats and interfaces, semantic interoperability to achieve machine readability and data fusion, data security, and data sovereignty.
Fig. 22 The four pillars of data transparency – the basis for the fourth revolutions. (Author: Johannes Vrana, Vrana GmbH, Licenses: CC BY-ND 4.0)
The IIoT is also the means to establish remote support and remote NDE through data transparency. Remote support by other inspectors can help in inspection situations where a second opinion is needed, where an in-depth evaluation of indications identified by the inspector on location needs to be conducted, or where local (potentially inexperienced) inspection personnel must be used (for example, due to travel restrictions). Digital twins close the cyber-physical loop by taking data from the IIoT, processing and visualizing the data, and creating knowledge, which can be used to create action in the physical world. Not only the input to the digital twin but also its output is established via the IIoT. This can eventually lead to multiple digital twins interacting with each other. The cyber-physical loops created using the IIoT and digital twins lead to digital transformation, replacing proprietary interfaces and applications by a new, scalable, open ecosystem. This ecosystem will also enable the data market. For NDE, this is great news. NDE can pick up the existing interfaces to the IIoT, which allows access to the IIoT, to digital twins, and to the cyber-physical loops. NDE is one of the most valuable data sources, and the NDE community needs to take up this subject by defining the ontologies.

If data is the new oil, then NDE 4.0 is the new oil rig. (Ripi Singh [29])
In an upcoming chapter, the concept of the cyber-physical loops will be taken one step further by discussing the various loops and by identifying the associated business cases.
Cross-References
▶ Are We Ready for NDE 5.0
▶ Digitization, Digitalization, and Digital Transformation
▶ Digital Twin and Its Application for the Maintenance of Aircraft
▶ Introduction to NDE 4.0
▶ Probabilistic Lifing
▶ Semantic Interoperability as Key for a NDE 4.0 Data Management
▶ Value Creation in NDE 4.0: What and How
References
1. Vrana J, Singh R. NDE 4.0 – a design thinking perspective. J Nondestruct Eval. 2021;40:8. https://doi.org/10.1007/s10921-020-00735-9.
2. Vrana J. The core of the fourth revolutions: industrial internet of things, digital twin, and cyber-physical loops. J Nondestruct Eval. 2021;40:46. https://doi.org/10.1007/s10921-021-00777-7.
3. Vrana J. NDE perception and emerging reality: NDE 4.0 value extraction. Mater Eval. 2020;78(7):835–51. https://doi.org/10.32548/2020.me-04131.
4. Vrana J. ZfP 4.0: Die vierte Revolution der Zerstörungsfreien Prüfung: Schnittstellen, Vernetzung, Feedback, neue Märkte und Einbindung in die Digitale Fabrik. ZfP Zeitung. 2019;165:51–9.
5. Vrana J. Welcome to the world of NDE 4.0. YouTube. 2020. https://youtu.be/MzUKHmp4exE. Published 24 Mar 2020.
6. Vrana J. The four industrial revolutions. YouTube. 2020. https://youtu.be/59SsqSWw4b0. Published 30 Mar 2020.
7. Vrana J. The four NDE revolutions. YouTube. 2020. https://youtu.be/lvLfy4zfSYo. Published 14 Apr 2020.
8. Vrana J. Digitization, digitalization, digital transformation, and informatization. YouTube. 2020. https://youtu.be/8Som-Y37V4w. Published 21 July 2020.
9. Vrana J, Kadau K, Amann C. Smart data analysis of the results of ultrasonic inspections for probabilistic fracture mechanics. VGB PowerTech. 2018;2018(7):38–42.
10. Bloomberg J. Digitization, digitalization, and digital transformation: confuse them at your peril. Forbes. 2018. https://www.forbes.com/sites/jasonbloomberg/2018/04/29/digitization-digitalization-and-digital-transformation-confuse-them-at-your-peril. Accessed 27 Sept 2020.
11. Kluver R. Globalization, informatization, and intercultural communication. Am Commun J. 2000;3(3).
12. Adams D. The Hitchhiker's Guide to the Galaxy. Pan Books; 1979.
13. Industrial Internet Consortium. The Industrial Internet of Things Volume G5: Connectivity Framework, IIC, IIC:PUB:G5:V1.01:PB:20180228. 2018.
14. OPC Foundation. Interoperability for Industrie 4.0 and the Internet of Things, OPC Foundation, Verl. 2018.
15. IEC 62541. OPC Unified Architecture. 2010–2019.
16. OPC Foundation. 2021. https://opcfoundation.org/about/opc-technologies/opc-ua/. Accessed 15 Mar 2021.
17. ASTM E2339. Standard Practice for Digital Imaging and Communication in Nondestructive Evaluation (DICONDE). 2015.
18. ASTM E2663. Practice for Digital Imaging and Communication in Nondestructive Evaluation (DICONDE) for ultrasonic test methods. 2018.
19. ASTM E2699. Practice for Digital Imaging and Communication in Nondestructive Evaluation (DICONDE) for Digital Radiographic (DR) test methods. 2018.
20. ASTM E2738. Practice for Digital Imaging and Communication in Nondestructive Evaluation (DICONDE) for Computed Radiography (CR) test methods. 2018.
21. ASTM E2767. Practice for Digital Imaging and Communication in Nondestructive Evaluation (DICONDE) for X-ray Computed Tomography (CT) test methods. 2018.
22. ASTM E2934. Practice for Digital Imaging and Communication in Nondestructive Evaluation (DICONDE) for Eddy Current (EC) test methods. 2018.
23. Vrana J. Basics of data security. YouTube. 2020. https://youtu.be/iXZlzQ_M7nM. Published 16 June 2020.
24. Vrana J. How to ensure data security and ownership in the IIoT and in the connected world. YouTube. 2021. https://youtu.be/JEaZfwOlIHE. Published 16 Feb 2021.
25. International Data Spaces Association. Reference architecture model, IDSA, Version 3.0. 2019.
26. Grieves M, Vickers J. Origins of the digital twin concept. Florida Institute of Technology/NASA; 2016.
27. DIN SPEC 91345:2016-04. Reference Architecture Model Industrie 4.0 (RAMI4.0). 2016.
28. IEC 62890. Industrial-process measurement, control and automation – life-cycle-management for systems and components. 2020.
29. Singh R, Vrana J. The NDE 4.0 – call for action! Indus Eye. 2021;8(2):32–36.
Compressed Sensing: From Big Data to Relevant Data
13
Florian Römer, Jan Kirchhof, Fabian Krieg, and Eduardo Pérez
Fraunhofer-Institut für Zerstörungsfreie Prüfverfahren IZFP, Ilmenau, Germany
© Springer Nature Switzerland AG 2022
N. Meyendorf et al. (eds.), Handbook of Nondestructive Evaluation 4.0, https://doi.org/10.1007/978-3-030-73206-6_50
Contents
Introduction 330
Theoretic Foundations 333
Compressed Sensing for Relevant Data Extraction for NDE 4.0 336
Applications in Ultrasound Testing 336
Applications in X-Ray Computed Tomography 342
Applications in Terahertz Imaging 346
Summary and Outlook 348
Cross-References 349
References 350
Abstract
Though the ever-increasing availability of digital data in the context of NDE 4.0 is mostly considered a blessing, it can turn into a curse quite rapidly: managing large amounts of data puts a burden on the sensor devices in terms of sampling and transmission, on the networks, as well as on the server infrastructure in terms of storing, maintaining, and accessing the data. Yet, NDE data can be highly redundant, so the storage of massive amounts of data may indeed be wasteful. This is the main reason why focusing on relevant data as early as possible in the NDE process is highly advocated in the context of NDE 4.0. This chapter introduces Compressed Sensing as a potential approach to put this vision to practice. Compressed Sensing theory has shown that sampling signals with sampling rates that are significantly below the Shannon-Nyquist rate is possible without loss of information, provided that prior knowledge about the signals to be acquired is available. In fact, we may sample as low as the actual information rate if our prior knowledge is sufficiently accurate. In the NDE 4.0 context, prior
knowledge can stem from the known inspection task and geometry, but it can also include previous recordings of the same piece (such as in Structural Health Monitoring), information stored in the digital product memory along the products' life cycle, or predictions generated through the products' digital twins. In addition to data reduction, reconstruction algorithms developed in the Compressed Sensing community can be applied for enhanced processing of NDE data, providing added value in terms of accuracy or reliability. The chapter introduces Compressed Sensing basics and gives some concrete examples of its application in the NDE 4.0 context, in particular for ultrasound.

Keywords
Compressed sensing · NDE 4.0 · Structural health monitoring · Ultrasound · Terahertz · X-ray
Introduction
One of the trends in Nondestructive Evaluation 4.0 (NDE 4.0) is the deployment of larger and larger quantities of sensors, which can range from monitoring stations up to networks of autonomous low-cost sensor nodes. This leads to increasing quantities of data being recorded. The universal availability of large amounts of digital data in the context of NDE 4.0 seems to be a blessing at first sight. However, its drawbacks have become increasingly prominent as well. For example, the amount of data acquired by integrated sensors in Structural Health Monitoring (SHM) applications quickly grows to a size where it is impossible to analyze every single measurement [1]. Fabricating multi-channel sensor arrays with increasing numbers of sensors can lead to data rates that reach the limits of current interfaces, prohibiting the transmission of all available measurements. In addition, employing an increasing number of independent/autonomous sensors that continuously monitor a specimen sparks the need to limit power consumption, as these sensors may not have an independent power supply. Yet, one of the core ideas of NDE 4.0 is to feed data from recurrent measurements using such devices into the digital product memory [2] and/or to use it to train a digital twin which can provide predictions about the product's status [3]. This raises the question whether all of the recorded data carries information or whether a different representation or compression would be more suitable. In fact, it can be observed that Nondestructive Evaluation (NDE) data is often highly redundant, so the storage of massive amounts of such data may indeed be wasteful. Addressing these redundancies can decrease the overhead for data storage and communication and may even be the enabling step for some applications to fit into an NDE 4.0 framework. One approach to avoid measuring redundant data in the first place is given by the framework of Compressed Sensing (CS). CS allows signals to be sampled at rates significantly below the sampling rates dictated by classical sampling theory, bringing the required sampling rate closer to the actual information rate.
Fig. 1 Simplified model of sparsity in an ultrasound measurement signal: The measurement signal, or A-Scan, is composed of several time shifted and scaled echoes of a known pulse. Hence, the signal is reproduced by a convolution of the pulse-shape with a spike train. Knowing only the amplitudes and time delays of the 5 echoes, i.e., the sparse representation, suffices to describe the measurement, which is in contrast to the 400 time values required to store the full A-scan in the given example
In so doing, CS can alleviate the redundant sampling that appears in many NDE modalities. As such, it can represent a relevant building block for future NDE systems.

In CS, the redundancy is formalized by the concept of sparsity: it is assumed that the signals of interest admit a sparse representation in a known basis. The concept is best explained by considering a concrete example. Let us take ultrasound as a widely applied NDE modality (though the idea applies to other modalities as well). Intuitively, ultrasonic signals can often be described by a small set of relevant (unknown) parameters, e.g., the times of flight and amplitudes of a series of echoes, and a set of known parameters, e.g., the (approximate) pulse shape that an ultrasound transmitter inserts into the specimen. An example is given in Fig. 1. If the data follows the model closely enough, a significant fraction of the signals' energy has a sparse representation, i.e., it can be expanded in a known basis with only a few basis vectors. Such signals are said to be sparse in their respective (sparsifying) basis. In ultrasound NDE, the data that a single sensor receives typically comprises (approximately) a train of time-shifted pulses that arise when a scatterer partly reflects the inserted ultrasonic wave. Most of these reflections are caused by features of the object under test (known a priori), and only a small number of remaining echoes are interesting or relevant for the testing task. This assumption is a necessary condition for ultrasonic inspection, since without any prior knowledge, the ambiguity in ultrasound data quickly reaches a point that prohibits any analysis at all. In addition to the temporal structure, the spatial correlation of the signals is quite significant as well: the acoustic wavefield inside a homogeneous section of material is a continuous and smooth function, so measurements that are taken in close proximity correlate strongly with each other. This induces a significant redundancy in synthetic aperture or multi-channel measurements. As this example shows, ultrasonic signals can have significant redundancies in the temporal domain (being comprised of a finite set of echoes) and the spatial domain (due to spatial correlation), which means that conventional point-wise sampling might be wasteful. In cases like this, CS can help to avoid redundancies and deliver data that is much more focused on the relevant parts of the signal. The sketch below illustrates the sparse signal model of Fig. 1.
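The following sketch reproduces the signal model of Fig. 1 with illustrative values of our own (sampling rate, pulse shape, and echo parameters are assumptions): a known pulse convolved with a sparse spike train yields the A-scan, so a handful of amplitude/delay pairs describe hundreds of Nyquist-rate samples.

```python
# Illustrative sketch of the sparse A-scan model from Fig. 1.
import numpy as np

fs = 100e6                                   # assumed sampling rate, 100 MHz
t = np.arange(400) / fs                      # 400 Nyquist-rate samples

# known pulse shape: Gaussian-windowed sinusoid (a common transducer model)
tp = np.arange(-40, 40) / fs
pulse = np.exp(-(tp * 4e6) ** 2) * np.cos(2 * np.pi * 5e6 * tp)

# sparse representation: 5 echoes given by amplitudes and sample delays
amplitudes = np.array([1.0, -0.6, 0.4, 0.8, -0.3])
delays = np.array([50, 120, 180, 260, 330])

spikes = np.zeros(t.size)
spikes[delays] = amplitudes                  # the k-sparse vector s
ascan = np.convolve(spikes, pulse, mode="same")  # x = Psi @ s, where Psi is
                                                 # a convolution (Toeplitz) matrix
print(np.count_nonzero(spikes), "nonzeros describe", ascan.size, "samples")
```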
Conventionally, the incoming analog signals are converted to digital data points by taking equidistant samples in the time domain at the receiver. The rate at which these samples need to be taken depends on the occupied bandwidth of the signal, as established in the famous works by Shannon and Nyquist [4]. Throughout the chapter, we will refer to this by the term Nyquist sampling. Similar arguments for the spatial domain led to the widespread use of Uniform Linear Arrays (ULAs) that densely sample the occupied aperture with equidistant spacing of less than half a wavelength between the elements. In practice, sampling frequencies even much higher than the Nyquist frequency are used to acquire measurements. This is based on the assumption that denser sampling grids increase the accuracy when directly extracting information from the raw data. To give an example, thickness measurements based on the time of flight of the ultrasound echo can be performed by employing a simple peak detection, and the position of the peaks can (presumably) be found more precisely the denser the sampling grid. This, however, leads to a large proportion of the measurement samples containing no information at all that is relevant to the measurement task, since the thickness (and therefore the time of flight) is much larger than the target accuracy, creating wasteful amounts of data. This illustrates how the employed data acquisition scheme is unsuited for the actual measurement task and does not incorporate any of the existing prior knowledge.

The above observations underline why focusing on relevant data as early as possible in the NDE process is highly advocated in the context of NDE 4.0. This chapter introduces CS as a potential approach to put this vision to practice. CS theory has shown that sampling signals with sampling rates that are significantly below the Shannon-Nyquist rate is possible without loss of information, provided that prior knowledge about the signals to be acquired is available. In fact, we may sample as low as the actual information rate if our prior knowledge is sufficiently accurate. The reduction in data size compared to traditional sampling approaches coined the term compressed sensing, though it rather means a tailoring of the sampling scheme to the relevant information. In the context of the example given in Fig. 1, the relevant data are the delays and amplitudes of the time-shifted echoes, i.e., the parameters that form the sparse representation. The prior knowledge of the pulse shape is required to model the time-domain measurement. Using the theory detailed in the later sections, it is possible to formulate an alternative sampling scheme that allows fewer measurements to be taken without losing relevant information. The relevant information can be retrieved both from the original time-domain samples and from fewer measurements obtained by a specifically tailored sampling scheme.

The potential benefits that arise from this vision are as follows. Reduced sampling rates can lower the hardware complexity. This is especially beneficial when the necessary sampling rate (i.e., the Nyquist rate) is already larger than existing hardware can handle (at reasonable cost) and in the multi-channel case, since even a reasonable sampling rate at each individual channel quickly leads to intractable data rates for the overall array. By reducing the hardware complexity, the overall cost of the inspection system can be cut, and it can lead to reduced energy consumption in the
front end, or to a reduced size. Eventually, reducing the amount of data that needs to be measured can lead to lower acquisition times and therefore make testing faster. This is opposed by the so-called economy of scale, meaning that the assembly of NDE 4.0 systems should be possible from general basic building blocks that can be produced in large numbers. Thus, a practical solution always needs to balance tailoring the acquisition system to the concrete task against having practical systems for a wider range of applications. A lower complexity of the sensor front end may be one of the key enablers for deploying NDE sensors in the form of mobile/handheld devices or even autonomous sensor networks, comprised of small nodes that run on a low power budget. This ties in nicely with the industrial internet of things (IIoT) envisioned for NDE 4.0 and may, among other things, enable remote inspection and collaborative decision making by virtue of autonomous sensor networks [3]. Such sensors need to save power when making the measurements and when transmitting their information, so data reduction is a crucial task to enable this vision. Finally, note that acquiring only the relevant information can reduce the burden in the sense-making or data interpretation stage, since reducing the data size (not the size of the data set) reduces the complexity in the development of classification models (e.g., using machine learning techniques).

One of the reasons CS is so fitting in this application is that the data compression can be carried out in a very simple manner that is even data-agnostic, i.e., the sensors do not need to be aware of the signal structure. This information is only required at the reconstruction stage, which could be carried out by a fusion center that collects all the compressed representations and has sufficient computation power. Alternatively, cloud computing (which is another innovation envisaged for NDE 4.0 [3]) can be invoked at this point to offload the cumbersome reconstructions to server farms with even more compute capabilities.

The remainder of this chapter is organized as follows. Section "Theoretic Foundations" introduces the basic theoretical concepts of CS and lists some applications where CS has led to major breakthroughs in the respective field. Section "Compressed Sensing for Relevant Data Extraction for NDE 4.0" presents existing applications of CS to NDE 4.0 for three prevalent modalities: ultrasound testing, X-ray Computed Tomography (CT), and terahertz imaging. Finally, section "Summary and Outlook" draws conclusions and provides an outlook.
Theoretic Foundations
In this section, we introduce the basic concepts of CS. We focus on the intuitions here and take a couple of shortcuts; for more thorough derivations and the exact proofs, the interested reader is referred to, e.g., [5, 6] as well as the seminal papers by Candès and Tao [7] and Donoho [8]. CS is a paradigm in sampling theory that provides an analytical framework for the sampling of signals below the Shannon-Nyquist sampling rate. In essence, it has
been shown that signals can be acquired at sub-Nyquist sampling rates without loss of information, provided they possess a sufficiently sparse representation in some domain and that the measurement strategy is suitably chosen. We consider a signal $x \in \mathbb{C}^{M \times 1}$, meaning that $x$ consists of $M$ samples taken at Nyquist rate. Note that this does not mean we actually need to have access to this sampled observation, but instead we use it as a discrete and finite, hence convenient, representation of the continuous signal. One advantage of this representation is that it allows us to write linear transforms on the signal as matrices. The signal is said to be $k$-sparse if it contains at most $k$ nonzero elements, which is formalized by writing $\|x\|_0 \le k$ in terms of the "pseudo-norm" $\|\cdot\|_0$, which counts the number of non-zero elements. We can also consider signals that are $k$-sparse in some basis $\Psi \in \mathbb{C}^{M \times M}$, which is true if $x = \Psi s$ for some $s$ with $\|s\|_0 \le k$. Now, one can show that $s$ (and hence also $x$) can be reconstructed from the measurements $y \in \mathbb{C}^{m \times 1}$, $m \ll M$, provided that

$$ y = \Phi x = \Phi \Psi s, \qquad (1) $$
where $\Phi \in \mathbb{C}^{m \times M}$ is the so-called measurement or compression matrix (hence the term Compressed Sensing). Again, $\Phi$ is just a discrete and convenient representation of the sampling operator, which can in general be a continuous functional that represents the sampling kernels (one for each row of $\Phi$) in the analog domain. Note that we limit ourselves to the noise-free case in this overview. Bounds and guarantees in the noisy case exist as well, see, e.g., [7, 8]. The sparse coefficients of $s$ can be reconstructed by solving

$$ \min_s \|s\|_0 \quad \text{s.t.} \quad \Phi \Psi s = y. \qquad (2) $$
However, finding a solution to (2) is in general an NP-hard problem. For this reason, it is important to consider convex relaxations to (2). In particular, one can show that the solution to

$$ \min_s \|s\|_1 \quad \text{s.t.} \quad \Phi \Psi s = y \qquad (3) $$
is equal to the solution to (2) if $\Phi$ fulfills the so-called null-space property (which is a rather technical condition we omit here for brevity). In contrast to (2), problem (3) is a convex optimization problem, and several algorithms to compute a solution exist. A good overview of the vast number of algorithms to reconstruct $s$ from $y$ can be found in [9]. As the null-space property is hard to verify, simpler proxies are needed to assess the quality of sensing matrices. One example is given by the coherence, defined as

$$ \mu = \max_{i, j,\, i \neq j} \frac{\left| \phi_i^H \phi_j \right|}{\|\phi_i\|_2 \, \|\phi_j\|_2}. \qquad (4) $$
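As a concrete illustration, the coherence (4) can be computed directly from a given measurement matrix. The following sketch is our own, with illustrative values, using a random Gaussian matrix.

```python
# Direct implementation of the coherence mu in (4).
import numpy as np

def coherence(Phi):
    # normalize columns, then take the largest off-diagonal inner product
    cols = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)
    gram = np.abs(cols.conj().T @ cols)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

rng = np.random.default_rng(1)
Phi = rng.standard_normal((20, 50))   # random m x M matrix with m < M
print(coherence(Phi))                 # in [0, 1]; smaller is better for CS
```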
Now, the uniqueness of the reconstruction can be guaranteed via the coherence of $\Phi$, i.e., all $k$-sparse solutions with

$$ k < \frac{1}{2}\left(1 + \frac{1}{\mu}\right) \qquad (5) $$
can be uniquely recovered, since the condition ensures that all matrices formed by restricting $\Phi$ to $k$ arbitrary columns are injective [8]. However, the drawback of this analysis is that the bound may be overly pessimistic. In particular, in the regime interesting for CS (where $m < M$), $\mu$ is bounded from below by the Welch bound. This limits the scenarios where (5) is satisfied to small values of the sparsity level $k$. Improved stability bounds can be found by introducing the Restricted Isometry Property (RIP). While the coherence only considers pair-wise comparisons between the columns of $\Phi$, the RIP extends this to comparisons of all sets of $2k$ columns. It is defined as finding the smallest nonnegative value of $\delta_{2k}$ that satisfies

$$ (1 - \delta_{2k}) \|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_{2k}) \|x\|_2^2 \qquad (6) $$
for all $x$. The constant $\delta_{2k}$ is called the Restricted Isometry Constant (RIC) for sparsity order $2k$, and one states that $\Phi$ has the RIP if $\delta_{2k}$ is sufficiently small. In particular, one can show that if $\delta_{2k} < 1/3$, then $\Phi$ fulfills the null-space property and therefore the minimizer of (3) is the unique sparsest solution to (1). Several other bounds involving RICs of different orders that prove the latter exist as well. Unfortunately, deterministic constructions of matrices with a sufficiently small RIC are difficult; in fact, even computing the RIC of a given matrix is again an NP-hard problem. For deterministic constructions, so far only results that require the number of measurements $m$ to be proportional to $k^2$ exist. In contrast, it can be shown that $\Phi$ satisfies the RIP with very high probability if the elements of $\Phi$ are chosen at random from certain distributions such as a Gaussian or a Bernoulli distribution. For such random matrices, it can then be concluded that successful signal recovery is possible with a very high probability, provided that the minimum number of measurements $m$ scales with $m \ge C \, k \log(M/k)$, where the constant $C > 0$ is independent of $k$, $m$, and $M$. It is important to realize that, in contrast to classical sampling theorems, the number of required measurements is directly proportional to the sparsity order $k$, i.e., the number of unknown elements that are to be detected (i.e., the "complexity" of the signal). It does not depend on generic characteristics of the signals or the sensor itself (such as the bandwidth). The ramifications of this are as follows: The measurements taken by a CS system are designed to acquire only sufficient information to reconstruct the $k$ unknown elements in $s$ based on a priori knowledge, which is conveyed by the sparsifying basis $\Psi$. In theory, this enables the design of sensors that are specifically tailored to a given measurement task by optimally
leveraging the a priori knowledge. Finally, in many applications, the sparse expansion of $x$ cannot be expressed in terms of a basis (which needs to be invertible), but the physically motivated descriptions lend themselves to employing overcomplete dictionaries. The theory presented so far does not apply to this scenario; however, extensions exist. The interested reader is referred to [10].

Many of these theoretical developments were motivated by the observation that in tomography, valid reconstructions are possible from only a few measurements in the Fourier space (as acquired in Magnetic Resonance Imaging (MRI)); in [7] this was called a "puzzling numerical experiment." In fact, CS has reduced the necessary measurement time in clinical MRI drastically and has nowadays become industry standard [11]. In cognitive radar [12], CS allows the hardware requirements of the front ends to be relaxed, enabling the use of larger bandwidths without the need to sample at (or above) the (then also higher) Nyquist frequency at the receiver. This development was previously prohibited by the fact that available Analog-to-Digital Converters (ADCs) did not reach the necessary sampling rate (at least at reasonable cost). To give a final example, the concept of the Single Pixel Camera [13] has led to improvements in imaging, e.g., using Near-Infrared (NIR)/Far-Infrared (FIR) cameras (where pixels are very expensive), as it allows images with a larger number of pixels to be taken at lower cost.
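To tie the pieces above together, the following toy example (a sketch of our own, not code from the chapter) generates a k-sparse vector, compresses it with a random Gaussian matrix as discussed above, and recovers it using Orthogonal Matching Pursuit, a popular greedy stand-in for the convex program (3). All parameter values are illustrative.

```python
# End-to-end toy CS example: compress a k-sparse signal and recover it.
import numpy as np

rng = np.random.default_rng(42)
M, k = 256, 5
m = 50                                  # on the order of C*k*log(M/k)

s_true = np.zeros(M)
support = rng.choice(M, size=k, replace=False)
s_true[support] = rng.standard_normal(k)

Phi = rng.standard_normal((m, M)) / np.sqrt(m)   # random measurement matrix
y = Phi @ s_true                                 # compressed measurements

def omp(A, y, k):
    """Greedy sparse recovery: pick the best-matching column k times."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    s_hat = np.zeros(A.shape[1])
    s_hat[idx] = coef
    return s_hat

s_hat = omp(Phi, y, k)
print(np.linalg.norm(s_hat - s_true))   # close to zero with high probability
```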
Compressed Sensing for Relevant Data Extraction for NDE 4.0
Having introduced the underlying principles of CS, this section establishes the link between CS and advanced sensor concepts in NDE applications. In general, as CS allows to reduce the irrelevance in the data, sensors that embody CS principles provide more meaningful data that can be fed into the digital product memory. Furthermore, they possess features that may be key enablers for use in future NDE 4.0 applications such as reduced power consumption and reduced data amount/data rate, which is particularly attractive for use in autonomous sensor networks. In this section, we present some concrete examples of CS implementations in NDE 4.0 applications, namely, ultrasound testing (section "Applications in Ultrasound Testing"), X-ray CT (section "Applications in X-Ray Computed Tomography"), and terahertz testing (section "Applications in Terahertz Imaging"). Each subsection starts with a short overview on the general state of the art and then continues by stating how CS is employed as an improvement. Note that this list is not comprehensive as CS is prominent in many more modalities such as radar, MRI, NIR/FIR imaging, and others.
Applications in Ultrasound Testing
Ultrasonic inspection has a long-standing history in various fields of NDE, such as the inspection of metal parts, e.g., railways [14], or the inspection of concrete
structures [15]. It originated in a time before digital signal processing was ubiquitously available [16]. In fact, some ultrasonic inspection tasks can be carried out entirely in the analog domain, using a scope to display the voltage signal of a piezoelectric transducer element. When digital devices were introduced into the process, care was taken, with a few exceptions, to make sure the digitalization would not cause any unwanted changes to the signal and that it would represent the original analog signal as closely as possible [2]. One such notable exception is the so-called ALOK method [17]. It directly reduces a measured ultrasound A-scan to the times of flight and amplitudes of the reflections by using a simple heuristic that compares the rising edge of each detected echo with its falling edge. This pre-processing was performed close to the sensor using tailored ultrasound hardware, and only the pre-processed data was used for imaging. It can be viewed as an example of reducing the measurement data in an ultrasound NDE application to only the information relevant to the testing task.

The digitized ultrasound measurements can be used for imaging the interior of the specimen. Imaging techniques range from simple Delay-and-Sum (DAS) schemes such as the Synthetic Aperture Focusing Technique (SAFT) [18] or the Total Focusing Method (TFM) [19] to complex model-based Full Waveform Inversion (FWI) [20]. More recently, sparsity-enforcing techniques have become a powerful alternative. As described in section "Theoretic Foundations," sparsity is one of the main pillars of CS theory. For this reason, we discuss some of the main sparse-recovery-based approaches in ultrasound NDE in more detail in the following.

In the time domain, the individual A-scans of pulse-echo measurements can be modeled as a sum of a small number of time-shifted pulses. The times of flight are then recovered by solving a sparse deconvolution problem [21]. In the spatial domain, the region of interest can be discretized by forming a 2-D or 3-D grid of pixels/voxels. Assuming that only a small number of grid elements is active, i.e., that the number of reflectors is small, sparsity can be enforced [22]. When it comes to defect detection, sparsity can even be increased by assuming that echoes from known geometry features (frontwall, backwall) can be eliminated by appropriate windowing, background subtraction, or similar techniques. A similar approach has been applied to transmission tomography [23].

Using a ULA, a Full Matrix Capture (FMC) measurement X can be collected: a single element of the array transmits, and the reflected echoes are received by all elements of the array (including the transmitting one). This procedure is repeated using every element as a transmitter once; yet, since transmitting and receiving elements are interchangeable, the resulting matrix of measurements is symmetric along its main diagonal. In practice, only the upper or lower triangle of the matrix is measured and then mirrored to create the remaining part [19]. A straightforward approach to exploiting spatial sparsity in the measurements is to reformulate one of the classic DAS schemes as a linear forward model and extend the reconstruction process by a sparsity-enforcing regularizer [22]. The reconstruction of the TFM image from the FMC measurements can be well approximated by [24]
$$ s \approx \Psi^T x, \qquad (7) $$
where $x = \mathrm{vec}(X)$ are the vectorized FMC measurements and $s = \mathrm{vec}(S)$ is the vectorized TFM image. A "sparse TFM image" can then be reconstructed by formulating the TFM reconstruction as an $\ell_1$-regularized minimization problem as

$$ \min_s \|x - \Psi s\|_2 + \lambda \|s\|_1. \qquad (8) $$
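A minimal sketch of how problems like (8) can be solved in practice is given below. The chapter reports using FISTA [9]; for brevity, this sketch of our own implements plain ISTA (the unaccelerated variant) on the common squared-error form of the data-fidelity term, with a random matrix standing in for the TFM operator Ψ. All values are illustrative.

```python
# Illustrative ISTA solver for an l1-regularized reconstruction.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(Psi, x, lam, n_iter=500):
    L = np.linalg.norm(Psi, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(Psi.shape[1])
    for _ in range(n_iter):
        grad = Psi.T @ (Psi @ s - x)         # gradient of 0.5*||x - Psi s||_2^2
        s = soft_threshold(s - grad / L, lam / L)
    return s

# toy usage with random data standing in for FMC measurements
rng = np.random.default_rng(0)
Psi = rng.standard_normal((100, 400))
s_true = np.zeros(400)
s_true[[10, 99, 250]] = [1.0, -2.0, 0.5]
x = Psi @ s_true
s_hat = ista(Psi, x, lam=0.1)
print(np.flatnonzero(np.abs(s_hat) > 0.2))   # approximately [10, 99, 250]
```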
The operator Ψ in (7) can also be replaced by a more sophisticated, physically motivated model [25]. As another application example for sparsifying models in ultrasound, guided waves are usually modeled by their dispersion curves in the f-k domain. It has been observed that a few non-zero components are already sufficient to achieve an accurate approximation [26], i.e., guided waves can be assumed to be sparse in the f-k domain. This method is called Sparse Wavenumber Analysis (SWA).

With the described sparsifying models at hand, compression approaches can be designed. To this end, the question of which and how many measurements are required needs to be answered. As advocated in the introduction, the idea of CS is to compress the incoming data at or close to the sensor. In many cases, this means that the sensor itself has to be adapted to the novel CS-based measurement principle. For that purpose, measurement schemes that lead to a feasible hardware implementation are needed. As already illustrated in Fig. 1, an ultrasound A-scan can be modeled as a train of k shifted pulses. CS suggests designing the measurement kernel (i.e., Φ) to be incoherent with the incoming signal. For a train of short pulses, a natural choice for Φ is therefore a subset of the rows of a Fourier matrix [27, 28]. So instead of sampling the ultrasound echoes on a dense grid in the time domain, only a small number of Fourier coefficients are directly measured. Figure 2 illustrates these two strategies. Intuitively, the benefit of Fourier sampling is easily justified: by measuring Fourier coefficients, each measurement sample contains information about the complete time window the transducer was measuring, instead of only the short time frame of a single sampling period when sampling in the time domain. Hardware implementations to directly measure the Fourier coefficients of ultrasound echoes exist [28]. The coefficients can be extracted from low-rate measurements that have been filtered by an appropriately designed Sum-of-Sincs filter [29]. The number of samples required can be as low as the number of target Fourier coefficients.

In the case of arrays, one driving force to implement CS is to reduce the overall number of channels that need to be processed in parallel during reception and/or the amount of measurement data in postprocessing. Especially in the case of 2-D arrays, the number of channels can quickly grow to a size where current data bus interfaces prohibit the acquisition of all channels concurrently, because the data rate exceeds their limitations. To reduce the number of channels, channel subsampling can be employed. In the context of CS theory, Φ can then be viewed as a Bernoulli matrix where an element is equal to one for each active channel and zero otherwise.
Fig. 2 Illustration of the difference between conventional sampling and CS on the example of the ultrasound data shown in Fig. 1. The relevant data, i.e., the sparse representation, can be obtained from the conventionally sampled data given the prior knowledge of the pulse shape, e.g., by sparse deconvolution. On the other hand, by using CS theory, a different sampling scheme can be developed using the same a-priori knowledge. The measurement data obtained by that scheme can be cast into an interpretable representation by computing a reconstruction step and conveys the same relevant information as the conventionally sampled signal, using fewer observations
The entries of that matrix can be chosen at random; however, formulating the question of which and how many channels are necessary as a sensor placement problem [30, 31] yields configurations with improved performance. A second driving force is the reduction of measurement cycles to reduce the overall measurement time. For example, in FMC the measurement time scales linearly with the number of channels of the array. Instead of cycling through all elements, only a subset can be used for transmission. Note that, in the case of array measurements, Fourier subsampling and spatial subsampling can be straightforwardly combined, further reducing the amount of measurement data [31] and potentially allowing for low-cost and low-footprint multi-channel front ends.

Regarding the measurement of guided waves, note that the wavefields of guided waves propagating on a plate are typically measured using a Laser Doppler Vibrometer. In this measurement setup, the surface of the plate is scanned by the laser, and at each scan position the wavefield is captured over time. This procedure is time consuming, since conventionally the laser has to scan all positions of a dense equidistant grid on the surface to fulfill the spatial Nyquist rate. To reduce the number of scan points and thereby speed up the measurement procedure, subsampling has been employed, either by selecting a reduced number of scan points uniformly at random on the surface (this represents again a random Bernoulli version of Φ) or by employing so-called jittered subsampling [32]. In jittered subsampling, a dense equidistant grid is divided into subsets of size d, where d is the target subsampling or compression ratio. From each subset, one position is randomly chosen to be sampled. Compared to "purely" random sampling, this has two practical advantages: First, the maximal spatial distance between two adjacent sampling positions is fixed to d grid points, minimizing possible artifacts due to spatial aliasing; it is the best trade-off between randomness (and therefore incoherence) and minimizing the maximum distance between two samples. Second, the scan path of the laser can be easily implemented by constantly moving it by d grid points and then randomly pointing it to the selected position in the subset (hence the term "jittered" subsampling). A sketch of this sampling scheme is given below.
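The following sketch (our own illustration, not from the chapter) generates a jittered subsampling pattern as described above: the grid is split into blocks of d points and one position is drawn per block. The grid size and compression ratio are illustrative.

```python
# Illustrative generation of jittered subsampling indices.
import numpy as np

def jittered_indices(n_grid, d, rng):
    """Pick one scan position uniformly at random from each block of size d."""
    starts = np.arange(0, n_grid - d + 1, d)          # block start indices
    return starts + rng.integers(0, d, size=starts.size)

rng = np.random.default_rng(3)
idx = jittered_indices(n_grid=100, d=5, rng=rng)      # 80% fewer scan points
print(idx.size, np.diff(idx).max())                   # 20 samples; bounded gaps
```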
Using spatially subsampled measurements of guided waves, defect detection is performed by comparing the wavefields reconstructed from spatially subsampled laser Doppler measurements using different sparsifying bases Ψ (e.g., Fourier coefficients, curvelets, and wave atoms have been used). The reconstruction is computed under the assumption that the wave propagates freely on the plate, i.e., that there is no obstacle disturbing the wavefield. In the case of a defect, the reconstruction differs considerably at the defect location, since the underlying model(s) do not take a defect into account, leading to a high model error in this region [33]. In this study, jittered subsampling reduced the number of scan points by up to 80% compared to Nyquist sampling. As for the FMC measurements, spatial and temporal subsampling can also be combined in SWA [34]. By using the jittered subsampling approach, the number of spatial measurement positions in synthetic aperture measurements could also be reduced. This would provide a straightforward extension to the setup in [28], again combining spatial and temporal compression.

For illustration purposes, let us consider a few concrete examples. As a first example, we consider defect detection and localization as one of the most widespread applications in ultrasound NDE. Figure 3 shows an example reconstruction from synthetic-aperture single-channel pulse-echo compressed Fourier measurements of a steel specimen [28]. The specimen contains flat bottom holes with diameters ∅2 mm, ∅3 mm, and ∅5 mm as artificial target defects. The top image shows a so-called C-scan image, i.e., a top view of the maximum amplitude along the z-axis (depth). The bottom image shows an equivalent side view computed along the y-axis. The transducer has scanned all positions of a 2-D synthetic aperture sampling grid with a spacing of 0.5 mm. At each scan position, the measurement consists of only a single Fourier coefficient (for details on the employed Fourier sampling strategy, refer to [28]). The reconstruction is then performed using the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) [9].
Fig. 3 Temporal CS for single channel synthetic aperture NDE. (Figure reproduced from [28])
Two aspects of this reconstruction are notable. First, high-resolution 3-D reconstructions superior to state-of-the-art imaging algorithms such as SAFT are possible, even from a single Fourier coefficient per A-scan. Second, most of the information is added by taking measurements from different spatial positions. Since the defect positions are unknown and the field of view of the transducer is limited, a dense spatial sampling grid is necessary. So, from a purely theoretic CS point of view, the given scenario is still "oversampled," which is why the temporal compression can be maximized.
As a next example, we discuss the virtue of data reduction for real-time processing of ultrasonic data in the context of manual inspection supported by an assistance system. One such system is the SmartInspect system [2]. During the manual inspection process, the position of the probe being moved by the engineer is tracked by the assistance system. This information is combined with the actual measurement data and forwarded into the digital product memory, automatically documenting the inspection and making freehand or manual measurements reproducible. At the same time, the additional position information is used for real-time signal processing to create live feedback for the testing engineer. In doing so, the system not only eases the inspection task, it can also be used to assure the quality and reproducibility of the inspection, providing significant added value as a smart sensor system for NDE 4.0. To achieve feedback for assistance systems in real time, the processing of the data needs to be rapid. While a full 3-D reconstruction of the data might be tempting, carrying out this step in real time can be challenging [35]. Here, CS concepts can help by virtue of data reduction, since less data may be faster to process. A particular instance of CS that fits well into this concept is Fourier sampling, where only a small number of Fourier coefficients are measured and imaging is carried out directly in the frequency domain. An image is provided via a concept similar to beamforming: a set of potential point sources on a virtual imaging plane is generated, and for each point source the spatial matched filter to the observed Fourier coefficients is computed. The magnitude of the matched filter provides an intensity that is displayed as a pseudo-reconstruction. It closely resembles a C-image obtained from a 3-D reconstruction; however, its computation can be carried out in real time during the manual inspection, since computing an image directly requires much less compute power than calculating a full 3-D volume in a real 3-D reconstruction. This example is further discussed in [2].
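The following sketch illustrates such a matched-filter pseudo-reconstruction for single-channel pulse-echo Fourier measurements. All names, the direct round-trip delay model, and the assumed sound velocity are hypothetical simplifications; the actual SmartInspect processing [2] differs in detail.

```python
import numpy as np

def pseudo_reconstruction(y, scan_pos, freqs, grid, c=5920.0):
    """Matched-filter intensity image from observed Fourier coefficients.

    y[i, k]  : measured Fourier coefficient at frequency freqs[k] (Hz)
               for scan position scan_pos[i] (pulse-echo, single channel)
    grid     : (G, 3) candidate point sources on the virtual imaging plane
    c        : assumed speed of sound in m/s (here: longitudinal waves in steel)
    """
    image = np.zeros(len(grid))
    for g, r in enumerate(grid):
        # round-trip delay from every scan position to the candidate point source
        tau = 2.0 * np.linalg.norm(scan_pos - r, axis=1) / c
        a = np.exp(-2j * np.pi * np.outer(tau, freqs))  # modeled frequency response
        image[g] = np.abs(np.vdot(a, y))                # spatial matched filter
    return image
```

Because each grid point costs only one inner product over the few measured coefficients, such an image can plausibly be refreshed while the probe is still moving, which is exactly the real-time property exploited above.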
As a final example of applying CS concepts in the context of multi-channel ultrasonic NDE, Fig. 4 shows an aluminum specimen with several side-drilled holes of 2 mm diameter as artificial target defects. An array of 16 channels is placed on the surface of the specimen, as depicted by the red crosses in the center image. The measurement is performed in FMC; however, in reception only a subset of the Fourier coefficients is measured. Further, only four of the elements are used as transmitters, and six of the elements are used as receivers for each transmitter. The reconstruction is again computed using FISTA. Note that only the central four defects are within the physical aperture of the full array.
Fig. 4 Experimental setup for spatiotemporal CS for multi-channel FMC measurements. The figure was adapted from [31], which was distributed under the Creative Commons Attribution (CC BY) https://creativecommons.org/licenses/by/4.0/. The log-scale reconstruction shows good agreement with the schematic and the picture of the specimen, in spite of the compression
These four defects are clearly visible while employing only 9.375% of the channels. Similar techniques to reduce the number of channels (in transmit and/or receive) have been employed in tomography for defect detection [23] but also for corrosion monitoring [36].
So far, with respect to ultrasound as an NDE modality, CS has only been applied to scenarios with homogeneous and isotropic materials that can be well represented by a linear model (with the exception of Lamb waves, which are highly dispersive in nature). The development of measurement strategies for more difficult specimens and materials could be fostered by the combination of data-driven modeling approaches and CS. In the spirit of NDE 4.0, the incorporation of information from prior measurement data (or other reference measurements), as exploited for example in the differential Simultaneous Algebraic Reconstruction Technique (SART) for X-ray CT described in section "Applications in X-Ray Computed Tomography," is another promising open research direction. Another core idea of NDE 4.0 is to establish independent low-power sensors that are embedded into a structure and monitor it continuously. This requires investigating the influence of environmental changes on (CS) measurements. Finally, CS could enable building small arrays with a very large number of active channels, e.g., using the Capacitive Micromachined Ultrasound Transducer (CMUT) technology. Measurements are then taken using only a subset of all channels for transmission and reception in each measurement cycle to limit the data rate to a feasible number.
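A sketch of how such a spatiotemporal channel and frequency selection could be set up is given below. The helper and its parameters are hypothetical, but with 16 elements, 4 transmitters, and 6 receivers per transmitter it reproduces the 24/256 = 9.375% channel fraction of the example above.

```python
import numpy as np

def fmc_subsampling(n_elements, n_tx, n_rx, n_freq, n_freq_kept, rng=None):
    """Randomly choose transmit elements, receive elements, and Fourier bins
    for a spatially and temporally subsampled FMC measurement."""
    rng = np.random.default_rng() if rng is None else rng
    tx = np.sort(rng.choice(n_elements, size=n_tx, replace=False))
    rx = np.sort(rng.choice(n_elements, size=n_rx, replace=False))
    bins = np.sort(rng.choice(n_freq, size=n_freq_kept, replace=False))
    channel_fraction = (n_tx * n_rx) / n_elements**2  # kept tx-rx pairs vs. full FMC
    return tx, rx, bins, channel_fraction

tx, rx, bins, frac = fmc_subsampling(16, 4, 6, n_freq=1024, n_freq_kept=64)
print(f"{frac:.4%} of the channels")  # 9.3750% of the channels
```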
Applications in X-Ray Computed Tomography

X-ray CT has been used in industrial applications for decades [37]. Nowadays, applications range from the characterization of concrete and asphalt [38] and the monitoring of additive manufacturing [39] to the inline inspection of metal parts during fabrication [40] in order to detect defects such as inclusions or pores. In addition to classical image reconstruction techniques such as the Filtered Back Projection (FBP) [41], iterative methods such as SART [42] or the Direct Iterative REconstruction of Computed Tomography Trajectories (DIRECTT) algorithm [43] have received
increased attention recently, as they allow the reconstruction of high-resolution 3-D images of an object even from a low number of projections. In X-ray CT, the image is reconstructed from projections of the radiation traversing the specimen under different angles or from different spatial positions. This aperture is usually created by either rotating or moving the specimen. Ignoring diffraction, the relation between the sought image s and the measured projections x (in terms of total absorption, i.e., negative logarithmic intensities) can be well approximated using a linear model [44], i.e.,

$$ x \approx \Psi s. \tag{9} $$
The simplest approach to reconstruct s from x is the FBP, which uses the back projection operator Ψ^T; it can in turn be implemented efficiently by virtue of the Fast Fourier Transform together with filtering operations. While this is an attractive approach in terms of computational complexity, it requires a regularly and densely sampled scan (in terms of both the scanning angles and the spatial sampling), since any deviation from this sampling grid will lead to visible streaking artifacts. Such artifacts can partially be prevented by means of iterative reconstruction methods. Using SART, s is reconstructed from x by approximating a solution to

$$ \min_{s} \; \lVert x - \Psi s \rVert_2 \quad \text{s.t.} \quad s \succeq 0. \tag{10} $$
The solution to (10) can be found using gradient descent, which leads to a sequence of forward and backward projection operations. If additional prior knowledge is available, the image quality of the SART reconstruction can be improved through regularization, i.e., by extending (10) with an additional term as

$$ \min_{s} \; \lVert x - \Psi s \rVert_2 + \lambda \, h(s) \quad \text{s.t.} \quad s \succeq 0, \tag{11} $$
where λ is a regularization constant and h(s) is a regularizer that favors solutions with properties known from a priori knowledge; we discuss common examples shortly. The solution to (11) can be found by iterating between the gradient step of the standard SART reconstruction and a gradient step on the regularization term [45].
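As a rough illustration of this alternating scheme, the following sketch interleaves a data-fidelity step, a gradient step on a generic, user-supplied regularizer, and the projection onto the non-negativity constraint. All names are hypothetical, and production SART implementations operate matrix-free on the projection geometry rather than with an explicit matrix Ψ.

```python
import numpy as np

def regularized_reconstruction(Psi, x, lam, grad_h, n_iter=100):
    """Approximate (11): min 0.5*||x - Psi@s||_2**2 + lam*h(s)  s.t.  s >= 0."""
    step = 1.0 / np.linalg.norm(Psi, 2) ** 2   # step size from the data term
    s = np.zeros(Psi.shape[1])
    for _ in range(n_iter):
        s = s - step * Psi.T @ (Psi @ s - x)   # forward/backward projection step
        s = s - step * lam * grad_h(s)         # gradient step on the regularizer
        s = np.maximum(s, 0.0)                 # project onto the constraint s >= 0
    return s
```

For instance, passing grad_h = lambda s: np.sign(s) (a subgradient of the l1 norm) yields a crude sparsity-regularized variant of the scheme.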
One drawback of X-ray CT is the data acquisition time, which is usually long [38]. The acquisition time depends on the number of projections that are required to guarantee reconstructions without subsampling artifacts. To reduce the number of required projections, CS has been employed. The simplest embodiment of CS in this context is a subsampling of the angular domain, which leads to fewer projections that need to be taken and thus speeds up the data acquisition process significantly. At the same time, a unique reconstruction from subsampled data is only possible if prior knowledge is available. This prior knowledge can come in various forms. Firstly, especially in the industrial context, the materials are typically piecewise continuous, leading to a piecewise constant image s (i.e., its spatial gradient is sparse).
This prior knowledge can be enforced by Total Variation (TV) regularization, i.e., setting h(s) = ‖s‖_TV. Efficient implementations of (the approximation to) the gradient of the TV norm are discussed in [45]. In addition, a more direct form of sparsity of the images can be enforced when a prior image similar to the one we seek to obtain is available. This is common in inline CT, where all scanned objects are similar; hence a prior image s_ref can, for instance, be obtained from one full (Nyquist) scan of a previous measurement of the same specimen or from a reference specimen. This approach was originally developed in medical CT and has been termed Prior Image Constrained Compressed Sensing (PICCS) [46]. PICCS defines the regularization as h(s, Δs) = α‖Ψ₁s‖₁ + (1 − α)‖Ψ₂Δs‖₁, where Δs = s_ref − s is the difference to a known reference image. The matrices Ψ₁ and Ψ₂ are sparsifying transforms as introduced in section "Theoretic Foundations." With this regularization, (11) closely resembles the classic ℓ₁ problem introduced in (3). In [44], instead of reconstructing s, the authors propose to reconstruct only Δs, which is sparse by design. Subsequently, s can be calculated since s_ref is known. The reconstruction problem is reformulated as

$$ \min_{\Delta s} \; \lVert \Delta x - \Psi \, \Delta s \rVert_2 + \lambda \lVert \Delta s \rVert_1 \quad \text{s.t.} \quad s_{\mathrm{ref}} \succeq \Delta s \tag{12} $$
and termed differential SART. An alternative is given by the DIRECTT algorithm [43]. Here, sparsity is assumed in the image domain, leading to approximating a solution to (11) with h(s) = ‖s‖₁. The solution is calculated iteratively by an adapted version of the Iterative Shrinkage Thresholding Algorithm (ISTA) [9]. DIRECTT has been developed without establishing clear links to CS theory, due to which the literature lacks a thorough investigation in terms of achievable compression or subsampling ratios. However, exemplary reconstructions for the standard "missing wedge" problem of the X-ray CT literature exist [43] and suggest that the algorithm is similarly capable of producing artifact-free high-resolution reconstructions from a low number of projections.
In the following, we show a concrete example of the application of CS in X-ray CT for the inline inspection of castings. In particular, we consider X-ray measurements of a combustion motor piston (an aluminum casting) with a clearly visible void in its mounting area. The scan was carried out with a 432 × 432 pixel detector, and a reconstruction grid of 400 × 400 × 400 voxels (with a voxel size of 0.5 mm) was chosen. This leads to around 400 projections that need to be taken to avoid artifacts according to the Nyquist criterion. Figure 5a shows a conventional SART reconstruction based on these 400 projections. The defect is clearly visible on the right edge. Figure 5b shows the SART reconstruction from only 50 projections (using no regularization besides the non-negativity). Compared to (a), artifacts arise and the defect is less pronounced (and more realistic, smaller defects with a size of only 2–3 voxels would vanish entirely with high probability). Figure 5c shows the reconstruction from 50 projections using TV regularization. The smoothness constraint improves the contours; however, the defect geometry is still partially compromised.
Fig. 5 (a) SART reconstruction using 400 projections (Nyquist). (b–d) CS reconstruction from 50 projections using different levels of prior knowledge: (b) no prior knowledge, (c) piecewise constant, (d) image from reference object [44]
Lastly, Fig. 5d shows the reconstruction from only 50 projections using a reference image from a defect-free object and the differential SART approach [44]. There is almost no visual difference between the reconstruction using SART and the reconstruction using differential SART, although differential SART only requires 12.5% of the measurement data. This leads to a reduction of the measurement time by a factor of up to 8. A more complete study of this setup (including detection probabilities of small defects over a sample of 800 specimens) is available in [47]. There are many active research directions to push the applicability of CS in the context of X-ray CT further. Firstly, the models need to be improved, e.g., to account for X-ray diffraction, which can be seen not only as a distortion to the projections but also as an additional source of information. However, forward modeling of
diffraction remains challenging, and inverting such models in reasonable time requires sophisticated numerical procedures. Secondly, the frequency/energy dependence of the X-ray spectra is often ignored but can lead to significant distortions, in particular in the presence of beam-hardening artifacts. More elaborate reconstruction algorithms can partially alleviate the effect; alternatively, multi-energy detectors can be applied to restore energy sensitivity, at the cost of having to deal with even more data and therefore also more complicated post-processing (which actually increases the need for CS concepts). Thirdly, a major issue in industrial practice is data quality and missing data, as X-ray penetration of larger parts may be difficult and accessibility may limit the angles under which projections can be recorded. This leads to robustness requirements in terms of the algorithmic framework. Machine learning can provide some assistance here, and recent work on ML-aided tomographic reconstructions has been quite encouraging [48].
Applications in Terahertz Imaging

Terahertz imaging is commonly classified into two broad data acquisition modalities: pulsed spectroscopy and continuous wave spectroscopy [49, 50]. These modalities are then adapted to match the specific application. Due to the way terahertz radiation interacts with non-conducting materials, its applications fill a special niche in NDT. Polymers and thermoplastics are often translucent in the terahertz range of the electromagnetic spectrum [51], which, when paired with the inherent temporal resolution at these frequencies, provides a powerful inspection tool. Time-of-flight imaging can be performed by detecting reflected and scattered waveforms, while the transmission spectrum can be studied to detect abnormalities [49, 51]. Terahertz spectroscopy can also be employed to estimate the complex refractive index of materials [52], quantifying their refraction and absorption behavior at different frequencies, with the bandwidth being determined by the measurement modality. This makes it possible to track the curing of reinforced polymer composites, after which time-of-flight tomography can detect voids within the specimen [53]. Thickness measurements can be performed with either modality, by extracting phase information in the case of continuous waves versus time delay estimation with pulsed spectroscopy. Such measurements find applications in the testing of thin coatings such as thermal barriers and paints [50]. Other applications include the testing of foams through scattering, the detection of metal corrosion through reflections, and more [50]. Although terahertz imaging enjoys many benefits and finds special applications in industry, it is not without flaws. Traditional data acquisition techniques often require a time-consuming raster scan, in which a directed beam inspects a small region of the object under test, or a receiver collects data pertaining only to a particular region of the object [49, 51, 54]. This drawback can be circumvented through CS by, for example, leveraging ideas from the single pixel camera [13] and Fourier optics [55]. In a scenario in which transmission imaging is to be performed, a beam of terahertz radiation can be directed at a specimen behind which a sensor is placed. By positioning a plastic lens at the correct distance between the object under
test and the sensor, it is possible to directly measure the Fourier coefficients of the object [55]. If, additionally, a mask which selectively allows radiation through is placed between the object and the lens, only some of the Fourier coefficients will be observed. Lastly, the sensor can behave as a single pixel camera that detects the superposition of all of the coefficients [13]. In a scenario dealing with a single directional transmission, the desired image s ∈ ℂ^(n²) comprises the vectorized complex amplitudes corresponding to each pixel of the 2D projection of the object under test. The sparsifying basis Ψ in this scenario is the matrix that performs a 2D Discrete Fourier Transform on the vectorized image s. Considering a single frequency, the matrix is given by Ψ = F ⊗ F ∈ ℂ^(n²×n²), where each F ∈ ℂ^(n×n) is a standard Fourier matrix and ⊗ is the Kronecker product. Depending on the placement of the masks and the lens, however, the image can be acquired in the usual spatial domain, and Ψ is simply an identity matrix of the appropriate size. The compression matrix Φ ∈ ℂ^(m×n²), with m < n², contains zeros and ones, so that its entries can be modeled as independent Bernoulli random variables, each equal to one with probability p and zero otherwise. If the sensor performs a raster scan one pixel at a time, an image obeying

$$ b = \Psi s \in \mathbb{C}^{n^2} \tag{13} $$
is obtained. The raster scan must obtain each of the n² pixels sequentially. By placing masks between the specimen and the transforming lens, as well as allowing the sensor to collect the sum of the resulting coefficients (i.e., by not using a directive sensor), one can instead measure

$$ y = \Phi b = \Phi \Psi s \in \mathbb{C}^{m}. \tag{14} $$
In (14), each of the samples in y is a snapshot taken by the single pixel camera. Each of these snapshots corresponds to a different one of the masks stored as rows of the compression matrix Φ. The sparsifying properties of the dictionary Ψ can be understood by observing that the 2D Fourier transform is separable into two 1D Fourier transforms, and that each of these transforms utilizes the entirety of its corresponding spatial axis to compute its coefficients. Each pixel in Ψs then contains information about all of the pixels in the corresponding row and column of the original pixel in s. In the alternative scenario, the image may already be spatially sparse, and no transformation is necessary. The imaging task consists in reconstructing the complex amplitudes s of the object through which terahertz radiation is transmitted. This is expressed as

$$ \min_{s} \; \lVert y - \Phi \Psi s \rVert_2^2 + \lambda \, h(s), \tag{15} $$
where the regularization term h(s) can be chosen, for example, as ‖s‖₁ or ‖s‖_TV to promote sparsity. A synthetic example of this procedure is shown in Fig. 6, where a target, its 2D spectrum, and the reconstruction from a series of single pixel camera snapshots are illustrated.
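A minimal simulation of the measurement model (13)–(14) could look as follows. The function name is hypothetical, and the reconstruction of s from y would then be obtained with an ℓ₁ solver along the lines of the FISTA sketch shown earlier (extended to complex-valued data).

```python
import numpy as np

def single_pixel_snapshots(img, m, p=0.05, rng=None):
    """Simulate m single-pixel-camera snapshots of a 2D image.

    Each snapshot is the sum of the 2D Fourier coefficients (b = Psi @ s)
    selected by a random Bernoulli mask, i.e., one row of Phi."""
    rng = np.random.default_rng() if rng is None else rng
    b = np.fft.fft2(img).ravel()                       # b = Psi @ s with Psi = F kron F
    Phi = (rng.random((m, b.size)) < p).astype(float)  # Bernoulli masks as rows of Phi
    return Phi @ b, Phi                                # y = Phi @ b, cf. (14)

# Toy setup loosely matching Fig. 6: p = 0.05 and n**2 / 2 snapshots.
img = np.zeros((32, 32)); img[12:20, 12:20] = 1.0      # synthetic target
y, Phi = single_pixel_snapshots(img, m=32**2 // 2)
```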
Fig. 6 Authors' own synthetic recreation of the reconstruction figure presented in [54] by using a single pixel camera approach instead of a raster scan. (a) shows the original image, i.e., the terahertz radiation passing through an object of interest. (b) is the 2D spectrum of (a) as formed by a lens. (c) is the reconstruction of (a) obtained using CS with ℓ₁ regularization, p = 0.05 for every element of Φ, and n²/2 snapshots of a single pixel camera
The physical implementation of (14), particularly regarding the compression masks Φ, has received much interest. The approach in [54] relies on a sparse raster scan. The work in [56] replaces the traditional raster scan with a single pixel camera, but the masks are printed circuit boards whose copper layers determine the parts of the spectrum that can be observed. More sophisticated incarnations of the technique include spinning disks on which the masks are printed, imparting a block-Toeplitz structure to Φ [57], and photo-conductive modulators on which the masks are created through a second pump beam [58]. Such implementations have high affinity with tasks such as polymer weld inspection [51], in which the acquisition of 2D images of the transmission spectrum is sped up by completely avoiding a raster scan. Both pulsed [59] and continuous wave [60] synthetic aperture imaging applications are also limited by the measurement speed. Due to the high frequencies employed in terahertz imaging, mechanical delay lines and repeated excitation are often employed [61]. As a result, samples are collected at rates on the order of kilohertz, which, when paired with common procedures such as averaging for SNR improvement, leads to lengthy measurement procedures. In these settings, the movement of the transmitter and receiver in a raster scan fashion is unavoidable. However, there is yet potential to exploit the similarities between terahertz, X-ray, and ultrasound imaging so as to translate compression techniques from these and other related fields, enabling the reduction of scan positions.
Summary and Outlook

In this chapter, we discuss the framework of CS and its suitability for data reduction in the context of NDE 4.0. After introducing some of the mathematical basics, we discuss the suitability of CS in some exemplary NDE modalities, in particular ultrasound, X-ray, and terahertz. We show that CS can provide relevant benefits in all of them, in particular, reducing the required data rate when sampling as well as the
total data amount that needs to be processed. Moreover, concepts and insights from the CS field provide relevant methodologies for data reconstruction, leading to improved detection or imaging quality. The developments in this regard benefit greatly from the availability of advanced computing resources (directly in the devices or indirectly via the cloud), which help us to employ more powerful algorithms with enhanced numerical stability. In the near future, we expect an even stronger drive in this direction by virtue of a convergence with machine learning concepts and neuromorphic computing. On the other hand, in some applications, the data reduction achieved through CS can improve the processing speed, achieving real-time capabilities, which can be a key enabler for some NDE 4.0 use cases such as assistance systems. At the same time, we can observe that work in the CS context is in many cases still rather academic. While first hardware implementations have appeared that look very promising, even these are often still in a "laboratory stage" and not yet directly applicable for industrial use. This also means that some of the benefits promised by the CS methodology, like a reduction in the power consumption of sensors, are yet to be demonstrated using custom-designed electronic front ends under realistic conditions. Significant work in this area is still needed, in particular bringing the data reduction ideas closer to the sensor, or finally integrating them into the sensor itself, to actually realize the promised gains in practice. The potential for this is very significant, especially given how much prior knowledge is typically available in the industrial NDE context. Applications like inline inspection or continuous monitoring (SHM) could be the first to benefit, but others may follow. Some of the potential has not been explored yet, like cross-modal prior knowledge in multi-sensor NDE systems that apply sensor fusion. A significant challenge these ideas are facing is the issue of calibration, as exploiting prior knowledge about sensor signals is only feasible if the sensor responses are actually known. This is typically simple when the sensors are static, but aging as well as changes in the environmental operating conditions may present significant difficulties that require close attention. Potentially, some of these could be solved by self-calibrating, auto-adaptive sensors that use machine learning concepts to observe themselves and infer their states. Machine learning methods also have strong ties to the algorithmic frameworks that drive the iterative reconstruction methods discussed above, as recent work in the area of algorithm unfolding has shown. Overall, we can expect that the fields of compressed sensing and machine learning will converge even further and share their insights to the mutual benefit of both. After all, it is not uncommon that it takes the joint effort and experience of more than one engineering community to put a larger vision into practice. The vision of intelligent, cognitive, auto-adaptive sensors in NDE 4.0 will not be an exception.
Cross-References

▶ Applied Artificial Intelligence in NDE
▶ Image Processing 2D/3D with Emphasis on Image Segmentation
Acknowledgments This work was supported by the Fraunhofer Internal Programs under Grant No. Attract 025-601128 as well as the German research foundation (DFG) under grant number GA 2062/5-1 “CoSMaDU.”
References 1. Cawley P. Structural health monitoring: closing the gap between research and industrial deployment. Struct Health Monit. 2018;17(5):1225–44. 2. Valeske B, Osman A, Römer F, Tschuncky R. Next generation NDE systems as IIoT elements of industry 4.0. Res Nondestruct Eval. 2020;31:340. 3. Vrana J, Singh R. NDE 4.0 – a design thinking perspective. J Nondestruct Eval. 2021;40(1):8. 4. Unser M. Sampling – 50 years after Shannon. Proc IEEE. 2000;88(4):569–87. 5. Foucart S, Rauhut H. A mathematical introduction to compressive sensing. Birkhäuser; 2013. 6. Eldar YC. Sampling theory: beyond bandlimited systems. Cambridge University Press; 2015. 7. Candès EJ, Romberg J, Tao T. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory. 2006;52(2):489–509. 8. Donoho DL. Compressed sensing. IEEE Trans Inf Theory. 2006;52(4):1289–306. 9. Marques EC, Maciel N, Naviner L, Cai H, Yang J. A review of sparse recovery algorithms. IEEE Access. 2019;7:1300–22. 10. Candès EJ, Eldar YC, Needell D, Randall P. Compressed sensing with coherent and redundant dictionaries. Appl Comput Harmon Anal. 2011;31(1):59–73. 11. Sandino CM, Cheng JY, Chen F, Mardani M, Pauly JM, Vasanawala SS. Compressed sensing: from research to clinical practice with deep neural networks: shortening scan times for magnetic resonance imaging. IEEE Signal Process Mag. 2020;37(1):117–27. 12. Cohen D, Eldar YC. Sub-nyquist radar systems: temporal, spectral, and spatial compression. IEEE Signal Process Mag. 2018;35(6):35–58. 13. Duarte MF, Davenport MA, Takhar D, Laska JN, Sun T, Kelly KF, Baraniuk RG. Single-pixel imaging via compressive sampling. IEEE Signal Process Mag. 2008;25(2):83–91. 14. Rockstroh B, Kappes W, Walte F, Kröning M, Bessert S, Schäfer W, Schallert R, Bähr W, Joneit D, Montnacher A, et al. Ultrasonic and eddy-current inspection of rail wheels and wheel set axles. In: 17th world conference on nondestructive testing, p. 25–8, 2008. 15. Núñez DL, Molero-Armenta MÁ, Izquierdo MÁG, Hernández MG, Velayos JJA. Ultrasound transmission tomography for detecting and measuring cylindrical objects embedded in concrete. Sensors. 2017;17(5):1085. 16. Yee BGW, Couchman JC. Application of ultrasound to NDE of materials. IEEE Trans Sonics Ultrasonics. 1976;23(5):299–305. 17. Rieder H, Salzburger H-J. Alok-imaging and-reconstruction of surface defects on heavy plates with EMA-Rayleigh wave transducers. In: Review of progress in quantitative nondestructive evaluation. Springer; 1989. p. 1127–35. 18. Spies M, Rieder H, Dillhöfer A, Schmitz V, Müller W. Synthetic aperture focusing and time-offlight diffraction ultrasonic imaging – past and present. J Nondestruct Eval. 2012;31:310–23. 19. Holmes C, Drinkwater BW, Wilcox PD. Post-processing of the full matrix of ultrasonic transmit-receive array data for non-destructive evaluation. NDT & E Int. 2005;38(8):701–11. 20. Nguyen LT, Modrak RT. Ultrasonic wavefield inversion and migration in complex heterogeneous structures: 2d numerical imaging and nondestructive testing experiments. Ultrasonics. 2018;82:357–70. 21. Boßmann F, Plonka G, Peter T, Nemitz O, Schmitte T. Sparse deconvolution methods for ultrasonic NDT. J Nondestruct Eval. 2012;31(3):225–44. 22. Semper S, Kirchhof J, Wagner C, Krieg F, Römer F, Osman A, Del Galdo G. Defect detection from 3d ultrasonic measurements using matrix-free sparse recovery algorithms. In: 2018 26th European Signal Processing Conference (EUSIPCO), p. 1700–4, 2018.
23. Jiang B, Zhao W, Wang W. Improved ultrasonic computerized tomography method for STS (steel tube slab) structure based on compressive sampling algorithm. Appl Sci. 2017;7(5):432. 24. Laroche N, Bourguignon S, Carcreff E, Idier J, Duclos A. An inverse approach for ultrasonic imaging from full matrix capture data. Application to resolution enhancement in NDT. IEEE Trans Ultrason Ferroelectr Freq Control. 2020;67:1877–87. 25. Berthon B, Morichau-Beauchant P, Porée J, Garofalakis A, Tavitian B, Tanter M, Provost J. Spatiotemporal matrix image formation for programmable ultrasound scanners. Phys Med Biol. 2018;63(3):03NT03. 26. Harley JB, Moura JMF. Sparse recovery of the multimodal and dispersive characteristics of lamb waves. J Acoust Soc Am. 2013;133(5):2732–45. 27. Semper S, Kirchhof J, Wagner C, Krieg F, Römer F, Del Galdo G. Defect detection from compressed 3-D ultrasonic frequency measurements. In Proceedings of the 27th European Signal Processing Conference (EUSIPCO-2019), A Coruna, Spain, September 2019. 28. Kirchhof J, Semper S, Wagner C, Pérez E, Römer F, Del Galdo G. Frequency sub-sampling of ultrasound non-destructive measurements: acquisition. Reconstruct Perform. 2020. arXiv: 2012.04534. 29. Mulleti S, Lee K, Eldar YC. Identifiability conditions for compressive multichannel blind deconvolution. IEEE Trans Signal Process. 2020;68:4627–42. 30. Pérez E, Kirchhof J, Semper S, Krieg F, Römer F. Total focusing method with subsampling in space and frequency domain for ultrasound NDT. In: Proceedings of the 2019 IEEE international ultrasonics symposium, Glasgow, UK, October 2019. 31. Pérez E, Kirchhof J, Krieg F, Römer F. Subsampling approaches for compressed sensing with ultrasound arrays in non-destructive testing. MDPI Sensors, November 2020. 32. Hennenfent G, Herrmann FJ. Simply denoise: Wavefield reconstruction via jittered undersampling. Geophysics. 2008;73:V19. 33. Esfandabadi YK, De Marchi L, Testoni N, Marzani A, Masetti G. Full wavefield analysis and damage imaging through compressive sensing in lamb wave inspections. IEEE Trans Ultrason Ferroelectr Freq Control. 2018;65(2):269–80. 34. Sabeti S, Harley JB. Spatio-temporal undersampling: recovering ultrasonic guided wavefields from incomplete data with compressive sensing. Mech Syst Signal Process. 2020;140:106694. 35. Krieg F, Kirchhof J, Kodera S, Lugin S, Ihlow A, Schwender T, Del Galdo G, Römer F, Osman A. SAFT processing for manually acquired ultrasonic measurement data with 3D smartInspect. Insight – J Br Inst Non-Destruct Test. 2019;61:663. 36. Chang M, Yuan S, Guo F. Corrosion monitoring using a new compressed sensing-based tomographic method. Ultrasonics. 2020;101:105988. 37. Kruger RP. Computed tomography for inspection of industrial objects. Technical report. Los Alamos National Lab; 1980. 38. du Plessis A, Boshoff WP. A review of X-ray computed tomography of concrete and asphalt construction materials. Constr Build Mater. 2019;199:637–51. 39. Thompson A, Maskery I, Leach RK. X-ray computed tomography for additive manufacturing: a review. Meas Sci Technol. 2016;27(7):072001. 40. Oeckl S, Gruber R, Schön W, Eberhorn M, Bauscher I, Wenzel T, Hanke R. Process integrated inspection of motor pistons using computerized tomography. In: Microelectronic systems. Springer; 2011. p. 277–86. 41. Gordon R, Herman GT, Johnson SA. Image reconstruction from projections. Sci Am. 1975;233(4):56–71. 42. Andersen AH, Kak AC. Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm. 
Ultrason Imaging. 1984;6(1):81–94. 43. Magkos S, Kupsch A, Bruno G. Direct iterative reconstruction of computed tomography trajectories: reconstruction from limited number of projections with DIRECTT. Rev Sci Instrum. 2020;91(10):103107. 44. Römer F, Großmann M, Schön T, Gruber R, Jung A, Oeckl S, Del Galdo G. Differential SART for sub-Nyquist tomographic reconstruction in presence of misalignments. In: 2017 25th European Signal Processing Conference (EUSIPCO), p. 2354–8, 2017.
45. Sidky EY, Kao C-M, Pan X. Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT. J Xray Sci Technol. 2006;14(2):119–39. 46. Chen G-H, Tang J, Leng S. Prior image constrained compressed sensing (PICCS): a method to accurately reconstruct dynamic CT images from highly undersampled projection data sets. Med Phys. 2008;35(2):660–3. 47. Schön T, Römer F, Oeckl S, Großmann M, Gruber R, Jung A, Del Galdo G. Cycle time reduction in process integrated computed tomography using compressed sensing. In: Proceedings of the 13th international meeting on fully three-dimensional image reconstruction in radiology and nuclear medicine (Fully 3D), Newport, RI, May 2015. 48. Wang G, Zhang Y, Ye X, Mou X. Machine learning for tomographic imaging. IOP Publishing; 2019. p. 2053–563. 49. Jansen C, Wietzke S, Peters O, Scheller M, Vieweg N, Salhi M, Krumbholz N, Jördens C, Hochrein T, Koch M. Terahertz imaging: applications and perspectives. Appl Opt. 2010;49(19): E48–57. 50. Tao YH, Fitzgerald AJ, Wallace VP. Non-contact, non-destructive testing in various industrial sectors with terahertz technology. Sensors. 2020;20(3):712. 51. Wietzke S, Jördens C, Krumbholz N, Baudrit B, Bastian M, Koch M. Terahertz imaging: a new non-destructive technique for the quality control of plastic weld joints. J Eur Opt Soc-Rapid Publ. 2007;2. ISSN 1990-2573. Available at: http://www.jeos.org/index.php/jeos_rp/article/ view/07013. 52. Pupeza I, Wilk R, Koch M. Highly accurate optical material parameter determination with THz time-domain spectroscopy. Opt Express. 2007;15(7):4335–50. 53. Yakovlev EV, Zaytsev KI, Dolganova IN, Yurchenko SO. Non-destructive evaluation of polymer composite materials at the manufacturing stage using terahertz pulsed spectroscopy. IEEE Trans Terahertz Sci Technol. 2015;5(5):810–6. 54. Chan WL, Moravec ML, Baraniuk RG, Mittleman DM. Terahertz imaging with compressed sensing and phase retrieval. Opt Lett. 2008;33(9):974–6. 55. Ersoy OK. Diffraction, Fourier optics and imaging, vol. 30. Wiley; 2006. 56. Chan WL, Charan K, Takhar D, Kelly KF, Baraniuk RG, Mittleman DM. A single-pixel terahertz imaging system based on compressed sensing. Appl Phys Lett. 2008;93(12):121105. 57. Shen H, Newman N, Gan L, Zhong S, Huang Y, Shen Y-C. Compressed terahertz imaging system using a spin disk. In: 35th international conference on infrared, millimeter, and terahertz waves. IEEE; 2010. p. 1–2. 58. Stantchev RI, Phillips DB, Hobson P, Hornett SM, Padgett MJ, Hendry E. Compressed sensing with near-field THz radiation. Optica. 2017;4(8):989–92. 59. Palka N, Miedzinska D. Detailed non-destructive evaluation of UHMWPE composites in the terahertz range. Opt Quant Electron. 2014;46(4):515–25. 60. Cristofani E, Friederich F, Wohnsiedler S, Matheis C, Jonuscheit J, Vandewal M, Beigang R. Nondestructive testing potential evaluation of a terahertz frequency-modulated continuouswave imager for composite materials inspection. Opt Eng. 2014;53(3):031211. 61. Mamrashev A, Minakov F, Maximov L, Nikolaev N, Chapovsky P. Correction of optical delay line errors in terahertz time-domain spectroscopy. Electronics. 2019;8(12):1408.
Semantic Interoperability as Key for a NDE 4.0 Data Management
14
Christian T. Geiss and Manuel Gramlich
Contents

Introduction
Semantic Interoperability in a Holistic Asset Management Approach
Basics on Technological Asset Management
Semantic Operability in Operational Database Management Systems
Summary
Cross-References
References
Abstract
In data management, especially when working with the significant amounts of data characteristic of NDE, the key to an efficient and reliable workflow is interoperability between data. To use computerized maintenance management systems (CMMS) advantageously in future NDE 4.0 data management frameworks, semantic interoperability needs to be reached to ensure a smooth data flow throughout the whole process. However, it is of crucial importance to properly select, implement, and utilize CMMS beforehand. To date, no system has been implemented which pursues a completely holistic approach in all aspects of a CMMS solution. Database models of existing CMMS are based on classic relational database architectures which are missing interconnectivity. A holistic data management is especially required for NDE 4.0 in all industries with high reliability demands, for example, energy conversion and distribution, infrastructure, railway, aviation, public transportation, and the manufacturing and process industry. Existing CMMS solutions focus on explicit strengths to analyze single components of a physical asset in a technological way. Superior data analytics
focusing on chains of effects and tracking of root causes, integrated throughout the whole plant life cycle, does not yet exist. However, this would be a major step toward gaining the complete understanding needed to optimize asset maintenance management strategies and tactics. The following chapter introduces the key facts using examples from the wind industry.

Keywords
Semantic interoperability · Digital twin · Operational database management system · Predictive maintenance · Artificial intelligence · Asset management · Data warehouse
Introduction

The concept of maintenance has undergone notable changes in the last decades. The main reason for the shift of approach was increasing pressure to achieve higher plant availability while containing or reducing costs. However, the increasing understanding of quality and the growing complexity of technical systems were also driving forces. Finally, the development of better maintenance techniques and applied sensor and nondestructive testing systems made asset management in many technical areas an essential discipline in the pursuit of more efficiency and sustainability. Maintenance management can be traced through three generations since the 1930s [1]. The first generation is the period up to World War II. Industry was not very highly mechanized, and downtimes did not matter much. Most of the equipment applied in industry was simple, over-designed, and easy to repair. Thus, the early prevention of equipment failure was not a very high priority, and systematic maintenance approaches were not needed. A simple repair-based maintenance policy was mostly applied and, in most cases, sufficient. However, servicing and lubrication routines were already in place for more complex systems. The second generation is placed between the 1950s and the mid-1970s. In the post-war period, the pressure of demand for goods increased, while industrial manpower dropped. This situation drove industry to a higher level of mechanization; more complex machines were designed and operated, and industry and the economy began to rely on them more and more. In this atmosphere, the concept of preventive maintenance was born, and downtime minimization came into sharper focus. The conclusion that equipment failures could and should be prevented acted as a catalyst. This development caused a rise of maintenance cost relative to other operating costs. The first maintenance planning and control systems emerged. Furthermore, engineering theory gained a better understanding of the dependency of failure probability on the service life of components. The development of the well-known "bathtub" curve replaced the concept of linearly increasing failure rates over the lifetime of a component in many applications. However, maintenance strategies in the second generation still consisted, for the most part, of equipment overhauls at fixed intervals. Another side effect of the increased mechanization and more complex
Fig. 1 Evolution of maintenance strategies in the last decades: toward maintenance 4.0
machinery was that the amount of capital invested in fixed assets rose, pushing people to investigate ways to maximize the service life of their assets. The time frame between the 1970s and the 2000s constitutes the third generation. Reliability and availability are now key topics in applied methodology and research. Furthermore, the structural integrity of physical assets was given more focus in many branches, especially in offshore and aerospace environments. Also, the concept of failure mode and effect analysis (FMEA) extended from aerospace applications to other industrial asset management applications. An important side effect was that more and more component-based views entered the maintenance policies. Upcoming and applicable sensor technology incrementally enabled the usage of condition-based maintenance strategies. Also, the incorporation of asset management tools to maximize the return on investment of physical assets was stressed more extensively. Maintenance or NDE 4.0 constitutes the latest development stage of maintenance as a discipline. Sensor applications for condition-based maintenance strategies are used on a broad basis. Also, the benefits of wireless sensor systems are in focus for subsequent sensor installations on existing and operating assets. In environments with a surplus of data, data analysis methods are fundamentally important for predictive maintenance strategies, which are employed to avoid unplanned downtimes. Connectivity and system thinking introduce holistic views on modern maintenance optimization problems and find use in modern maintenance management systems. Figure 1 displays the main development stages on a generic timescale.
Semantic Interoperability in a Holistic Asset Management Approach The status quo in asset management is describable as a transitional state. To guarantee marketable prices, integral asset management concepts will play a key role. The industry has not yet achieved an integrative asset management framework. One of the main deficits is the missing systematics in the overall asset management goal on a strategic and operative level. The results are data solutions coming from the equipment manufacturer who is missing the holistic picture. The optimal maintenance and spare part strategy is optimized only on an isolated point of view. The current data availability and data quality are low. There is no consistent information
356
C. T. Geiss and M. Gramlich
structure. A uniform incident classification system throughout the industry is not state of the art, therefore operational and maintenance incidents cannot be described one-to-one. The description of failure modes, root causes, and subsequent failures is biased, and logical conclusion and historical analysis for sustainable system knowledge to prevent severe failures is only possible to a limited extent. Lifetime documentation is incomplete and fragmentary, not digitalized and sometimes nonexistent. If some data is available, technical incident data and the resulting operation and maintenance cost data are acquired and stored in different systems with no logical connection. To enable a holistic view semantic interoperability can be seen as a problem solver for the plethora of different data types. Considering maintenance software and computerized maintenance management systems (CMMS) for NDE 4.0, proprietary and isolated software solutions inhibit integrative and holistic views on complex maintenance problems. General integrative standards are missing or in the stage of development and not ready for broad industry application. A major future topic will be lifetime-extension programs of many technological assets reaching the end of their original design life, carrying considerable structural wear reserves which need to be used for a more sustainable and economical operation in the future. To optimize the cost-yield ratio of technological assets from an operator’s view, increasing the annual output by reducing unplanned maintenance downtimes, lengthening the service life while retaining a reasonable reliability level, and reducing the overall maintenance and repair cost are the main levers. A holistic asset management concept should mainly provide methods and tools to ensure an optimal service lifetime – also beyond the originally designed design life, while guaranteeing a high availability of the system at optimal cost. Especially in focus are all load transferring components that are relevant for the structural integrity of the system and the control and protection system. The asset assessments in an integrated asset management approach shall always be based on a combination of an analytical part and a practical part. Simplified approaches using data available to the operators – such as SCADA data – must play a key role in an integrated asset management framework. The application of generic and deterministic engineering models needs consideration of the respective uncertainty. Load measurements with structural health and condition monitoring sensor systems as well as nondestructive inspection techniques should also be included to reduce model uncertainties. An asset-specific inspection program considering scope and intervals must be developed based on the calculation results. Furthermore, the following information must be taken into consideration as part of the assessment: • • • • • • •
Operational history Maintenance history Reports from inspections Failure reports/reports on extraordinary maintenance activities Documentation on exchange of components Documentation on changed controller settings Field experience with asset type
14
Semantic Interoperability as Key for a NDE 4.0 Data Management
357
All information available to the operators must be integrated into a semantic interoperability scheme as a standard, and once that is achieved, an operational database management system can assist in logically linking and storing all different data streams. Otherwise interfaces between said domains tend to forego information which inevitably leads to inferior data and downfall decisions.
Basics on Technological Asset Management The named scheme needs to be described with the purpose of its need and not the use of its component. Since testing is part of an integral asset management strategy, the interoperability scheme should include standardized wording and semantics. The Institute of Asset Management (IAM) defines asset management as the following: “Systematic and coordinated activities and practices through which an organization optimally manages its physical assets, and their associated performance, risks and expenditure over their lifecycle for the purpose of achieving its organizational strategic plan” [2]. There are two approaches of maintenance optimization: qualitative and quantitative. Qualitative maintenance optimization is often biased with subjective opinion and experience. Quantitative maintenance optimization employs mathematical models in which the cost and benefit of maintenance are quantified and an optimum balance between both is obtained. The concept for an integrated asset management system, subject in this chapter, is developed under the objective of designing a holistic and integrated asset management system for technological assets from the management process level to the specific sensor application – and back. The basic design is displayed in Fig. 2.
Service strategy optimization Risk quantification & controlling Technical condition assessment Data processing methods & tools
Economical asset analysis
Planning & Construction
Manufacturing & Installation
Operation & Maintenance
Decomissioning & Recycling
LC 1 - 2
LC 3 - 5
LC 6 - 8
LC 9 - 11
Operational data base management system
Sensor Technology
LCP: Life Cycle Phase
Fig. 2 Holistic asset management framework as basis for semantic operability models
358
C. T. Geiss and M. Gramlich
A core element of a future integrative asset management system must be a holistic operational database management system, which integrates all data streams relevant for operation and management activities of each life cycle phase, in this chapter illustrated for wind turbines. Of course this approach is feasible within any industry. From an asset management view point, the operation and maintenance time frame is the most important, wherein asset management activities can directly influence the plant performance. However, all other life cycle phases must also be contemplated for their asset management requirements. For instance, the basic plant configuration and quality framework is set at the beginning of the whole plant life cycle and therefore defines specific boundary conditions in which asset management optimization must take place. Most of the failures which lead to unplanned maintenance downtime and directly influence the technological and economic performance are rooted in the design, construction, and erection phases. Finally, the decommissioning and recycling phases have to be considered as last and sustainable life cycle phase of technological assets, in which criteria and strategies for lifetime enhancement activities, de-investment strategies, and recycling possibilities have to be considered. As an integrated economical approach – implying all life cycle phases – holistic costassociated evaluation methodologies must be established. Economical asset analysis for most operators is paramount on a microeconomical scale; however, macroeconomic scales must also be considered to empower a higher level of sustainability. Naturally, economic performance is strongly dependent on the technical condition in which the system and its components are, however, an integrative asset management system which must always consider the optimization behavior of technical maintenance and enhancement activities according to the economical dimension. A certain level of risk of unplanned downtimes and unreliable systems needs to be considered from an operator’s view point; however, current risk analysis methodologies make it difficult to quantify operational risks, thus risk and cost optimal maintenance strategies are not yet deployed, but strongly needed in a future highly competitive market. To seek and find such optimal asset management strategies, operators depend on adequate field information. Concerning this matter, current sensor technology for monitoring and inspection techniques should be analyzed according to their suitability in specific monitoring or inspection tasks, deployed, and integrated in a future holistic asset management framework. Data acquired with sensor technology must be processed with specific algorithms, integrating all those views, and enable optimization in such a framework. Within this scope an asset management system can be defined as the coordinating and controlling system of all activities in planning and target attainment of the predefined asset management goals. As an ISO management standard, ISO 55000 primarily defines specifications on an organizational level. Definitions and specifications on activity level in the different application branches are not provided in detail. However, the general framework for the design of an asset management system is well defined and specified. ISO 55001 introduces top-level information requirements and performance evaluation requirements for an asset management system on the management standard level.
14
Semantic Interoperability as Key for a NDE 4.0 Data Management
359
• General information requirements (7.1) for an asset management system, information on: – Risks – Roles and responsibilities – Asset management processes – Exchange of information between different roles – Impact of quality, availability and management of information on organizational decision making • Furthermore, detailed information requirements (7.5) should consider: – Attribute requirements of identified information – Quality requirements of identified information – How and when information is to be collected, analyzed and evaluated – Processes to maintain the information should be specified – Traceability of financial and technical data must be considered The conception of performance evaluation within the ISO 55001 framework defines requirements for monitoring, measurement, analysis, and evaluation of technical and financial data in an asset management system. • Performance evaluation requirements (9) should determine: – What needs to be monitored and measured – Methods for monitoring, measurement, analysis and evaluation – When monitoring/NDE should be performed – When monitoring/NDE data should be analyzed – Reports on the asset performance and the effectiveness on the AM system • Based on the performance evaluation the standard defines specific management reviews on – Non-conformities and corrective actions – Monitoring and management results – Change of risk profiles Besides, the standards also specify potential information sources for an asset management system. Figure 3 displays the general asset management process as defined in ISO 55000. According to the ISO 55000 understanding of asset management, the asset management strategy of an organization should be defined in coordination with the general strategic goals of an organization. Thus, an organization’s asset management strategy will bear a decisive importance looking at general business strategies and goals in a global market in every maintenance-intensive industry, especially in the energy sector. Derived strategic asset management goals should be summarized in a strategic asset management plan (SAMP) on a higher level. Based on those strategic asset management guidelines, plant-specific asset management plans (AMP) will be implemented to translate those strategic views into technological dimensions, holistically considering maintenance needs and requirements of assets under consideration. Simultaneously, also organizational parameters and future requirements on asset management organizations, to run new asset management
360
Organizational Environment – Market -
C. T. Geiss and M. Gramlich
Organizational Strategy
Asset Management Strategy Strategic Asset Management Plan (SAMP) - Asset Management Goals -
Asset Management Plan (AMP)
Development of an Asset Management System
Asset Management System Organisation Implementation of Asset Management Plan
Asset Portfolio
Performance evaluations and improvements
Fig. 3 Basic asset management process according to ISO 55001
strategies, have to be considered, which should lead to a holistic asset management organization. Last, to realize a sustainable process organization in the sense of continuous improvement, performance evaluations of the asset management systems must be carried out to adjust and optimize such asset management systems constantly.
Semantic Interoperability in Operational Database Management Systems

The core focus of a holistic operational database management system (ODBMS) for asset management purposes is the integration of information over the whole life cycle of assets and the interdisciplinary cooperation and communication of data. Therefore, a horizontal integration (different life cycle phases, contractor, OEM, operator) as well as a vertical integration (actuator, sensor, control level, management level) of its systems needs to be realized and tightly interconnected through semantics. A modern ODBMS should be postrelational, with no need for entity-relationship models, in order to be compatible with big data analytics and IoT technology and to run predictive analytics at its best performance.

In an integrated asset management system, a main task is remote monitoring and diagnostics of many subsystems and components. The process of gathering the data relevant to the analysis tasks is in many cases the most time consuming. Ontology-based data models are semantically rich conceptual domain models and can support NDE engineers in their data analysis tasks, because ontologies describe the domain of interest on a higher level of abstraction in
a clear manner. In addition, ontologies have become a common and successful way of describing application domains in biology, medicine, and semantic web services [3, 4, 5], and their use in describing systems is naturally not limited to those domains. Several formal languages are available for designing ontologies; for example, the Web Ontology Language (OWL) is standardized by the World Wide Web Consortium (W3C). A mapping concept connects the ontology with the schema of the data, which, by combining explicit and implicit information, enhances the diagnostic performance of integrated asset management systems. Furthermore, it also enables the combination of static data with event-streaming data, such as SCADA (Supervisory Control and Data Acquisition) data, sensor data, or event data. Compton describes best practices for semantic sensor network ontologies, integrating sensor data into a semantic asset management system [6]. Besides risk quantification and controlling, applying qualitative and quantitative methods, and deploying obsolescence management, the ability to track the economic asset performance in such frameworks is essential, for example, by analyzing Life Cycle Costs (LCC) or the Total Cost of Ownership (TCO).

The processes in the operation and maintenance of technological assets are skill and knowledge intensive. The knowledge required for holistic maintenance strategies must be managed and structured. Semantic networks interconnect information; due to the network structure, there are no limitations as is the case for common tree structures. The basic elements in semantic networks are Objects, or individuals. Objects are assigned to Types. Objects and Types have characteristics and features: features related only to Objects are so-called Attributes, and characteristics that interconnect Objects are called Relations. The knowledge base in the semantic network emerges from these interconnecting relations. To model arbitrary circumstances, Attributes and Relations are defined accordingly. The gist of the Types-and-Objects concept is that every element occurs only once in the system – there is no redundancy. All relevant information is connected only once with the element – no dispersion of information. The basic ideas and name assignments go back to Ross Quillian (b. 1931), who described the human representation of knowledge as a semantic network [7].

The nonredundant elements can occur in different contexts via their interconnections in the semantic network. In principle, the single elements are also separated from their designation in the system: designations are attached to the elements via Attributes, so multilingualism of a system can easily be implemented. For structuring reasons, the Objects connected to a Type can be grouped under Subtypes. Every Attribute and every Relation is defined and connected to one Type. Through the interconnection of Types and Objects, the predefined characteristics are also valid for the single elements in the system. A further basic concept in semantic networks is inheritance: through inheritance, characteristics of Types are relayed to their Subtypes and Objects. Due to this, all relevant characteristics in the system are defined at the highest rank possible and are nonredundant.
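To make the Types/Objects/Attributes/Relations concept concrete, the following minimal Python sketch illustrates the inheritance and nonredundancy properties described above. It is our own illustration (the class and field names are not from any cited standard); a production system would instead use an ontology language such as OWL.

```python
# Minimal sketch of the semantic-network concepts described above
# (Types, Objects, Attributes, Relations, inheritance). Illustrative only.

class Type:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # Subtype -> Type inheritance link
        self.attributes = set()       # Attributes defined at this Type

    def all_attributes(self):
        """Attributes are inherited from parent Types (defined only once)."""
        inherited = self.parent.all_attributes() if self.parent else set()
        return inherited | self.attributes

class Object:
    def __init__(self, ident, type_):
        self.ident = ident            # each element occurs only once
        self.type = type_
        self.attribute_values = {}    # Attribute -> value
        self.relations = []           # (relation name, other Object)

    def relate(self, relation, other):
        self.relations.append((relation, other))

# Build a tiny knowledge base for a wind turbine drivetrain
component = Type("Component")
component.attributes.add("serial_number")
gearbox = Type("Gearbox", parent=component)   # Subtype inherits attributes

gb_017 = Object("GB-017", gearbox)
gb_017.attribute_values["serial_number"] = "4711"

turbine = Object("WTG-03", Type("WindTurbine"))
turbine.relate("consists_of", gb_017)         # Relation interconnects Objects

print(gearbox.all_attributes())               # {'serial_number'} via inheritance
```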
In 2015, a normative guideline for the design of an IAMS database for renewable energy plants was implemented in DIN SPEC 91303:2015-03. The guideline committee aimed at defining basic requirements for a so-called Life Cycle File (LCF). Based on the basic set of requirements defined there, one can derive specific requirements for an ODBMS for renewable energy systems.

The basic functions of an LCF are the storage, administration, and management of all information arising over the life cycle of an asset. Thus, all incidents, and the actions that follow them, during the operation phase of the turbine are managed. First and foremost, an LCF must incorporate, in chronological order, all information necessary for the safe and economic operation of a system, and it must contain information on the entire life cycle. Note that this set of information differs from asset to asset. Furthermore, it must provide information on ownership and responsibility in all aspects of asset management. Ideally, the file is established during the planning phase of a system, or at the beginning of the operational phase at the latest; exceptions for already existing systems are possible. An LCF must be applicable to different systems and be flexible in terms of integrating an LCF of a subsystem from another system. LCFs should not be deleted at the end of the service life. Performance-intensive data analytics should not be conducted in an LCF but in separate software modules that are linked to the LCF; plant-based light statistical evaluations – histograms, etc. – should be possible.

In addition to the requirements already stated, a digital life cycle file should be an open application for every type of system. Furthermore, the information sets must be available over the whole life cycle. Where relevant, there must be an appropriate balance between quick data access and archiving of data. The digital architecture should also treat scaling and expansion of the data structures as a key requirement. Data exchange must be possible in a common format, and best-practice data security is to be applied, including allowing only authorized persons to access data. Optionally, there should be a possibility to reference specific plant data to external data pools for benchmarking purposes.

An LCF should be structured based on international guidelines for plant documentation systems, describing the relationships between information. A consistent plant documentation system will ease and structure the exchange of information between the different players along the life cycle of a system. Figure 4 depicts the top-level life cycle file architecture. The plant structure – Fig. 5 – plays a decisive role within the framework of a life cycle file, because it acts as the data backbone. All information sets refer to an element of the plant structure and carry a functional, product, and/or locational aspect. The plant structure is described by a system hierarchy based on the primary function of each subsystem; the structural framework is independent of the product aspect. This metadata is especially important for designing, planning, monitoring, and analyzing maintenance processes. The documents are structured along their document classes; the basic hierarchy can be derived from the main and sub classes of DIN EN 61355-1. Every time step in the life cycle of a wind turbine, for example, is defined by a certain state or condition of the plant.
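As a rough illustration of how an LCF information set could be modeled along the lines of the architecture in Fig. 4, consider the following Python sketch. All field names here are our own hypothetical choices, not taken from DIN SPEC 91303.

```python
# Hypothetical sketch of an LCF information set record, loosely following
# the architecture of Fig. 4 (field names are illustrative, not normative).
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class InformationSet:
    plant_element: str          # reference into the plant structure (data backbone)
    life_cycle_phase: int       # e.g., 7 = Maintenance (see Fig. 7)
    created: datetime
    owner_role: str             # e.g., "Operator", "Service Engineer"
    approved: bool = False      # reviewed/approved by the operator's designated role
    documents: list = field(default_factory=list)  # document references (DIN EN 61355-1 classes)

record = InformationSet("WTG-03/Drivetrain/Gearbox", 7,
                        datetime(2021, 5, 4), "Service Engineer")
record.documents.append("DC-M-2021-0042")  # hypothetical document identifier
```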
Fig. 4 Top-level life cycle file architecture (roles, use cases, metadata, information sets, indications, life cycle phases, and plant subsystems interconnected by semantic relations such as "owns," "consists of," and "relates to")

Fig. 5 Plant structure data (function, product, and location aspects per DIN ISO/TS 16952-10)

Fig. 6 Example of incident data structure – ZEUS in the wind industry (two numeric blocks of 13 sub-blocks each, following FGW TR 7 D2)
An example of interoperability in this case is the so-called ZEUS key (Zustands-Ereignis-Ursachen-Schlüssel; state–event–cause key). The ZEUS key consists of two blocks of numerical code: the first block describes the overall condition of the plant, while the second block carries a component-based failure description and classification and defines the resulting or triggered maintenance task (Fig. 6). Of course, further information can be added to such standardized descriptions; a minimal data-structure sketch of such a key is given below.

In general, the information contained in the semantics, and therefore in the LCF as well, must be consistent, up to date, complete, and reviewed and approved by the operator's designated role. The LCF will need interfaces (themselves standardized and, where possible, described semantically) to the SCADA system, the CMS systems, and the SHM system. Every information set must relate to a plant subsystem or an object within the systematic plant structure. Life cycle phases are defined for the system as a whole; the different subsystems and components follow their own product life cycles at the level of the smallest replaceable unit (SRU). Figure 7 displays the life cycle phases defined within the service life of a plant. In operating life cycle files, it is important to transfer all relevant information into the next life cycle phase and to communicate information between the different life cycle phases.

Figure 8 displays the different roles and tasks within an integrated asset management system. The operator's main interest is the economic operation of the plant, so economic evaluation possibilities must be provided – for example, life cycle costing analysis. Furthermore, information enabling the operator to optimize maintenance planning, together with output and availability data, is of interest. The maintenance service provider – which may, but need not, be a third party – requires information on the operating and wear conditions of the system components. Furthermore, it needs deeper knowledge and historical data to conduct risk and criticality analyses at different levels and with different foci within the system. Additionally, the LCF system should provide possibilities to automate and control maintenance workflows for a higher level of transparency and efficiency. The in-field service engineers might need PDA support for performing their scheduled and unscheduled maintenance tasks and for documentation and communication within the IAMS framework.
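The following Python sketch shows what a ZEUS-style two-block key might look like as a data structure. It is a deliberately simplified illustration based only on the description above; the code values, field names, and task mapping are hypothetical, and the real FGW TR 7 D2 coding scheme is considerably more detailed.

```python
# Minimal sketch of a ZEUS-style state-event-cause key (illustrative only;
# the actual FGW TR 7 D2 coding is more detailed than shown here).
from dataclasses import dataclass

@dataclass(frozen=True)
class ZeusKey:
    plant_state: str       # block 1: overall condition of the plant
    component_code: str    # block 2: component-based failure description

    def triggered_task(self, task_table):
        """Look up the maintenance task triggered by this key."""
        return task_table.get((self.plant_state, self.component_code),
                              "no action defined")

# Hypothetical code values and task mapping for illustration
tasks = {("02", "31.1"): "unscheduled gearbox inspection"}
key = ZeusKey(plant_state="02", component_code="31.1")
print(key.triggered_task(tasks))
```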
No. | Life cycle phase (LCP)    | Explanations
1   | Planning                  | Technical analysis; economic analysis; market analysis; site analysis
2   | Construction              | Construction concept of plant; construction concept of sub-components
3   | Sourcing/Manufacturing    | Provision of material and components
4   | Erection                  | Erection of plant on selected site
5   | Commissioning             | Commissioning test of plant on site
6   | Production                | Power production phase; operation phase of plant; design life consumption
7   | Maintenance               | Servicing of plant and its components
8   | Shut-down                 | Turbine out of operation
9   | End-of-life planning      | Analysis of end-of-life opportunities
10  | Demolition                | Deconstruction of wind turbine
11  | Rehabilitation/Recycling  | Rehabilitation at another site; recycling of plant components

Fig. 7 Example – definition of life cycle phases for a renewable energy plant (the original figure also marks the span of the Remaining Useful Service Life across the operational phases)
Roles                        | Tasks                                                                | LCP
Operator                     | Legal responsibility for all aspects of the turbine                  | All phases
Plant Manager                | Technical and economic operation management; site supervision        | 5/6/7/8
Assessor                     | Inspections of the plant; creation of technical surveys              | 1/4/5/6/7/8
Maintenance Service Provider | Planning maintenance tasks                                           | 7
Service Engineer             | Performing maintenance tasks                                         | 7
Manufacturer                 | Planning and erection of the turbine                                 | 1/2/3/4/5/6
Regional authority           | Supervision of the legal regulations                                 | All phases

Fig. 8 Roles and tasks in an integrated asset management framework
Managing all the different streams of data, identifying the key metrics, and at the same time using the data correctly to leverage possibilities in sustainability, availability, reliability, and safety is the main challenge for highly technologized assets. Mastering all these disciplines at once will be one option, if not the only one,
for applying NDE 4.0 and, with it, a future-proof operational asset management. At the core of this transition sits a broad and widely used semantic interoperability scheme.
Summary

In the future market environment, the economic and cost-effective operation of technological assets will play a key role. This prognosis is valid for both macro- and microeconomic frameworks – for the global environment as well as for the single asset operator. The application of condition monitoring and nondestructive testing techniques within an integrated and holistic asset management approach can contribute significantly to this. To optimize the cost–yield ratio from an operator's view, the main levers are increasing the annual output by reducing unplanned maintenance downtime, lengthening the service life of the system while retaining a reasonable reliability level, and reducing the overall maintenance and repair cost. A core element of a future integrative asset management system must be a holistic operational database management system that integrates all data streams relevant to the operation and management activities of each life cycle phase.
Cross-References

▶ Artificial Intelligence and NDE Competencies
▶ Basic Concepts of NDE
▶ Best Practices for NDE 4.0 Adoption
▶ Characterization of Materials Microstructure and Surface Gradients using Advanced Techniques
▶ Digital Twin and Its Application for the Maintenance of Aircraft
▶ Digitization, Digitalization, and Digital Transformation
▶ From Nondestructive Testing to Prognostics: Revisited
▶ History of Communication and the Internet
▶ Industrial Internet of Things, Digital Twins, and Cyber-Physical Loops for NDE 4.0
▶ Inspection of Ceramic Materials
▶ Introduction to NDE 4.0
▶ “Moore’s Law” of NDE
▶ NDE 4.0 in Civil Engineering
▶ NDE 4.0 in Railway Industry
▶ NDE 4.0: Image and Sound Recognition
▶ NDE 4.0: New Paradigm for the NDE Inspection Personnel
▶ NDE in Additive Manufacturing of Ceramic Components
▶ NDE in Energy and Nuclear Industry
▶ NDE in the Automotive Sector
▶ Optical Coherence Tomography as Monitoring Technology for the Additive Manufacturing of Future Biomedical Parts
▶ Probabilistic Lifing
▶ Registration of NDE Data to CAD
▶ Robotic NDE for Industrial Field Inspections
▶ Smart Monitoring and SHM
▶ Testing of Polymers and Composite Materials
▶ The Human-Machine Interface (HMI) with NDE 4.0 Systems
▶ Training and Workforce Re-orientation
▶ Value Creation in NDE 4.0: What and How
References

1. Moubray J. Reliability-centred maintenance. Repr. Oxford: Butterworth-Heinemann; 1995. ISBN 978-0750602303.
2. The Institute of Asset Management, editor. IAM – knowledge. 2017. Available online at https://theiam.org/knowledge/. Checked on 5/23/2018.
3. Kharlamov E, Solomakhina N, Özcep Ö, Zheleznyakov D, Hubauer T, Lamparter S, et al. How semantic technologies can enhance data access at Siemens Energy. In: Proceedings of the International Semantic Web Conference 2014. 2014. p. 601–19.
4. Horrocks I. What are ontologies good for? In: Evolution of semantic systems. 2013. p. 175–88.
5. Poggi A, Lembo D, Calvanese D, De Giacomo G, Lenzerini M, Rosati R. Linking data to ontologies. J Data Semantics (X). 2008;133–73. https://doi.org/10.1007/978-3-540-77688-8_5.
6. Compton M, Barnaghi P, Bermudez L, García-Castro R, Corcho O, Cox S, et al. The SSN ontology of the W3C Semantic Sensor Network Incubator Group. Web Semantics: Science, Services and Agents on the World Wide Web. 2012;(17):25–32. Available online at https://ac.els-cdn.com/S1570826812000571/1-s2.0-S1570826812000571-main.pdf. Checked on 8/22/2018.
7. Minsky ML. Semantic information processing. Cambridge/London: The MIT Press; 2015.
Registration of NDE Data to CAD
15
Stephen D. Holland and Adarsh Krishnamurthy
Contents
Introduction . . . 370
Need for Registering NDE Data to CAD . . . 373
2D Registration . . . 374
2D Homogeneous Coordinates . . . 375
2D Geometric Inconsistency . . . 375
3D Registration . . . 376
Rotational Symmetry . . . 376
Simple Kinematic Models of Measurement Systems . . . 377
Propagation Models Such as for Ultrasonic Testing . . . 377
3D Homogeneous Coordinates . . . 379
Mapping of Coordinate Frames Instead of Points . . . 380
Robotic Systems . . . 382
Propagation Model for Camera-Based Sensing . . . 382
Pose Estimation . . . 385
Calibration of Kinematic Models . . . 386
CAD Models . . . 386
Direct 3D Visualization . . . 389
Surface Parameterizations as a Domain for Data Storage and Fusion . . . 390
Representations of Surface Parameterizations . . . 392
Creating Surface Parameterizations . . . 392
Mapping Data onto Surface Parameterizations . . . 393
Analysis in the Parameterized Domain . . . 396
Accommodating Geometric Inconsistency . . . 396
Persistent Parameterizations: A Potential Tool for Accommodating Geometric Inconsistency . . . 397
Spatial Database Storage . . . 397
Accommodating Errors . . . 398
Need for Open Standards and Open Source Tools . . . 399
Summary . . . 399
Cross-References . . . 400
References . . . 400

S. D. Holland (*)
Department of Aerospace Engineering, Iowa State University, Ames, IA, USA
e-mail: [email protected]

A. Krishnamurthy
Department of Mechanical Engineering, Iowa State University, Ames, IA, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2022
N. Meyendorf et al. (eds.), Handbook of Nondestructive Evaluation 4.0, https://doi.org/10.1007/978-3-030-73206-6_5
Abstract
NDE 4.0 is about evolving the practice of NDE from independent, perhaps automated, measurements into a new world of interconnected information, where the NDE data connects with measurement models, structural models, and part history to provide an integrated perspective on structural health. Registering NDE data to physical, geometric location is essential to connecting it to outside information. While in the past registration would have been a manual and error-prone process, especially for complicated 3D geometries, improved imaging and 3D camera technology are beginning to make automatic registration practicable. This chapter introduces the principles of and approaches to registration of NDE data in two and three dimensions, including methods for projecting ray and image data. We discuss the difficulty of accommodating inconsistent geometries and approaches such as persistent parameterizations that may help address such inconsistencies. In order to be accessible, registered NDE data will need to be indexed in a spatial database so that it can be queried by geometric location. Such a database will be more valuable the more NDE and other data can be integrated into it, so common file formats, open metadata, and open source tools are needed so that the broader NDE community can start embedding the necessary registration metadata into our scans.
Keywords
NDE 4.0 · NDT · NDI · Registration · Digital twin · Digital thread · Spatial database · Model-based inversion · CAD · Computer-aided design · Spatial context
Introduction

Nondestructive Evaluation (NDE) 4.0 is the emerging revolution in the practice of nondestructive evaluation through the confluence of digital technologies and physical inspection methods (see ▶ Chap. 1, “Introduction to NDE 4.0”). NDE 4.0 involves placing NDE processes in a broader “Industry 4.0” context where digital data such as computer-aided design (CAD) models, manufacturing data, and now NDE data are connected and cross-referenced. In the data-driven world of NDE 4.0, the value of NDE data will be proportional to how well it is connected to other information to support activities such as condition tracking [1, 2], digital twins [3], and model-based damage assessment [4]. While NDE 4.0 practices are already being realized industrially in some contexts [5], recording the necessary cross-references
remains difficult and expensive. Fundamentally, NDE data will need to be connected to part and product data through the geometric coordinates of the NDE data in the part. Unfortunately, registering geometric coordinates across three-dimensional physical objects is not trivial in the general case, especially in the NDE domain, where millimeter-level accuracy may be desired over large meter- or 10+-meter scale objects. Small errors in orientation can cause large position errors at large distances from the center of rotation. No manufacturing process exactly produces the design geometry, is perfectly consistent, or yields a perfectly rigid part, so additional errors will arise from inconsistency in the geometry itself. The difficulty of the registration process will therefore be directly related to the size of the object and the needed level of accuracy. For large structures, registration is inevitably intertwined with the related problem of accommodating geometric inconsistency.

New technology arising from the gaming, cellphone, and self-driving automotive industries is enabling computers with suitable sensors to automatically recognize objects and their orientation (“pose”). One example of this technology is augmented reality, where a cartoon character can be superimposed on a live scene, or a maintenance technician can have the assembly process graphically superimposed on her/his workpiece. In the NDE field, these technologies will eventually let the system automatically place measured NDE data in the physical context of a part’s CAD model, by automatic registration of the part’s perceived orientation. The US Air Force Digital Thread initiative anticipates interoperable digital models across abstraction levels with sensor data that can be integrated into digital twin simulations [6]. Such three-dimensional registration remains a tricky process, but is rapidly emerging.

The ability to perform such registration and data integration is beginning to come on line. The NLign products developed by Etegent [7] have been tested by AFRL and Northrop Grumman, demonstrating a 33% reduction in effort for documenting and resolving manufacturing nonconformities [8]. Work for the US Navy also demonstrated the utility for improving evaluations of damage and repair [9].

Registration can be made simpler by operating in a one- or two-dimensional domain. In one dimension, such as for ultrasonic b-scan data with only one mechanically scanned axis, only a single offset is needed. Ignoring geometric inconsistency, scaling errors, and the like, two-dimensional registration requires finding a rotation and then offsets in each axis, as illustrated in Fig. 1. Examples of two-dimensional registration problems include c-scan ultrasonic images of flat plate laboratory samples, along with measurements of cylindrical structures such as pipelines that can be parameterized by axial and circumferential position. In the common situation where the axes are known or already aligned, just the offsets in the two axes are needed to register the data.

Even in one- or two-dimensional registration problems, geometric inconsistency can be significant, and accurate positioning matters. In pig-based inline inspection of pipes, for example, locations can be evaluated by odometry and inertial measurement [10], but significantly better registration can be achieved by using the joins between pipe segments as additional reference locations [11].
In many cases in the past, inconsistency may not have mattered much because each scan was analyzed separately. In an NDE 4.0 world
Fig. 1 Illustration of two-dimensional registration process: Rotation by an angle θ followed by an offset in each axis
where we want to be able to automate trend analysis over multiple inspection cycles or use models to manage degradation risk, such improved consistency in localization will be essential. As we will see, accommodating geometric inconsistency in large three-dimensional structures can be far more difficult. Because of the myriad possible sources of errors and inconsistencies, common practices will develop in particular application domains where certain errors are tracked carefully because they matter to downstream analysis and processing, whereas other errors and inconsistencies are ignored.

The field of geographic information systems (GIS) has emerged to address the problem of registering geographic data to the Earth. The size and scope of the GIS field illustrates many of the same challenges of registering NDE data to geometry, but with centuries of historical study, a far larger market, and the Global Positioning System (GPS) now providing canonical location references. The world has a variety
of inconsistent map projections [11], a series of inconsistent reference frames such as NAD27, NAD83, and WGS84 [12], and shifting ground due to earthquakes and continental drift. In the much smaller field of NDE we will have to leverage the lessons and technologies developed for larger markets such as GIS.
Need for Registering NDE Data to CAD

Registering NDE data to a CAD model allows plotting and imaging the NDE data in 3D physical context. While this is useful, the benefit of pretty pictures on their own, compared to direct plotting, is limited. The main benefit is the ability to combine the data with other information. For example, 3D NDE data could be projected over the physical object in an augmented reality environment, helping a technician perform a repair. The need for registration of NDE data will be driven by the utility and benefits of combining the data with other information. The two big applications will be geometry-sensitive model-based inversion and data fusion.

Geometry-sensitive model-based inversion is where your NDE inversion analysis requires information on the local geometry, either to sense the flaw itself or to determine whether a measured discontinuity is a flaw. An example would be ultrasonic pulse/echo testing of a curved, variable-thickness composite for delaminations. Knowing the local curvature makes it possible to compensate for refraction and self-focusing artifacts. Any given echo could then be a flaw or the back wall, and knowing the intended thickness provides a way to discriminate between the two.

Despite early examples [13–15], data fusion has often not lived up to its hype. Much of the potential benefit can be realized from simple boolean or mixture operations on processed data, for example [16], or slightly more sophisticated consensus methods [17]. The intricate and sophisticated fusion processes sometimes proposed tend to be application-specific if they work at all; there is no general universal approach [18]. In the NDE context, data fusion can be broken down into three categories based on the variable involved: cross-modality data fusion, where data is integrated across multiple NDE modalities; temporal data fusion, where data is integrated across time; and product line data fusion, where data is integrated across multiple serial numbers of the same or similar products.

The utility of cross-modality data fusion is limited both by the expense of routinely testing with multiple NDE modalities and by the limitations of automated analysis. While combining data from multiple NDE modalities seems very promising at first glance, each modality tends to measure something different, drastically reducing the utility of models and other outside knowledge. Thus physics-based models are not very useful for combining data, because even though each modality adds an equation (the measured value) it also adds an unknown (the physical characteristic that is sensed, different for each modality), and thus we are no closer to a solution. Practical fusion usually ends up being some sort of combination of damage indexes from the modalities.

Temporal fusion is relatively straightforward and useful for trend analysis, but only a very limited number of extremely critical applications can justify the expense
of repetitive testing. The more common scenario will likely be product line data fusion, where manufacturing NDE scans can be analyzed to optimize manufacturing processes. When manufacturing NDE scans are registered to a CAD model, it will be more practical to search for affected serial numbers if, for example, a new failure mode is discovered.

Automated approaches to registration, enabling automated model-based inversion and data fusion, will undoubtedly become increasingly significant in the NDE 4.0 practice of the near future. Initial use will be in application domains where it is clear which inconsistencies matter and must be tracked, versus which inconsistencies can be safely ignored. Nevertheless, as the tools and infrastructure mature, registration of NDE data will become increasingly commonplace.
2D Registration

The simplest registration problem is two-dimensional alignment, such as between NDE scans of flat plate laboratory specimens or flat or nearly flat structures. In some cases, NDE scans, especially those from image-based methods, can be registered to each other through background microstructural patterns using image-processing correlation techniques [19]. While such techniques can register adjacent images to each other, and potentially images captured over time, without some other reference they do not connect to the CAD model. They are also not useful for registering data from different modalities or across serial numbers, as microstructural patterns are unlikely to be consistent in those domains.

An alternative is the use of concrete landmarks in the specimen that appear in the data, or fiducial marks intentionally added to facilitate registration. In simple rectangular laboratory specimens, these might be the specimen corners. In such situations the specimen is often aligned parallel to the system axes. So if the axes and scales are known accurately (such as for simple rectangular laboratory geometries and calibrated scanners), then the only variables are the x and y offsets. A single landmark or fiducial mark, visible in both datasets, is sufficient to solve for the two offsets and register the data. If needed, a second landmark or fiducial mark is sufficient to solve for axis rotation as well, and multiple landmarks give rise to a least-squares problem to determine the relative angle θ and the offsets xo, yo,

$$\min_{x_o, y_o, \theta} \sum_i \left( a_i \cos\theta - b_i \sin\theta + x_o - x_i \right)^2 + \left( a_i \sin\theta + b_i \cos\theta + y_o - y_i \right)^2 . \qquad (1)$$

In the case where θ is zero or otherwise known a priori, the axes decouple and the offsets can be solved as

$$x_o = -\operatorname*{mean}_i \left[ \left( a_i \cos\theta - b_i \sin\theta \right) - x_i \right] \qquad (2)$$

and similarly for yo. The offsets (xo, yo) represent the coordinates of the (a, b) origin in (x, y) space.
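As a minimal sketch of how the least-squares problem of Eq. 1 can be solved in practice, the following NumPy snippet recovers θ and the offsets from matched landmarks using the standard closed-form (Procrustes-style) solution. This is our illustration of the technique, not code from the chapter.

```python
# Minimal NumPy sketch of the least-squares registration of Eq. (1):
# given matched landmarks (a_i, b_i) in scan coordinates and (x_i, y_i)
# in part coordinates, recover theta and the offsets (x_o, y_o).
import numpy as np

def register_2d(ab, xy):
    """ab, xy: (N, 2) arrays of matched landmark coordinates."""
    p = ab - ab.mean(axis=0)          # centered scan landmarks
    q = xy - xy.mean(axis=0)          # centered part landmarks
    # closed-form angle minimizing Eq. (1) (Procrustes-style solution)
    theta = np.arctan2(np.sum(p[:, 0]*q[:, 1] - p[:, 1]*q[:, 0]),
                       np.sum(p[:, 0]*q[:, 0] + p[:, 1]*q[:, 1]))
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    offset = xy.mean(axis=0) - R @ ab.mean(axis=0)   # (x_o, y_o)
    return theta, offset

# Synthetic check: rotate/shift known landmarks and recover the transform
rng = np.random.default_rng(0)
ab = rng.uniform(0, 100, (5, 2))
theta_true, off_true = 0.3, np.array([12.0, -4.5])
c, s = np.cos(theta_true), np.sin(theta_true)
xy = ab @ np.array([[c, s], [-s, c]]) + off_true     # rows: (R p_i)^T + offset
theta, offset = register_2d(ab, xy)
print(theta, offset)   # ~0.3, ~[12.0, -4.5]
```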
In many circumstances, guides and tooling can be used when the data is originally taken to ensure proper registration. Tooling that matches the shape of the part can ensure that data is always registered consistently. More generic tooling might be sufficient to ensure consistent orientation with, perhaps, the part slid along the tooling to line up a fiducial to a laser mounted to the NDE sensor. In these scenarios consistency of mounting and the internal alignment of the apparatus will determine consistency of the registration, so if a certain registration accuracy is required then procedures will be needed for assembly, maintenance, and use of the apparatus to ensure that the required consistency is achieved.
2D Homogeneous Coordinates

It is common in some circles to represent the relative 2D orientation in homogeneous (sometimes referred to as “projective”) coordinates [20], where the transformation between coordinate frames (a, b) and (x, y) can be represented as a matrix multiply. In this case three numbers rather than two are used to represent positions and vectors, with the third number being 1.0 for a position and 0.0 for a vector. The above transformation of a position (a, b, 1) to (x, y, 1) coordinates can then be represented as

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & x_o \\ \sin\theta & \cos\theta & y_o \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ 1 \end{bmatrix} \qquad (3)$$
When the third coordinate is a 1, representing a position, the offsets (xo, yo) get added and the position is corrected to be relative to the transformed origin. Alternately, if the third coordinate is a 0, representing a vector, just the orientation is transformed. The advantage of homogeneous coordinates is that the full transformation, including the position offset, can be represented in a single matrix and applied with a matrix multiply operation. Other matrix tools, such as inversion, also apply. Transformed positions automatically get the offset applied, but vectors (correctly) do not. Two subtracted positions will have a third entry of 0.0, giving (as it should) a vector. The disadvantage of homogeneous coordinates is the redundancy: the transform is represented by 9 numbers rather than 3 and thus can represent other kinds of operations than a simple rotate and shift; in this context those other operations are probably undesirable. The extra information in the position or vector is also problematic, and it is good practice to have some protocol for dealing with scenarios where the third element is not exactly either 1.0 or 0.0.
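The position/vector distinction is easy to demonstrate numerically; the short sketch below (our illustration, with arbitrary example values) shows that the same matrix of Eq. 3 shifts positions but leaves vectors offset-free.

```python
# Sketch of the homogeneous-coordinate convention of Eq. (3): positions
# carry a trailing 1 (so they pick up the offset), vectors a trailing 0.
import numpy as np

theta, xo, yo = np.deg2rad(30.0), 5.0, -2.0
c, s = np.cos(theta), np.sin(theta)
T = np.array([[c, -s, xo],
              [s,  c, yo],
              [0,  0, 1.0]])

position = np.array([1.0, 0.0, 1.0])   # point at (a, b) = (1, 0)
vector   = np.array([1.0, 0.0, 0.0])   # direction; offset must not apply

print(T @ position)                    # rotated AND shifted
print(T @ vector)                      # rotated only
print(T @ position - T @ vector)       # difference of transformed quantities
```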
2D Geometric Inconsistency

As indicated with the pipeline example referenced above [21], geometric inconsistency can become significant when attempting to register large structures. Historical
practice has been to address geometric inconsistency on an ad-hoc basis, focusing on those aspects which are significant in a particular situation. If you find a flaw or degradation in a pipeline segment, you might care a little bit about where it is within the segment (so you can track the degradation over time), but you probably care mostly about locating the segment geographically so that the correct segment can be easily dug up and replaced. Obtaining a “ground truth” is difficult, expensive, and error prone in a lot of scenarios. At first glance, the CAD model seems ideal, but that assumes a level of similarity between the as-built object and the CAD part which might not always be achieved. Considering again our pipeline scenario, suppose the construction workers on the ground discover that the planned number of pipeline segments leaves them 12 ft short. They will just install another half-segment, but now the CAD/GIS model and as-built conditions are different. Correcting the CAD/GIS model is not trivial because it may not be clear exactly how that 12 ft of length was lost, and it may make more sense to create an as-built model that aligns with reality.
3D Registration

Three-dimensional registration problems have similar challenges as two-dimensional registration, but the difficulty of visualizing three-dimensional data and the ease of distortion (bending) of three-dimensional objects make the problems an order of magnitude more difficult.
Rotational Symmetry

Cylindrical or rotationally symmetric geometries, such as for railroad wheels or gas turbine engine disks, are a special case that is relatively straightforward because of the simple (r, θ, z) coordinate frame shown in Fig. 2a. The axis of symmetry is easy to identify, at least conceptually, and that axis plus a single landmark or fiducial point
Fig. 2 (a) Coordinate system for a railroad wheel; (b) robotic NDE system that measures that wheel
are sufficient to register the object. In most cases the object will be mounted using tooling that aligns the rotation axis and probably the z axis as well. A simple rotation during mounting to (for example) align a laser to a reference point on the disk ensures that data recorded during the NDE process is consistently aligned. Since rotating parts like railroad wheels and turbine engine disks are usually stiff, compact, and very consistently manufactured, geometric inconsistency is probably not very significant.
Simple Kinematic Models of Measurement Systems

Nevertheless, aligned does not necessarily mean registered. Registration requires consistency in coordinate frame (or a known mapping) between the measured NDE data and the CAD model. In a simple case such as an ultrasonic transducer scanning a railroad wheel on a rotation stage in Fig. 2b, the center of rotation is probably taken as known because of the tooling. (Ensuring consistency of the center of rotation requires a minimum of play in the system and appropriate maintenance procedures to verify that the tooling and rotation stage are concentric.) The transducer still has an orientation and coordinates on its R and Z motion stage axes, and the disk has an orientation Θ on the rotation stage. A kinematic model relates the stage positions (R, Θ, Z) to the transducer position in object coordinates (rt, θt, zt). In the case of our rotationally symmetric railroad wheel of Fig. 2, the kinematic model is quite simple, just offsets in each axis: rt = R − r0, θt = Θ − θ0, and zt = Z − z0, where the offsets (r0, θ0, z0) are measured each time, or measured once and enforced by procedures and tooling. (Even in this simple case, an additional regularization step may be required to ensure rt is positive and θt is within particular bounds such as −180° to 180°.)
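A minimal sketch of such a kinematic model, including the angle-wrapping regularization step mentioned above, might look like the following (our illustration; parameter values are arbitrary examples).

```python
# Sketch of the simple kinematic model above: stage coordinates (R, Theta, Z)
# mapped to transducer position in object coordinates, with the angle
# regularized into the half-open interval [-180, 180) degrees.
def stage_to_object(R, Theta, Z, r0=0.0, theta0=0.0, z0=0.0):
    r_t = R - r0
    assert r_t >= 0.0, "radial coordinate should be nonnegative"
    theta_t = (Theta - theta0 + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    z_t = Z - z0
    return r_t, theta_t, z_t

print(stage_to_object(R=250.0, Theta=370.0, Z=40.0, theta0=5.0))  # theta -> 5.0
```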
Propagation Models Such as for Ultrasonic Testing

An additional propagation model (sometimes partially included in the kinematic model) is needed to identify the physical locations in the part being probed. This propagation model is usually dependent on the transducer/sensor orientation, which would have to be provided by the kinematic model. In this simple example let us assume the transducer orientation is fixed to align to the z axis, and the transducer is operated in pulse-echo mode. If the specimen surface is flat and parallel to the (r, θ) plane and the ultrasound propagation speed to the specimen surface is cw, then prior to the front surface echo, the measurement position zm as a function of time t since the pulse was generated can be mapped as

$$z_m = z_t - c_w t . \qquad (4)$$

If the time of the front surface echo in the ultrasonic waveform is tf, then the coordinate of the front surface can be measured as zf = zt − cw tf, which will in general not precisely match the coordinate from CAD. Likewise, if the speed of sound in the specimen is cs, then the ultrasonic waveform v(t) can be mapped into the specimen according to the propagation relation

$$z_f - z = c_s \left( t - t_f \right) . \qquad (5)$$
Obviously this case we have outlined is relatively straightforward. If the transducer orientation is not aligned with the z axis, or the specimen surface is not normal to the transducer axis, then we have to consider the propagation unit vector, both before and after refraction at the specimen surface. (Depending on the transducer aperture and specimen, you may also get echoes from transducer sidelobes propagating in different directions altogether, especially if there is a normal surface for them to reflect from.) It is common in such situations to use homogeneous coordinates. If everything is in the (r, z) plane, we can use 2D homogeneous coordinates following the methods of Eq. 3. Restricting ourselves to the (r, z) plane, we can represent the transducer coordinates as a position Rt = (rt, zt, 1)t. For transducer angle ϕt in the plane, the transducer orientation unit vector would be

$$\Phi_t = \begin{bmatrix} \sin\phi_t \\ -\cos\phi_t \\ 0 \end{bmatrix} \qquad (6)$$

(recall that in homogeneous coordinates positions have a third coordinate of 1 whereas vectors have a third coordinate of 0). Then the ultrasonic waveform v(t) can be mapped to propagation position (be sure not to get confused between three distinct uses of t here: time since ultrasonic trigger, a superscript representing transpose of a row vector to a column vector, and a subscript indicating “transducer”)

$$R = R_t + c_w t \, \Phi_t . \qquad (7)$$

Representing the Snell’s law refraction as a rotation γ in the (r, z) plane, the transform Γ is (the rotation γ would be found by applying Snell’s law to the propagation vector relative to the surface normal vector at the front surface reflection point)

$$\Gamma = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (8)$$

and the position inside the specimen at time t (after the front surface echo) is

$$R = R_t + c_w t_f \, \Phi_t + c_s \left( t - t_f \right) \Gamma \Phi_t , \qquad t > t_f . \qquad (9)$$
The terms on the right hand side of Eq. 9 represent the position of the transducer, the propagation from transducer to specimen front surface, and the refracted
propagation beyond the front surface, respectively. Obviously Eq. 9 considers only the first refraction and neglects other reverberations, mode conversions, etc. It also only represents the propagation into the medium, not the reflection back out, but since the reflection follows the same path we can correct for that by halving the t from the measured data to get the time of reflection. By this method the acquired ultrasonic data v(R, Θ, Z, t), acquired as a function of motion stage positions R, Θ, Z and time since trigger t, can be mapped into position (θ, r, z) in the physical context of the part. In this case the physical coordinates are given by θ = Θ − θ0 as well as r and z from R = (r, z, 1)t determined by Eq. 9.
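For the normal-incidence case, the depth mapping of Eqs. 4 and 5 reduces to a few lines of code. The following sketch is our illustration; the wavespeed values are arbitrary examples (roughly water and a solid, in mm/µs).

```python
# Sketch of the normal-incidence depth mapping of Eqs. (4) and (5):
# convert A-scan sample times into z coordinates in the part frame.
import numpy as np

def time_to_z(t, z_t, t_f, c_w=1.48, c_s=3.0):
    """t: times since trigger (already halved to reflect one-way travel),
    z_t: transducer z position, t_f: front-surface echo time,
    c_w, c_s: couplant and specimen wavespeeds (mm/us; example values)."""
    t = np.asarray(t)
    z_f = z_t - c_w * t_f                      # front surface coordinate
    return np.where(t <= t_f,
                    z_t - c_w * t,             # Eq. (4): still in couplant
                    z_f - c_s * (t - t_f))     # Eq. (5): inside the specimen

times_us = np.linspace(0.0, 20.0, 5)
print(time_to_z(times_us, z_t=50.0, t_f=10.0))
```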
3D Homogeneous Coordinates

As we begin to think about situations more complicated than the railroad wheel example above, we will need to be able to represent positions and orientations by a sequence of transforms. The same concept of homogeneous (projective) coordinates introduced in the 2D context in section “2D Homogeneous Coordinates” applies equally in three dimensions. In this case the transformation between coordinate frames (a, b, c) and (x, y, z) is represented as a matrix multiply with four coordinates. As before, the final coordinate is 1.0 for a position and 0.0 for a vector. The transformation of a position (a, b, c, 1) to (x, y, z, 1) is then

$$\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} & A_{13} & x_o \\ A_{21} & A_{22} & A_{23} & y_o \\ A_{31} & A_{32} & A_{33} & z_o \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ 1 \end{bmatrix} \qquad (10)$$

where the orthogonal sub-matrix

$$A = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix} \qquad (11)$$
represents the rotation from (a, b, c) coordinates (multiplied on the right) to (x, y, z) coordinates (outcome on the left). The column vector [xo yo zo]t contains the (x, y, z) coordinates of the origin of the (a, b, c) frame. We will not be able to use homogeneous coordinate transforms directly on the (r, θ, z) cylindrical coordinates because cylindrical coordinates are curvilinear in that two of their principal axis directions b r and b θ are functions of position. Nevertheless cylindrical coordinates are readily converted to and from local cartesian coordinates which can then be multiplied and added in the pattern of Eq. 9 to represent rotations and shifts.
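The cylindrical-to-Cartesian round trip mentioned above is mechanical but easy to get wrong; the following short sketch (our illustration, with an arbitrary example transform) shows the pattern.

```python
# Sketch: homogeneous transforms cannot act on curvilinear (r, theta, z)
# coordinates directly, so convert to Cartesian, transform, convert back.
import numpy as np

def cyl_to_hom(r, theta, z):
    return np.array([r*np.cos(theta), r*np.sin(theta), z, 1.0])

def hom_to_cyl(p):
    x, y, z = p[:3] / p[3]
    return np.hypot(x, y), np.arctan2(y, x), z

# Example: rotate 90 degrees about z and shift the origin, per Eq. (10)
T = np.array([[0.0, -1.0, 0.0, 10.0],
              [1.0,  0.0, 0.0,  0.0],
              [0.0,  0.0, 1.0, -5.0],
              [0.0,  0.0, 0.0,  1.0]])
print(hom_to_cyl(T @ cyl_to_hom(2.0, 0.0, 1.0)))
```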
Mapping of Coordinate Frames Instead of Points

Mappings such as Eq. 9 are useful, but they are not general enough for a lot of needs. Imagine for example we were transmitting instead a polarized wave (perhaps a microwave) where the orientation matters. Equation 9 does not tell us the orientation at time t, just the location of the wave. There is a better way. We can replace Eq. 9 with a sequence of operators, each representing a coordinate transform, such that applying the operators transforms coordinates relative to the position and orientation of the wave into coordinates relative to the transducer, laboratory, world, etc. Continuing the above example, but now in 3D homogeneous coordinates, let a new (boldface)

$$\mathbf{R}_t = \begin{bmatrix} \cos\phi_t & 0 & \sin\phi_t & R - r_0 \\ 0 & -1 & 0 & 0 \\ \sin\phi_t & 0 & -\cos\phi_t & Z - z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (12)$$

be a transform that accepts homogeneous coordinates relative to the transducer multiplied on the right and returns coordinates relative to the lab frame origin, defined for our convenience and consistency at the center of the top of the rotation stage. This new transform encompasses both the location of the transducer (shifted from the origin by R − r0 in x and Z − z0 in z) and its orientation, with its third (z-like) axis (corresponding to the third column of Eq. 12) representing the propagation direction Φt of Eq. 6. Propagation of the wave can be represented by a shift along that third axis, so the wave at some time t prior to intersecting the front surface can be represented by

$$\mathbf{R}_t \mathbf{C}_w = \mathbf{R}_t \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & c_w t \\ 0 & 0 & 0 & 1 \end{bmatrix} . \qquad (13)$$

If we multiply this by a null position [0 0 0 1]t we get the same coordinates previously predicted by Eq. 7. We can also now multiply it by other positions relative to the wave and get those properly transformed from wavefront coordinates to lab coordinates too! To evaluate the wave position after the front surface intersection we replace Cw, which is a function of time, with Cwf evaluated at the time the wave reaches the front surface,

$$\mathbf{R}_t \mathbf{C}_{wf} = \mathbf{R}_t \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & c_w t_f \\ 0 & 0 & 0 & 1 \end{bmatrix} . \qquad (14)$$

Then we add an additional factor Γ (boldface) representing the Snell’s law refraction and a final factor representing propagation Cs inside the specimen,

$$\mathbf{R}_t \mathbf{C}_{wf} \mathbf{\Gamma} \mathbf{C}_s = \mathbf{R}_t \mathbf{C}_{wf} \begin{bmatrix} \cos\gamma & 0 & \sin\gamma & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\gamma & 0 & \cos\gamma & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & c_s (t - t_f) \\ 0 & 0 & 0 & 1 \end{bmatrix} . \qquad (15)$$

Note that the transform portion of Γ is the transpose (inverse) of the original Γ because the original operated on Φ whereas now the order of operations is reversed (a more natural, sensible progression from sensing coordinates to laboratory coordinates). Once again applying a null position gives the wave location, and other positions relative to the wave get transformed correctly as well. The net result of assembling these matrices is a combined kinematic/propagation model

$$\mathbf{R}(w) = \begin{cases} \mathbf{R}_t \mathbf{C}_w(t) \, w, & \text{for } t \le t_f \\ \mathbf{R}_t \mathbf{C}_{wf} \mathbf{\Gamma} \mathbf{C}_s(t) \, w, & \text{for } t > t_f \end{cases} \qquad (16)$$

for the measurement portion of the apparatus. To relate the measurement to the specimen, a mapping from specimen coordinates to measurement system coordinates is likewise needed,

$$\mathbf{R}_s = \begin{bmatrix} \cos(\Theta - \theta_0) & -\sin(\Theta - \theta_0) & 0 & 0 \\ \sin(\Theta - \theta_0) & \cos(\Theta - \theta_0) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} , \qquad (17)$$

where Rs is to be multiplied on the right by vectors or positions in specimen coordinates and will give vectors or positions in lab coordinates. The offsets here are zero on the basis that the origin of our specimen lines up with the defined origin of our lab frame, defined above as the center of the top of the rotation stage. It is important to check the sense of such transforms by plugging in simple test cases, as it is easy to get them backwards.

Unlike the vector operations of Eq. 9, the homogeneous transformations of Eqs. 16 and 17 are easily invertible using matrix inverse operations. To obtain specimen coordinates from wave coordinates we just multiply Eq. 16 on the left by the inverse of Eq. 17,

$$\mathbf{S}(w) = \begin{cases} \mathbf{R}_s^{-1} \mathbf{R}_t \mathbf{C}_w(t) \, w, & \text{for } t \le t_f \\ \mathbf{R}_s^{-1} \mathbf{R}_t \mathbf{C}_{wf} \mathbf{\Gamma} \mathbf{C}_s(t) \, w, & \text{for } t > t_f \end{cases} . \qquad (18)$$

We can likewise obtain wave coordinates from specimen coordinates by multiplying Eq. 17 on the left by the inverse of Eq. 16,

$$\mathbf{W}(s) = \begin{cases} \mathbf{C}_w^{-1}(t) \mathbf{R}_t^{-1} \mathbf{R}_s \, s, & \text{for } t \le t_f \\ \mathbf{C}_s^{-1}(t) \mathbf{\Gamma}^{-1} \mathbf{C}_{wf}^{-1} \mathbf{R}_t^{-1} \mathbf{R}_s \, s, & \text{for } t > t_f \end{cases} \qquad (19)$$

where s represents a point or vector in homogeneous specimen coordinates. Equations 18 and 19 give position relative to the specimen from position relative to the sensing wave and vice versa; the various multiplier matrices depend on time, the wavespeeds, the refraction angle, and the motion stage positions. Developing the kinematic and propagation models this way is nontrivial but reasonably straightforward; arguably it is more straightforward than the simpler process that gave us Eq. 9. With calculations involving full 3D homogeneous coordinates we can now also consider sensors such as cameras, where (in this example) there will inevitably be propagation in a third axis (y) that we have not heretofore considered.
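The operator chain of Eqs. 12–19 is straightforward to compose numerically. The sketch below is our own illustration under assumed sign conventions, with made-up stage positions, wavespeeds, and times; as the text advises, such transforms should always be checked against simple test cases.

```python
# Sketch of the operator chain of Eqs. (12)-(19) (our sign conventions;
# illustrative parameter values, not from the text).
import numpy as np

def shift_along_z(d):
    """C_w / C_s style transform: translation by d along the local third axis."""
    T = np.eye(4); T[2, 3] = d
    return T

def rot_y(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

Rt = rot_y(np.deg2rad(10.0)); Rt[:3, 3] = [30.0, 0.0, 80.0]  # transducer pose
Rs = rot_z(np.deg2rad(45.0))                                  # specimen on rotation stage
gamma = np.deg2rad(4.0)                                       # Snell refraction angle
cw, cs, tf, t = 1.48, 3.0, 10.0, 14.0                         # wavespeeds and times

# Lab-frame pose of the wave after refraction, in the pattern of Eqs. (15)/(16)
wave = Rt @ shift_along_z(cw*tf) @ rot_y(gamma) @ shift_along_z(cs*(t - tf))
# Specimen coordinates of the wave location, in the pattern of Eq. (18)
print(np.linalg.inv(Rs) @ wave @ np.array([0.0, 0.0, 0.0, 1.0]))
```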
Robotic Systems

Many robotic NDE 4.0 systems will have more complicated kinematics than the simple linear and rotational offsets of section “Simple Kinematic Models of Measurement Systems.” Systems may have a combination of multi-axis linear and rotational stages, rotatable sensors, and beyond. Industrial robot arms are finding increasing application to NDE. The principles are the same as above, but with many more multipliers: multi-segment robot arms can be represented as a series of joined segments, with each joint described by its Denavit-Hartenberg (DH) parameters [23] and represented by another homogeneous transformation matrix to be multiplied, as sketched below.

NDE sensors tend to be fairly lightweight, but heavy-duty robot arms are often used because their stiffness and rigidity are required to obtain the mm-level accuracy desired in NDE applications. Using lighter, less rigid robots along with live sensor-fusion position estimation to track the position and orientation of the NDE sensor [24, 25] offers the potential for significant cost savings compared to traditional heavy and rigid robotic or motion control systems. Such tracking can be by optical imaging with a fixed camera or by an inertial measurement unit mounted with the NDE sensor. A Kalman filter can then combine the known joint position data with the known robot dynamics and the tracking data to generate a sensor position and orientation that can be stored with the NDE data and used to replace Rt in models similar to Eqs. 12, 13, 14, 15, 16, 17, 18, and 19.
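The per-joint DH transform referred to above follows a standard convention; the sketch below is our illustration with hypothetical joint parameters, not the kinematics of any particular robot.

```python
# Sketch of a standard Denavit-Hartenberg joint transform; chaining one such
# matrix per joint yields the arm's forward kinematics (parameters hypothetical).
import numpy as np

def dh_matrix(theta, d, a, alpha):
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st*ca,  st*sa, a*ct],
                     [st,  ct*ca, -ct*sa, a*st],
                     [0.0,    sa,     ca,    d],
                     [0.0,   0.0,    0.0,  1.0]])

# Two-joint example: end-effector pose = product of per-joint DH matrices
joints = [dict(theta=np.deg2rad(30), d=0.40, a=0.025, alpha=np.deg2rad(-90)),
          dict(theta=np.deg2rad(-45), d=0.0, a=0.455, alpha=0.0)]
pose = np.eye(4)
for j in joints:
    pose = pose @ dh_matrix(**j)
print(pose[:3, 3])   # sensor-mount position in base coordinates
```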
Propagation Model for Camera-Based Sensing

Different NDE sensors will involve different propagation models. Where the sensing involves a camera and lens, we can use the tools of projective geometry to map each pixel in the camera image to a ray that may intersect the specimen, thus mapping that image pixel to a corresponding location on the specimen. Camera-based NDE methods include optical inspection, thermography, and shearography, as well as optical imaging of specimens that have been processed with fluorescent penetrant or magnetic particle methods.

As shown in Fig. 3, an idealized camera (“pinhole camera”) maps a point P onto image coordinates (u, v) according to where the ray between P and the focal center of the camera Fc intersects the image plane, which is presumed to be orthogonal to the camera optical axis Zc. This mapping can be represented as a projective matrix operation

$$\begin{bmatrix} u' \\ v' \\ w' \end{bmatrix} = \underbrace{\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}}_{K} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \qquad (20)$$

of the camera matrix K on the coordinates [Xc Yc Zc]t of the point P in the reference frame of the camera [26]. The projection operation yields unnormalized 2D homogeneous coordinates, where the third element w′ may not be 1. The image coordinates (u, v), measured in pixels, are found by dividing the unnormalized coordinates by the third element,
Fig. 3 Pinhole camera model from OpenCV [22]. The projection operation maps the point P to a point (u, v) on the image plane of an idealized pinhole camera with focal point Fc
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} u'/w' \\ v'/w' \\ w'/w' \end{bmatrix} . \qquad (21)$$

The mapping parameters fx and fy are usually the same or nearly the same and represent the scaling between the camera and the physical world. The offsets cx and cy represent the pixel coordinates of the camera optical axis. The parameters fx and fy are often referred to as “focal lengths” of the camera measured in pixels. They can be interpreted as the distance from the focal center to the plane where distance units map to pixels. For example, fx = 100 px can be interpreted as indicating that a point P on the plane defined by Zc = 100 mm would have its horizontal pixel coordinate u equal to its x position in mm in the camera frame Xc, shifted by cx, i.e., u = Xc + cx on that plane. A larger f maps to a narrower field of view.

The mapping can also be inverted, relating pixel coordinates (u, v) to a ray relative to the camera origin: a pixel with coordinates (u, v) maps to a ray from the focal center. Letting U′ = (u − cx)/fx and V′ = (v − cy)/fy, the vector to the point on the focal plane will be [U′ V′ 1]t. Then the pixel can be represented by a ray originating at the camera focal center in the direction (unit vector)

$$\begin{bmatrix} U' / \sqrt{U'^2 + V'^2 + 1} \\ V' / \sqrt{U'^2 + V'^2 + 1} \\ 1 / \sqrt{U'^2 + V'^2 + 1} \end{bmatrix} . \qquad (22)$$
Mapping NDE data from a camera back to the specimen will generally require knowledge of the relative orientation of camera and specimen. Given invertible homogeneous coordinate transforms similar to Eqs. 18 and 19 defining the camera orientation relative to the specimen, applying the transform to a point in specimen coordinates will yield a point in camera coordinates. Those camera coordinates can then be applied to Eq. 20 and normalized according to Eq. 21 yielding pixel coordinates in the camera image for the ray passing through the point. Given pixel coordinates we can similarly find a ray in specimen coordinates. First define the ray by a position in camera coordinates (the focal center at [0 0 0 1]t) combined with the unit vector given by Eq. 22 with fourth element zero. Each of these can be transformed by homogeneous matrix operations similar to Eq. 18 to obtain the ray origin and direction in specimen coordinates. The physical location on the specimen corresponding to the pixel is then found from the intersection of the ray with the specimen CAD model. Mapping the pixel data onto the specimen surface parameterization will be discussed later in section “Surface Parameterizations as a Domain for Data Storage and Fusion.”
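The following NumPy sketch pulls Eqs. 20, 21, and 22 together: it converts a pixel to a unit ray in the camera frame and then transforms the ray into specimen coordinates. The intrinsics and the camera-to-specimen transform are hypothetical placeholders, and the final ray/CAD intersection step is only indicated:

```python
import numpy as np

def pixel_to_ray_camera(u, v, fx, fy, cx, cy):
    """Unit ray direction in the camera frame for pixel (u, v), per Eq. 22."""
    U0 = (u - cx) / fx
    V0 = (v - cy) / fy
    d = np.array([U0, V0, 1.0])
    return d / np.linalg.norm(d)

fx = fy = 1200.0          # hypothetical calibrated focal lengths (px)
cx, cy = 640.0, 480.0     # hypothetical optical-axis pixel coordinates

T_cam_to_spec = np.eye(4)               # hypothetical homogeneous transform
T_cam_to_spec[:3, 3] = [0.2, 0.0, 0.5]  # camera position in specimen frame (m)

d_cam = pixel_to_ray_camera(700.0, 500.0, fx, fy, cx, cy)
origin_spec = T_cam_to_spec @ np.array([0.0, 0.0, 0.0, 1.0])  # focal center, w = 1
dir_spec = T_cam_to_spec @ np.append(d_cam, 0.0)              # direction, w = 0
# origin_spec[:3] and dir_spec[:3] define the ray to be intersected
# with the specimen CAD model.
```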
Camera Calibration and Distortion Correction

The above camera model is for an idealized pinhole camera. Real cameras with lenses also induce distortion that will need to be corrected. The open source package OpenCV
[22] includes a camera calibration algorithm that takes a series of images of square checkerboard patterns and uses them to estimate fx, fy, cx, cy and an array of distortion correction parameters. The algorithm can be called directly by custom software or run through preexisting graphical applications such as camera-calib from the Mobile Robotics Programming Toolkit [27]. Once the calibration is determined, OpenCV provides routines such as initUndistortRectifyMap() and remap() to apply distortion correction to the raw camera images, as well as getOptimalNewCameraMatrix() to obtain suitable fx, fy, cx, and cy values for the distortion-corrected images. Then you can apply Eqs. 20, 21, and 22 to transform between corrected image pixel coordinates and 3D rays in the frame of the camera. Be aware that some camera lenses (especially cheaper zoom lenses) change focal length as the camera is focused. In our experience, the fx and fy values generated directly by the calibration may not be accurate enough to give mm-accuracy projections. We have found that it can be helpful to apply small corrections to fx and fy based on an image of a known physical size at a known distance. If the location of the focal center is not known (except that it is roughly behind the lens), then uncertainty in the focal center for such a recalibration can be made negligible by imaging a large object at a large distance.
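A minimal sketch of this workflow using OpenCV's calibration routines is shown below; the checkerboard geometry and file names are hypothetical, but the function names are the standard OpenCV (cv2) ones named above:

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # interior checkerboard corners (columns, rows) -- hypothetical
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 25.0  # 25 mm squares

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.png"):       # hypothetical image set
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

h, w = gray.shape
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, (w, h), None, None)
newK, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha=0)
map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, newK, (w, h), cv2.CV_32FC1)

raw = cv2.imread("nde_image.png")                   # hypothetical NDE camera image
undistorted = cv2.remap(raw, map1, map2, cv2.INTER_LINEAR)
# newK now holds the fx, fy, cx, cy to use with Eqs. 20-22 on the corrected image.
```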
Pose Estimation

The combined position and orientation of an object is known as its pose. An alternative to a kinematic model is to directly track the pose of the specimen relative to the NDE sensor (a propagation model will still usually be necessary). One way to estimate pose is through landmarks or fiducial marks with both known locations on the specimen as well as known locations relative to the NDE sensor. For example, given four visible non-coplanar points at known locations on a specimen and their pixel coordinates in an image from a calibrated camera, it is possible to uniquely determine the pose of the specimen relative to the camera. This problem is known in the computer graphics community as the 4-point pose (P4P) problem, or more generally, given n points, as the n-point pose (PnP) problem [28]. OpenCV [22] includes a PnP solver, solvePnPRansac, that estimates relative pose from matched pixel and model coordinates. While the process of identifying and matching fiducials and landmarks is most accurate and reliable when done manually, automated methods are improving. One recent example is the use of image-based tracking of a contact transducer [29] implemented with machine learning of image patterns. Pose estimation can be done better with more information, such as from a 3D camera that returns depth (z) information. 3D cameras can be based on a variety of technologies, including stereo imaging, structured light (projection of a pattern), and LIDAR. In general the 3D camera returns depth at each pixel, giving a point cloud. Object recognition and pose estimation from 3D point cloud data is an area of active research and there are various methods, for example [30–32]. These technologies are building blocks of augmented reality (AR) systems so there is substantial R&D investment.
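As a sketch of the PnP step (the fiducial coordinates and intrinsics below are hypothetical placeholders; the OpenCV calls are real):

```python
import cv2
import numpy as np

# Hypothetical fiducial locations on the specimen, in specimen coordinates (mm)...
model_pts = np.array([[0, 0, 0], [100, 0, 0], [0, 80, 0],
                      [100, 80, 0], [40, 30, 25], [70, 55, 25]], dtype=np.float64)
# ...and their matched pixel coordinates in the (distortion-corrected) image.
image_pts = np.array([[312, 255], [705, 260], [318, 582],
                      [702, 577], [470, 390], [585, 470]], dtype=np.float64)

K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # zero distortion if working on undistorted images

ok, rvec, tvec, inliers = cv2.solvePnPRansac(model_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation matrix from the Rodrigues vector

T = np.eye(4)               # homogeneous specimen-to-camera transform (the pose)
T[:3, :3] = R
T[:3, 3] = tvec.ravel()
```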
In some cases pose accuracy may not be to the level needed by NDE applications, but so long as the pose is known approximately and high-quality point-cloud data is available, the classic and well-established iterative closest point (ICP) algorithm [33] can be used to refine the pose estimate. Sometimes the pose estimation sensors may be mounted in a fixed frame, measuring the pose of the NDE sensor assembly and specimen independently, rather than mounted to the NDE sensor directly measuring the relative pose of the specimen. In this case the two poses (sensor and specimen) can be combined, as with the sensor and specimen models of Eqs. 16 and 17, to determine the relative pose. In an NDE system where the sensor and/or specimen are manually positioned, pose estimation replaces the kinematic model used in robotic NDE systems but still allows the data to be registered to the CAD model.
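One possible implementation of the ICP refinement step (not the chapter's own code) uses the open source Open3D library; the file names, correspondence threshold, and initial pose below are hypothetical:

```python
import numpy as np
import open3d as o3d

measured = o3d.io.read_point_cloud("measured_scan.ply")    # 3D camera point cloud
cad = o3d.io.read_point_cloud("cad_surface_samples.ply")   # points sampled from CAD

T_init = np.eye(4)   # approximate pose from the coarse estimator
threshold = 5.0      # max correspondence distance, in model units

result = o3d.pipelines.registration.registration_icp(
    measured, cad, threshold, T_init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.transformation)  # refined measured-to-CAD pose
```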
Calibration of Kinematic Models

Sensing and kinematic models such as Eqs. 18 and 19 often have unknowns that are not easily measured directly. Or there may be a camera co-mounted with the NDE sensor for pose estimation, but the relationship between the camera and the NDE sensor is not known exactly. For example, with a camera sensor the focal center (the origin of the camera's coordinate frame) is usually a short distance behind the lens, but its position may not be known precisely. Also, with a complicated kinematic model, position and orientation errors can accumulate across the transforms, giving error that could be calibrated out. By measuring known specimen or laboratory-frame locations with the NDE sensor or a co-mounted camera, it is possible to solve for unknown offsets by performing a series of measurements and treating each measurement as an equation. A measurement could be an ultrasound test detecting the motion stage position that generates scattering when the edge of the specimen aligns with the ultrasonic focus, or the measurement could be an optical pose estimate. Optimization can then be used to solve for the unknown parameters, providing a spatial system calibration. It is important to carefully define and document the calibration procedures and the conditions under which they need to be repeated, as routine maintenance activities such as disassembly/reassembly can potentially shift the calibration. One special case of this process would be using the front surface echo from an ultrasonic test to either resolve the specimen pose, apply a correction, or have the system follow the contours of the as-built geometry in the scan rather than follow the CAD model. Especially if the ultrasonic data is being located in 3D rather than aligned to a 2D surface parameterization, such a process can reduce apparent 3D misalignment of the data.
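As an illustration of the optimization step, the sketch below uses SciPy's least_squares; the one-dimensional scale/offset model and the measurement values are hypothetical stand-ins for whatever parameters a real system leaves uncalibrated:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical: stage readings and the laboratory-frame positions measured
# when the ultrasonic focus crossed known specimen edges.
stage_readings = np.array([10.0, 55.0, 120.0, 180.0])
known_positions = np.array([12.1, 57.3, 122.0, 182.4])

def residuals(params):
    scale, offset = params                 # unknowns of a toy 1D kinematic model
    return scale * stage_readings + offset - known_positions

fit = least_squares(residuals, x0=[1.0, 0.0])
print(fit.x)  # calibrated scale and offset to insert into the kinematic chain
```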
CAD Models

A solid model is represented in modern CAD systems using boundary representation (B-rep), where the object is defined by its boundary. The B-rep is useful because it is compact (its size depends on surface area rather than volume) and straightforward to render for
display. Rendering a B-rep involves tessellating each face of the surface into triangles that can be directly rasterized on the graphics processing unit (GPU). A CAD modeling kernel, the computational engine behind a CAD package, is a tool for managing and storing the B-rep and performing geometric operations such as booleans and intersections. There are several general-purpose CAD kernels; three of them (ACIS, PARASOLID, OpenCASCADE) are the most widely used, with most major CAD systems built around one of them. OpenCASCADE is of particular note because it is open-source and therefore more accessible for use and application in academic research outside traditional CAD systems. The principle of the B-rep is the separation of the topological layout of the boundary structure from the geometry. The object in ℜ3 is generally represented as the material enclosed by an outer shell, with any voids represented by internal shells. Here, shells are the topological element that corresponds to ℜ2. Shells are composed of faces, which are in turn bounded by edges, which are in turn bounded by vertices. The complete topology of the object represents the connectivity between those topological elements, each of which has its own geometric definition. In Fig. 4 the pyramid solid is represented using five planar faces f1 to f5. Each of the faces is represented by the equation of its plane and its bounding edges; for example, f1 is bounded by e2, e3, and e6. Finally, each edge is represented by the equation of its line and two vertices; for example, e1 is bounded by v1 and v5. This complete relationship between the faces, edges, and vertices forms the combinatorial structure of the complete topology of the object, with the geometry of each element defined by its corresponding equation. Thus, complicated shapes can be built up from simple components. Geometric information is stored separate from the topology but must be consistent. For example, the geometry ("metric information") for a vertex consists of its coordinates. The geometry of a straight edge consists of the line equation; the bounding vertices must lie on (or be within tolerance of) the line.
Fig. 4 In a B-rep, the topology represents the connectivity between the different dimensional topological elements, with the geometry separately representing the metric information
Likewise, the geometry of a planar face consists of the plane equation, and the bounding edges should lie in the plane. This approach directly extends to curved edges and faces, where the geometry is represented using splines rather than straight lines or planes. Curved edges and faces are usually geometrically represented using nonuniform rational basis splines (NURBS). These splines are a multidimensional extension of 1D spline (connected polynomial) curve fitting and allow direct representations of curved lines and surfaces in multidimensional space. An edge will be defined with one free parameter u representing position along the edge, with separate curve fits giving x, y, and z coordinates as a function of u. A surface is likewise defined using two free parameters (u, v). Knots break the parameter space into regions with different weighted polynomial basis functions, and the (x, y, z) coordinates in each region are calculated from a linear combination of those basis functions that is specified by the locations of "control points." NURBS is the standard definition for spline surfaces [34]. The advantage of these NURBS surfaces is that they offer a high level of control and versatility; they can also compactly represent the surface geometry. Figure 5 shows the mapping of a parametric point (u, v) from the unit parametric space to the model space. The knots, control points, and weights affect this mapping. NURBS allow both local control via the knots and the control points and global control via the weights.

Fig. 5 NURBS surfaces map a parametric point (u, v) in [0,1] × [0,1] parametric space to the ℜ3 model space

A disadvantage of the NURBS representation is that an edge may not lie exactly on the face it is supposed to bound, but instead lies within some numerical tolerance of the face. This leads to messy tolerance parameters and occasional errors if the geometry kernel is not able (usually due to numerical roundoff) to fit a curve within the desired tolerance. The NURBS B-rep is generally the most space-efficient way to store a CAD model, and it has the additional advantage of storing the surface curvature as part of the model (curvatures are useful for modeling behaviors such as refractive focusing of ultrasonic beams). Nevertheless, the combined topological and geometric data structures involved in a NURBS B-rep are nontrivial, ray intersections (such as to project data onto the surface) are also nontrivial, and NURBS B-rep representations do not give a simple way to represent surface parameterizations beyond the natural (u, v) parameters of each surface. As such, it is common to mesh the CAD model surface into triangular facets and use the meshed representation in place of the NURBS B-rep.
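For reference, the evaluation just described (knot-defined basis functions, control points, and weights) takes the standard rational form given in [34]:

$$S(u,v) = \frac{\sum_{i=0}^{n}\sum_{j=0}^{m} N_{i,p}(u)\, N_{j,q}(v)\, w_{i,j}\, \mathbf{P}_{i,j}}{\sum_{i=0}^{n}\sum_{j=0}^{m} N_{i,p}(u)\, N_{j,q}(v)\, w_{i,j}}$$

where N_{i,p}(u) and N_{j,q}(v) are the B-spline basis functions of degrees p and q defined over the knot vectors, P_{i,j} are the control points, and w_{i,j} are their weights.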
Fig. 6 Different CAD representations that can be used to represent the surface or volumetric information: CAD model (NURBS), mesh representation, and voxel
Now instead of the object being represented by an outer shell of NURBS faces, it is represented by a much finer and more intricate outer shell of triangles. Ray intersection calculations are much simpler with flat triangles than with NURBS surfaces. Also, standard meshed surface file formats such as .obj, .vrml, and .x3d support surface parameterizations (texture coordinates). When creating a CAD model for registering NDE data, it is essential to build the model with a suitable level of detail. The CAD model might not need objects or features smaller than the sensing resolution, and indeed such features may be detrimental. For example, consider a lap joint with rows of rivets: for a coarse scan, you may want to consider the overall joint as a unit so that you can assign the data to the overall surface of the joint, whereas for a close inspection of an individual rivet, you may want the CAD model to include the layers of material and rivet as separate objects so that you can map data to the individual layers and rivet. Another consideration for registering NDE data depends on the NDE modality. Inspection techniques such as ultrasonic C-scans and thermographic imaging naturally map directly to the faces of the CAD model or project data along a ray into its internal physical location. However, volumetric inspection techniques such as computed tomography imaging require mapping the entire 3D space occupied by the object. There are several volumetric representations. The most common ones are 3D voxels (rectangular blocks similar to pixels in 2D) and the volume mesh representation commonly used in finite element analysis. We need to convert the CAD model from a B-rep to a volumetric representation (see Fig. 6) to map such volumetric data, either by voxelization or 3D meshing. This conversion requires specialized algorithms, and voxelization of a curved model will create a jagged boundary (aliasing artifacts) as the curved surface is approximated by the block boundaries.
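To make the triangle/NURBS contrast above concrete: a ray/flat-triangle test fits in a few lines (the classic Möller-Trumbore algorithm, sketched below in NumPy), whereas a ray/NURBS intersection requires iterative numerical root finding:

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.
    Returns the distance t along the ray, or None if there is no hit."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                  # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None
```

The barycentric coordinates (u, v) computed along the way can also be reused to interpolate the per-vertex texture coordinates of the hit triangle.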
Direct 3D Visualization

Three-dimensional data such as ultrasound fields and computed tomography reconstructions can be visualized using 3D rendering techniques. Methods include volume ray casting [35] and splatting [36]. Volume ray casting generates very high quality images from a 3D volumetric mesh by tracing a ray from each pixel in the image to
Fig. 7 Three-dimensional visualization of model-based inversion of impact damage in a composite laminate using a splatting-like approach. The magnitude of reflection from each layer maps to both brightness and opaqueness
be generated through intersecting volume elements. The ray gets its color based on the volume elements and their transparency. Splatting works the other way around, constructing the image by using the graphics accelerator to draw a small semitransparent disk for each data point within the volume. The disks are drawn back to front with transparency. Figure 7 illustrates an example of splatting-like visualization of a multilayer model-based inversion from flash thermography data. Each layer is rendered transparent below a given threshold. Above the threshold, both brightness and opacity are proportional to the inverted thermographic response. The visualization allows the viewer to see the location and shape of impact damage in a composite material. A partially transparent CAD model can help illustrate the data in context, so long as the 3D registration is reasonably accurate.
Surface Parameterizations as a Domain for Data Storage and Fusion

Most NDE techniques, with the notable exception of computed tomography, are fundamentally related to the surface being inspected. Visual, fluorescent penetrant, magnetic particle, thermography, eddy current, and shearography methods are all surface based. Ultrasound and terahertz may penetrate the surface, but in almost all cases the data is best referenced to the ray intersection on the surface. As we will see
Fig. 8 Meshed 3D model and portion of a surface parameterization created by the Blender [37] tool
below, referencing data to surface coordinates rather than 3D location can also help accommodate geometric inconsistency. So in most cases, when we talk about referencing NDE data to a CAD model, we really need to align the data to the surface of that CAD model. A surface parameterization (sometimes referred to as a (u, v) parameterization) is a mapping between 3D coordinates on the object surface and a 2D space that unwraps (and usually distorts) the 3D surface. Figure 8 illustrates a 3D model of a C-channel shape and part of such a surface parameterization. Each point (x, y, z) on the object surface has a corresponding point (u, v) in the parameterization. Surface parameterizations are often referred to in computer graphics as "texture coordinates" as they are used to map textures onto surfaces for rendering. Surface
parameterizations will always have seams where the surface is split so that it can be unwrapped onto a flat surface.
Representations of Surface Parameterizations

The NURBS surfaces used in CAD models do include an implicit surface parameterization, but that parameterization may be distorted in ways that are not desirable for storing measured data or have seams in undesirable locations. Standard CAD model file formats do not provide a means to represent an alternative surface parameterization, although it is straightforward to hypothesize such a mapping. For meshed models, the parameterization is represented by additional (u, v) coordinates given for each vertex, creating a triangle in (u, v) parameterization space corresponding to each mesh facet. In most current work, for example [4], the CAD model is represented in meshed format.
Creating Surface Parameterizations

Tools from computer graphics such as Blender [37] (used for Fig. 8), Maya [38], and RenderMan [39] can be used to generate parameterizations of meshed CAD models. These tools were developed to support 3D rendering for the movie industry, so they are remarkably sophisticated. Nevertheless, generating good parameterizations is likely to be a manual process. Careful seam placement is important because analysis is likely to be interrupted across seams. Also, some objects may have natural parameterizations (such as cylindrical coordinates) that are already in use in the field to identify defect locations; it would be foolish to build an NDE 4.0 system that is incompatible. Many or most parameterizations will involve some amount of distortion. Unless the object surface is cut into faces that are developable (meaning that the face can be unwrapped flat without distortion), any parameterization will involve nonuniform stretching or compression. The field of cartography illustrates some of the challenges in defining surface parameterizations, as the Earth's surface is not developable. Map projections are parameterizations of the Earth's sphere. (Obviously the standard parameterization of the Earth is (latitude, longitude), which does not work well at the poles. When we refer to map projections as parameterizations, we are taking the (x, y) position on the map to be the parameterization.) Figure 9 (from [11]) compares the Mercator and Eckert No. 6 projections. The Mercator projection of Fig. 9a is conformal (it preserves angles locally) but does not preserve areas. By comparison, the Eckert No. 6 projection of Fig. 9b preserves areas. Both projections need seams, and neither represents the Arctic or Antarctic; another projection domain would be needed for that. An example surface parameterization mapping the surface of a cylinder into 2D is illustrated in Fig. 10. Coordinates (z, ϕ) on the outer wall directly map to the parameterization. Mapping (r, ϕ) from the endcaps is impractical because of the singularity at r = 0, so the endcaps are placed into the parameterization in Cartesian
Fig. 9 Map projections: (a) the Mercator projection, which is conformal (preserves angles locally), and (b) the Eckert No. 6 projection, which preserves area. (Figure is a work of the US government and is not subject to copyright)
Fig. 10 Surface parameterization of a cylinder, illustrating the use of cylindrical coordinates for the main body
coordinates. In this case, by introducing seams along the axis and at the endcaps, each parameterization domain is developable, so there is no distortion. In general, it is probably wise to ensure that any parameterization in NDE applications be conformal (angle preserving) if it cannot be made developable. It will likely be common to execute analysis and fusion algorithms in the parameterization domain, and a non-conformal parameterization would require those algorithms to accommodate local data coordinate axes that are not orthonormal.
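To make the cylinder example concrete, here is a minimal sketch of the wall mapping (the function name is hypothetical); using arc length (radius × angle) rather than the bare angle keeps one parameterization unit equal to one physical unit:

```python
import numpy as np

def cylinder_wall_to_uv(p, radius):
    """Map a 3D point on the outer wall of a z-axis cylinder to (u, v)."""
    x, y, z = p
    phi = np.arctan2(y, x)       # seam falls along phi = +/- pi
    return (z, radius * phi)     # developable: no in-plane distortion

print(cylinder_wall_to_uv((0.0, 50.0, 120.0), radius=50.0))  # -> (120.0, ~78.5)
```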
Mapping Data onto Surface Parameterizations

Once a surface parameterization is created, the NDE sensor and specimen are registered to each other by a relative pose, and NDE data is being acquired, it is still necessary to map the acquired data onto the surface parameterization. The methods of the preceding sections (e.g., "Robotic Systems" and "Propagation Model for Camera-Based Sensing") provide
Fig. 11 Registration and fusion of thermography data on a stiffened specimen: (a) raw distortion-corrected thermal image, (b) re-projection of thermal image onto CAD model via parameterization, (c) fused parameterization of processed data, (d) visualization of fused and processed data
a means to map an ultrasound trace or an image pixel back to a ray in specimen coordinates. It is then conceptually straightforward to intersect the ray with the specimen surface, find the parameterization coordinates, and store the NDE data in the parameterization. Then the parameterization data can be processed and fused. An example is illustrated in Fig. 11. Figure 11a shows a raw distortion-corrected thermal image with fiducial marks identified in red. Based on the fiducial locations, the thermal image is captured and projected onto a parameterization, and then re-projected onto the specimen surface as shown in Fig. 11b. The data from different camera angles is fused and processed, with the result shown in Fig. 11c and reprojected onto the specimen in Fig. 11d. While the steps of mapping data onto a parameterization sound straightforward, in many cases they are less trivial than they sound. The parameterization data will usually be stored as a sampled 2D image (rectangular mesh), possibly with multiple
layers, perhaps representing different times. You want to make sure that the resolution of the parameterization data is at least slightly higher than that of the data being stored, to minimize undersampling artifacts. There is not going to be a one-to-one mapping between rays from the NDE data and pixels in the parameterization image, so a remapping will be needed. In addition, you do not know a priori which triangle in the CAD model the ray will intersect, and with possibly millions or more triangles, testing all of them will be prohibitive. The variety of possible solutions to these problems is beyond the scope of this handbook. One way to perform the remapping is projecting the ray to an area in parameterization space instead of a point, where the area is weighted radially according to the distance between NDE test points, and the final parameterization data is constructed as a weighted sum. This method was used in the SpatialNDE package [40]. Another approach would be to project only the known points and use Laplacian interpolation to fill in any gaps. Finding the intersecting triangle by brute force can be very compute intensive. One way to find the intersecting triangle efficiently is to sort the triangles into boxes and use the boxes to rule out most of the possible triangles. A single box around the entire object can be subdivided into eight sub-boxes (splitting in half in each axis). The sub-boxes can be recursively subdivided, creating an "octree" structure that reduces the search effort from O(n) to O(log n), greatly improving performance. An alternative that works for camera data is to use the graphics card to render a virtual image of the CAD model exactly corresponding to the camera image, with the rendering engine programmed to tag each pixel with the identifier of the rendered triangle. Now for each data pixel you know which CAD model triangle intersects the ray. Either of these methods can mitigate the difficulty of finding the correct triangle.
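A simplified sketch of such an octree follows; it assumes triangles are given as an array of 3×3 vertex arrays and is illustrative only, not the SpatialNDE implementation:

```python
import numpy as np

def ray_hits_box(origin, direction, lo, hi):
    """Slab test: does the ray intersect the axis-aligned box [lo, hi]?
    (Zero direction components are handled loosely; adequate for pruning.)"""
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = (lo - origin) / direction
        t2 = (hi - origin) / direction
    tmin = np.nanmax(np.minimum(t1, t2))
    tmax = np.nanmin(np.maximum(t1, t2))
    return tmax >= max(tmin, 0.0)

def build_octree(tri_ids, tris, lo, hi, max_tris=32, depth=0, max_depth=8):
    """Recursively sort triangle indices into nested half-split boxes."""
    if len(tri_ids) <= max_tris or depth == max_depth:
        return {"lo": lo, "hi": hi, "leaf": tri_ids}
    mid = 0.5 * (lo + hi)
    children = []
    for octant in range(8):
        pick = np.array([octant & 1, octant & 2, octant & 4], dtype=bool)
        clo, chi = np.where(pick, mid, lo), np.where(pick, hi, mid)
        keep = [i for i in tri_ids                     # bounding-box overlap test
                if np.all(tris[i].min(axis=0) <= chi)
                and np.all(tris[i].max(axis=0) >= clo)]
        if keep:
            children.append(build_octree(keep, tris, clo, chi,
                                         max_tris, depth + 1, max_depth))
    return {"lo": lo, "hi": hi, "children": children}

def candidate_triangles(node, origin, direction):
    """Yield triangle indices whose boxes the ray passes through."""
    if not ray_hits_box(origin, direction, node["lo"], node["hi"]):
        return
    if "leaf" in node:
        yield from node["leaf"]
    else:
        for child in node["children"]:
            yield from candidate_triangles(child, origin, direction)
```

Each surviving candidate is then tested with an exact routine such as the ray/triangle sketch given earlier.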
Data from Different Camera Angles

For camera-based methods you will almost certainly want to fuse data from different camera angles, as we did in Fig. 11. It is very tempting to fuse the raw data into a single parameterization, but this is a bad idea: the fusion will create artifacts where the different camera angles overlap, and the artifacts may confuse subsequent processing, creating false indications or hiding true indications. This is because the camera angles will have different illumination, and there is also the possibility of registration errors that could cause small features to appear twice in overlapping zones. The best practice is to store the data from each camera angle in a separate parameterization with a weighting channel. Perform any processing independently for each camera angle, and if desired fuse the final processed output according to the weighting. Weighting is simply representing the fusion as

$$F(u,v) = \frac{\sum_i f_i(u,v)\, w_i(u,v)}{\sum_i w_i(u,v)}, \tag{23}$$
that is, a weighted average (weights wi) of the images fi being fused. Weighting is useful because it helps hide fusion artifacts. For example, without weighting, the
sharp boundary at the edge of the image from one camera angle will likely create a visible line image in the fused data. A weight reduction for all data projected from the edge of the camera image will make that line disappear. Rays intersecting the surface at a very steep angle are probably not meaningful, but a sharp cutoff will likewise induce an artifact, so the weight can be modulated by the angle of incidence. Additionally, if there is a horizon line in the image with the specimen still visible beyond, there is a high likelihood that small registration errors could cause data from one side of the horizon to erroneously appear on the other, so reduced weight near horizon lines will let better data fill in the gap, if available.
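Eq. 23 is straightforward to implement once the per-angle parameterization images and weight channels exist; a NumPy sketch (the array shapes are an assumption):

```python
import numpy as np

def fuse_weighted(images, weights, eps=1e-12):
    """Fuse per-camera-angle parameterization images per Eq. 23.

    images, weights: arrays of shape (n_angles, H, W); a weight of 0
    marks parameterization pixels with no data for that angle."""
    images = np.asarray(images, dtype=float)
    weights = np.asarray(weights, dtype=float)
    num = (images * weights).sum(axis=0)
    den = weights.sum(axis=0)
    return np.where(den > eps, num / np.maximum(den, eps), np.nan)
```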
Analysis in the Parameterized Domain

Analysis and model-based inversion can be performed directly in the parameterized domain. Because of the connection between the parameterization and the CAD model, geometric information such as surface curvatures and expected thickness can potentially be extracted and used in the analysis. The parameterization domain appears similar to the 2D flat geometries used in most academic analysis. Some caution is warranted in such analysis: the parameterization domain may be significantly curved. If the surface was not developable, then it will certainly have in-plane distortions, and the physical steps corresponding to a single pixel will vary spatially. The parameterization domain also includes seams that will interfere with analysis. For analysis algorithms that operate locally, it may make sense to reproject data involving a seam into a temporary seamless subregion domain and run the algorithm in the subregion.
Accommodating Geometric Inconsistency

For large objects such as entire airplanes, some amount of geometric inconsistency is inevitable. For example, the wings of a transport aircraft can drop by several feet when loaded with fuel, or can be permanently distorted by an unusually high-G maneuver. Direct registration to a 3D model will not work very well when the geometry of the physical object moves around so much! Most structures deform primarily in bending. While inches or feet of bending displacement are not unheard of, in-plane stretching or compression will be an order of magnitude smaller. Most engineering materials yield or fail at strains well below 1%, whereas 3D locations can move large distances relatively easily in bending. One possible way to accommodate inconsistent geometries is by projecting the NDE data onto a surface parameterization of the 3D model rather than onto the 3D model itself. All distortion of the object surface is in-plane, where the deformations are an order of magnitude smaller than the out-of-plane bending deformations that plague direct registration in 3D coordinates. In addition, while in-plane distortions are much smaller than out-of-plane bending distortions, they still exist and may need to be accommodated for large structures.
Persistent Parameterizations: A Potential Tool for Accommodating Geometric Inconsistency

Persistent parameterizations that can be instantiated across varying geometry offer the prospect of a robust means to accommodate geometric inconsistency. An example of a persistent parameterization would be locating points on an aircraft fuselage surface in cylindrical coordinates: a distance z is measured from a datum such as the nose join, and an angle θ (or circumferential position c) is measured from the fuselage crown. Such a parameterization is straightforward in simple geometries but much more difficult on more general, curved shapes. Unfortunately, the tools to automatically instantiate persistent parameterizations in general geometries do not yet exist. What is needed is a notation or language for specifying the parameterization in terms of measurable characteristics of the geometry. One way to do this would be via a series of instructions. For example:
1. Find the nose join from a 3.5–4 m diameter circle of rivets near 2.5–3 m inboard on a smooth surface from an extremum of the aircraft.
2. Find the crown from a 24–26 m straight row of rivets extending from the nose join.
3. z is measured along the crown starting at the nose join.
4. Circumferential position c is measured from the crown at a particular z location, perpendicular to the crown and positive toward the starboard side of the aircraft.
Such a series of instructions, written in a form that can be interpreted and automatically executed from CAD or measured geometry, would define a parameterization which could be considered equivalent across varying geometry. Such parameterizations make it possible to relate surface-referenced measurements across geometry.
Spatial Database Storage

Once registered, in order to be accessible the NDE data needs to be stored in some kind of database. Traditional databases might key on part number, serial number, date, and perhaps modality. Especially for larger objects where a single NDE scan does not cover the entire object, an NDE 4.0 database also needs to key on physical geometric location, so that data can be queried according to where it is located. Such a database improves the capability of engineers in maintaining structural integrity because they can more easily locate, identify, and address damage "hotspots" [9]. The spatial database keying might be on each data point and/or in the form of bounding box(es) in 3D space or 2D parameterization coordinates for each data set. Motivated by geographical information systems (GIS) applications, a variety of database engines support storing and searching spatial data, but most focus primarily on 2D (map) data as opposed to full 3D representations. As of this writing in 2021,
there is an intricate standard, ISO/IEC 13249-3 [41], part of the SQL/MM multimedia framework, which defines a full set of 2D and 3D data types and operations for spatial queries. It is not clear that any database fully implements this standard, especially the 3D portions representing solid objects. Nevertheless, several databases such as Oracle Spatial and Graph [42] and the open source PostGIS [43] seem to support significant three-dimensional functionality. Should the database contain raw (or nearly raw) data and/or processed output? This is a difficult question because processing and analysis are likely to improve over time. Comparing historical data processed with different algorithms than current data may not be very meaningful. On the other hand, reprocessing and replacing the processed output creates a potential traceability problem for any conclusions if the old output is not kept around. A good philosophy would be to keep, at minimum, nearly raw data whose processing is unlikely to change. This author has had success using "branches" within the Git version control system [44] to manage processed output along with raw data. Unfortunately, current database engines do not support branches. Storing the actual geometry and NDE data points of a large structure as elements of the spatial database is probably impractical in current off-the-shelf databases. In practical NDE 4.0 systems it will likely be sufficient for the spatial database to store raw datasets from individual NDE scans as units, indexed by their bounding box and timestamp. A spatial query would then identify a group of relevant datasets based on time and the intersection of the location of interest with their bounding boxes. The datasets can then be opened and rendered for display to the user or for any desired automatic processing. If the database is capable, it could potentially directly store final processed output for the entire specimen as well.
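As a minimal illustration of such a bounding-box query (an in-memory stand-in, not a real spatial database engine; all names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class ScanRecord:
    dataset_path: str   # where the raw NDE scan dataset is stored
    timestamp: float    # acquisition time (epoch seconds)
    bbox_min: tuple     # (x, y, z) lower corner, specimen coordinates
    bbox_max: tuple     # (x, y, z) upper corner

def boxes_overlap(amin, amax, bmin, bmax):
    return all(al <= bh and bl <= ah
               for al, ah, bl, bh in zip(amin, amax, bmin, bmax))

def query_scans(records, roi_min, roi_max, t_min=None, t_max=None):
    """Return scans whose bounding box intersects the region of interest."""
    return [r for r in records
            if (t_min is None or r.timestamp >= t_min)
            and (t_max is None or r.timestamp <= t_max)
            and boxes_overlap(r.bbox_min, r.bbox_max, roi_min, roi_max)]
```

The matching datasets would then be opened, rendered, or fed to automatic processing as described above.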
Accommodating Errors

Registration allows large quantities of NDE data to be accumulated together to support trend analysis, prognostics, etc. As more data is accumulated, it becomes inevitable that errors (registration or otherwise) will creep in. There is a significant danger of building brittle systems that might either draw incorrect conclusions (because they lack sufficient consistency checking) or end up requiring large quantities of expensive manual effort to fix errors. Ambitious NDE 4.0 systems run the risk of failure unless the inevitable presence of error is acknowledged, accommodated, and addressed in the system design. In a cautionary tale from another domain, US retailer Target tried to build a new business in Canada using SAP [45] software to build an Industry 4.0-like system for inventory and logistics. Due in large part to geometry and data entry errors, shipments would not fit into trucks or onto store shelves. The digital system got out of sync with reality, and the resulting inventory errors left shelves alternately empty or overflowing. Target Canada declared bankruptcy in 2015, less than 2 years after opening, and was ultimately liquidated [46]. Sophisticated Industry 4.0 systems that get contaminated by too much bad data can be worse than useless!
Need for Open Standards and Open Source Tools

Given the complexity and diversity of needs for registered NDE 4.0 data, and the breadth of NDE modalities, no single vendor is going to be able to solve every registration and NDE data integration problem. Equipment support is particularly significant because positioning metadata is essential to the registration process. End-user NDE organizations will need to be able to build on legacy systems, adding instrumentation and new equipment from a variety of vendors. Specialist businesses and consultants will need to be able to integrate the data from all those systems into a coherent application-specific NDE 4.0 database. A vibrant NDE 4.0 industry will require open standards for registration telemetry and metadata so that NDE equipment vendors can build interoperable equipment at low cost and without specialist knowledge. Simple but flexible cross-modality open standard file formats with standard metadata for facilitating registration will be needed to make routine NDE 4.0 data registration practicable. It will be too expensive to reinvent everything for each modality. The only current open standard format is DICONDE [47], built on the medical DICOM [48] standard. Unfortunately, DICOM is mind-bogglingly intricate: the specification is 21 volumes, many of which are hundreds or even more than a thousand pages. That DICONDE is hidden behind an ASTM paywall has also limited its adoption by academics and small businesses. Open-source [49] tools built to open specifications have the potential to be the lubricant that enables smooth interoperability in the NDE 4.0 data domain, and therefore enables a vibrant NDE 4.0 industry. Open source, built on the "freedom to fork," mitigates the risk of building a business dependent on a proprietary external toolkit. Open source tools have the potential to mitigate much of the excess complexity of dealing with 3D geometry and provide a foundation accessible across the industry, from equipment vendors to system integrators to end-user engineers. The aforementioned OpenCV [22] computer vision library has addressed that need for computer vision programming. The famous TensorFlow [50] addresses that need in machine learning. The OpenCascade [51] geometry kernel can help NDE 4.0 systems work with CAD geometry. The geometry management code behind the examples in this chapter has been published open source in the SpatialNDE [40] package, but as an experimental prototype it is not very suitable as long-term infrastructure. A rewrite, SpatialNDE2 [52], is underway but not ready for broader use as of this writing.
Summary

Registering NDE data is important because it allows the data to be viewed, analyzed, and fused in the context of the physical part. Data can be registered in two dimensions with a rotation and a shift, assuming geometry is consistent and accurate. Three-dimensional data can also be registered with a rotation and shift based on the three-dimensional relative pose of object and CAD model, but accurately capturing three-dimensional data locations and measuring the relative pose are significantly
more difficult. Capturing such data will often require both a kinematic model of the measurement robotics and a propagation model of the sensing process. Automatic pose estimation is at the cutting edge of image processing technology and is not yet routine and straightforward. In small objects with consistent geometry, data from modalities such as computed tomography and ultrasound that can be interpreted volumetrically in 3D can be registered and visualized directly in 3D with reasonable accuracy. In modalities where data is more meaningfully referenced to a surface, or for larger objects where geometric inconsistency and bending distortion are more significant, it makes more sense to register data through the 3D object onto a 2D surface parameterization. The parameterization domain provides an alternative context for NDE data analysis and fusion which may be more robust, more meaningful, and/or simpler to implement. The data needs to be stored in a format where it can be found when needed, for processing or analysis of the specimen or for visualization for a technician. For large specimens such as entire aircraft that do not fit into a single NDE scan, a spatial database is essential so that the data can be searched by location. How best to manage the balance between raw data and processed output in that database is an open question. Either way, errors will inevitably creep into the database, and the system must be designed so that it is robust in the face of errors. Every organization will have different needs and different goals for their NDE 4.0 systems depending on their specimens and their mission. No single approach will solve the full breadth of problems. Addressing these needs will require common, low-cost tools and standards that can be easily customized and integrated to meet a particular organization's needs. The potential and scope of the NDE 4.0 market is huge, and straightforward registration will be an enabling component of that market.
Cross-References

▶ Introduction to NDE 4.0
References

1. Mukherjee S, Huang X, Rathod VT, Udpa L, Deng Y. Defects tracking via NDE based transfer learning. In: 2020 IEEE international conference on prognostics and health management (ICPHM). 2020. https://doi.org/10.1109/ICPHM49022.2020.9187034.
2. Gregory ED, Holland SD. State tracking of composite delaminations with a Bayesian filter. In: 2015 IEEE 14th international conference on machine learning and applications (ICMLA). 2015. p. 600–3. https://doi.org/10.1109/ICMLA.2015.189.
3. Tuegel EJ, Ingraffea AR, Eason TG, Spottswood SM. Reengineering aircraft structural life prediction using a digital twin. Int J Aero Eng. 2011;2011:154798. https://doi.org/10.1155/2011/154798.
4. Holland SD, McInnis C, Radkowski R, Krishnamurthy A. NDE data analysis and modeling in 3D CAD context. Mater Eval. 2020;78:95–103.
5. Brierley N, Smith RA, Turner N, Culver R, Maw T, Holloway A, Jones O, Wilcox PD. Advances in the UK toward NDE 4.0. Res Nondest Eval. 2020;31:306–24. https://doi.org/10.1080/09349847.2020.1834657.
6. Kobryn P, Boden B. Digital thread implementation in the Air Force: AFRL's role. Presented at the NIST 2016 Model-Based Enterprise Summit, Gaithersburg, April 12–14, 2016. https://www.nist.gov/el/systems-integration-division-73400/mbe-2016-presentations
7. Nlign Analytics. Aircraft Structural-Life Management Software. https://nlign.com. Accessed 18 Mar 2021.
8. DT4MRB Team. Digital Thread for Material Review Board. Presented at AA&S 2016, March 21–24, 2016, Grapevine. https://nlign.com/presentation-on-digital-thread-for-material-reviewboard-dt4mrb-at-aas-conference/
9. Paredes SA. Improving damage and repair evaluation using structural data visualization and archival techniques. Presented at Composites and Advanced Materials Expo (CAMX) 2015, October 26–29, 2015, Dallas. https://nlign.com/navair-describes-their-use-of-nlign-at-2015camx/
10. Chowdhury MS, Abel-Hafez MF. Pipeline inspection gauge position estimation using inertial measurement unit, odometer, and a set of reference stations. ASCE-ASME J Risk Uncertain Eng Syst B. 2015;2:021001–10. https://doi.org/10.1115/1.4030945.
11. Alpha TR, Synder JP. The properties and uses of selected map projections. US Geological Survey IMAP 1402. 1982. https://doi.org/10.3133/i1402.
12. US National Geodetic Survey. Datums and reference frames. https://www.ngs.noaa.gov/datums/index.shtml. Accessed 18 Mar 2021.
13. Mina M, Udpa SS, Udpa L, Yim J. A new approach for practical two dimensional data fusion utilizing a single eddy current probe. Rev Progress Quant Nondestruct Eval. 1997;16:749–55. https://doi.org/10.1007/978-1-4615-5947-4_98.
14. Gros XE. Applications of NDT data fusion. Springer; 2001.
15. Liu Z, Forsyth DS, Komorowski JP, Hanasaki K, Kirubarajan T. Survey: state of the art in NDE data fusion techniques. IEEE Trans Instrument Meas. 2007;56:2435–51. https://doi.org/10.1109/TIM.2007.908139.
16. Horn D, Mayo WR. NDE reliability gains from combining eddy-current and ultrasonic testing. NDT&E Int. 2000;33:351–62. https://doi.org/10.1016/S0963-8695(99)00058-4.
17. Brierly N, Tippetts T, Cawley P. Data fusion for automated non-destructive inspection. Proc Roy Soc A. 2014;470:20140167. https://doi.org/10.1098/rspa.2014.0167.
18. Wu RT, Jahanshahi MR. Data fusion approaches for structural health monitoring and system identification: past, present, and future. Struct Health Monit. 2020;19:552–86. https://doi.org/10.1177/1475921718798769.
19. Zitová B, Flusser J. Image registration methods: a survey. Image Vis Comput. 2003;21:977–1000. https://doi.org/10.1016/S0262-8856(03)00137-9.
20. Hughes JF, Van Dam A, McGuire M, Sklar DF, Foley JD, Feiner SK, Akeley K. Computer graphics: principles and practice. 3rd ed. London: Pearson Education; 2014.
21. Sahli H, El-Sheimy N. A novel method to enhance pipeline trajectory determination using pipeline junctions. Sensors. 2016;16:567. https://doi.org/10.3390/s16040567.
22. Bradski G. The OpenCV library. Dr Dobb's Journal of Software Tools. 2000.
23. Legnani G, Casalo F, Righettini P, Zappa B. A homogeneous matrix approach to 3D kinematics and dynamics – II. Applications to chains of rigid bodies and serial manipulators. Mech Mach Theory. 1996;31:589–605.
24. Axelsson P. Bayesian state estimation of a flexible industrial robot. Control Eng Pract. 2012;20:1220–8.
25. Summan R, Pierce S, Dobie G, Hensman J, MacLeod C. Practical constraints on real time Bayesian filtering for NDE applications. Syst Signal Process. 2014;42:181–93.
26. OpenCV: Camera Calibration and 3D Reconstruction. https://docs.opencv.org/master/d9/d0c/group__calib3d.html. Accessed 18 Mar 2021.
27. MRPT: Mobile Robotics Programming Toolkit. https://www.mrpt.org. Accessed 18 Mar 2021.
28. Fischler MA, Bolles RC. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun ACM. 1981;24:381–95.
29. Radkowski R, Garrett T, Holland S. 3D machine vision technology for automatic data integration of ultrasonic data. Presented at QNDE 2019 (Portland, OR). https://www.iastatedigitalpress.com/qnde/article/id/8684/
30. Aldoma A, Vincze M. CAD-model recognition and 6DOF pose estimation using 3D cues. In: 2011 IEEE international conference on computer vision workshops. 2011. p. 585–92. https://doi.org/10.1109/ICCVW.2011.6130296.
31. Radkowski R, Garrett T, Ingebrand J, Wehr D. Tracking expert – a versatile tracking toolbox for augmented reality. In: Proceedings of ASME 2016 international design engineering technology conference. 2016. https://doi.org/10.1115/DETC2016-60401.
32. Tsai CY, Tsai SH. Simultaneous 3D object recognition and pose estimation based on RGB-D images. IEEE Access. 2018;6:28859–69. https://doi.org/10.1109/ACCESS.2018.2808225.
33. Besl PJ, McKay ND. A method for registration of 3D shapes. IEEE Trans Pattern Anal Mach Intell. 1992;14:239–56. https://doi.org/10.1109/34.121791.
34. Piegl L, Tiller W. The NURBS book. 2nd ed. Berlin: Springer; 1997. ISBN 3540615458.
35. Drebin RA, Carpenter L, Hanrahan P. Volume rendering. Comput Graph. 1988;22:65–74. https://doi.org/10.1145/378456.378484.
36. Crawfis R, Xue D, Zhang C. Volume rendering using splatting. In: Hansen CD, Johnson CR, editors. The visualization handbook. Burlington: Elsevier Butterworth-Heinemann; 2005.
37. Blender Online Community: Blender – a 3D modeling and rendering package. https://www.blender.org. Accessed 18 Mar 2021.
38. Autodesk, Inc. Maya. https://autodesk.com/products/maya. Accessed 18 Mar 2021.
39. Pixar: RenderMan. https://renderman.pixar.com. Accessed 18 Mar 2021.
40. Holland SD. SpatialNDE. https://thermal.cnde.iastate.edu/spatialnde.xhtml. Accessed 18 Mar 2021.
41. ISO/IEC JTC1/SC32 Data management and interchange technical committee: ISO/IEC 13249-3:2016 Information technology – Database languages – SQL multimedia and application packages – Part 3: Spatial. https://www.iso.org/standard/60343.html
42. Oracle Corporation: Spatial and Graph features in Oracle Database. https://www.oracle.com/database/technologies/spatialandgraph.html. Accessed 18 Mar 2021.
43. PostGIS Project Steering Committee: PostGIS. https://postgis.net/. Accessed 18 Mar 2021.
44. Torvalds L. Git – fast, scalable, distributed revision control system. https://git-scm.com/. Accessed 18 Mar 2021.
45. SAP Software. https://www.sap.com
46. Castaldo J. The untold tale of Target Canada's difficult birth, tough life and brutal death. Can Bus. 2016;89:36–49. https://www.canadianbusiness.com/the-last-days-of-target-canada/
47. ASTM Subcommittee E07.11 on Digital Imaging and Communication in Nondestructive Evaluation: ASTM E2339-15: Standard Practice for Digital Imaging and Communication in Nondestructive Evaluation (DICONDE). https://www.astm.org/Standards/E2339.htm. Accessed 18 Mar 2021.
48. NEMA Medical Imaging Technology Association: DICOM (ISO 12052). https://www.dicomstandard.org/. Accessed 18 Mar 2021.
49. Open Source Initiative. https://opensource.org. Accessed 18 Mar 2021.
50. Abadi et al. TensorFlow: large-scale machine learning on heterogeneous systems. 2015. https://www.tensorflow.org. Accessed 18 Mar 2021.
51. OpenCascade: OpenCascade. https://www.opencascade.com/. Accessed 18 Mar 2021.
52. Holland SD. SpatialNDE2. https://thermal.cnde.iastate.edu/spatialnde2.xhtml. Accessed 18 Mar 2021.
NDE 4.0: Image and Sound Recognition
16
Kimberley Hayes and Amit Rajput
Contents

Introduction
Motivations
Pattern Recognition in NDE 4.0
  Sound and Image Data Structures in Pattern Recognition
  Machine Learning Helps Pattern Recognition
  Neural Networks for Data Classification
  Advancements Enabling Machine Learning
  Available Neural Networks Software and Platforms
Image Recognition
  Background
  Image Recognition in NDE 4.0
Sound Recognition
  Background
  Fundamentals of Sound and Acoustic
  Sound Recognition in NDT and SHM
Challenges and Insights
  Overview of Challenges and Potential Solutions
  Potential Solutions and Future Direction
Summary
References
Abstract
Advances powered by Artificial Intelligence (AI) centric technologies have enveloped nearly every aspect of our lives. Of the many aspects of AI, seven patterns have been classified, with the most common being the recognition pattern (Walch, Kathleen, Cognitive World Contributor Group,
September 17, 2019. The seven patterns of AI. https://www.forbes.com/sites/cognitiveworld/2019/09/17/the-seven-patterns-of-ai/?sh=71056b2b12d0). This chapter focuses on pattern recognition with a subset emphasis on image and sound as they may relate to NDE 4.0. Optical character recognition (OCR), which has leveraged image recognition for the past decade in document conversion and computer-assisted check deposit, may set a precedent for AI-assisted flaw detection systems for radiographic images. Computer vision (CV) is the base building block for extraction of data from an image and can recognize objects using algorithms and machine learning concepts (Brownlee, Jason. May 22, 2019 (updated January 27, 2021). Deep Learning for Computer Vision. A Gentle Introduction to Object Recognition with Deep Learning. https://machinelearningmastery.com/object-recognition-with-deep-learning/). Computer vision has been integral in detection, segmentation, classification, monitoring, and prediction of radiographs in the medical community and has applicability in visual and radiographic inspection in industry and the NDE community. In sound recognition, a large portion of defect formations and flaw mechanical movements release energy in the form of elastic waves with a broad frequency spectrum. Typically, these signals are digitized and converted into amplitude time series. Regardless of their frequency content, these digital acoustic, or sound, signals can be analyzed and classified by any method that applies to time series data, including those developed specifically for audible sound signals such as deep learning algorithms.

Keywords
Pattern recognition · Artificial intelligence · Machine learning · Image recognition · Sound recognition · Deep learning · Convolutional neural network · Computer vision
Introduction

Digital replication of human sensory systems proliferates the technological evolution in NDE 4.0. Rudimentary visual inspection is one of the oldest forms of NDE, and the vast acceleration in general industry to harness processing solutions for increased productivity, reliability, and safety has become commonplace, especially in the automotive industry. These developments are vital tools to support inspectors as the magnitude of digital data becomes integral to industry. Secondly, acoustics and frequency recognition have intrinsic value to the core inspection methods of advanced ultrasonics, with many tentacles in acoustic emission, phased array, and guided wave, but harnessing Artificial Intelligence's ability to synthesize and process copious amounts of data through algorithmic processing delivers augmented capabilities to the technician of tomorrow. This chapter presents a general overview
of image and sound recognition as it relates to the world of inspection and nondestructive testing.
Motivations

With the emerging new sensing and robotic technologies, the magnitude of the generated image and sound data is growing dramatically. This makes it hardly possible for humans to go through the data and analyze them one by one. Along with this, the online monitoring of assets and procedures is among the most wanted technologies for automation, from surveillance systems to structural health monitoring (SHM) and manufacturing. Therefore, the demands listed here for automatic, smart, reliable sound and image analysis and recognition are escalating in importance across industry:
1. The enormous volume of generated NDT images and sound signals
2. Fast and reliable online monitoring systems, such as in situ monitoring of additive manufacturing processes
3. The availability of open-source systems for image and sound recognition that need evaluation of applicability
All the instances above require the implementation of advanced recognition systems, which are discussed in this chapter. A focus on artificial neural networks (ANN) as the novel data recognition approach is inspired by human brain functionality. The complexity of the inherent human function of vision entails an impressive mental supercomputer if evaluated as an automated system. "The eye contains 150 million light-sensitive cells that are actually an outgrowth from the brain with neurons devoted to visual processing in the hundreds of millions and neural operation. The act of learning is equally as daunting as it is stated there are about 30,000 visual categories and it is estimated that the learning trajectory is approximately 4–5 categories per day" [1]. Likewise, the hearing process entails the same features in the brain. The hearing signals from the ears are transmitted to the brain, and intense interaction of cells and synapses extracts the features and characteristics of the sound and makes connections with previously experienced events. Synthetic image recognition has been advancing over the past decades with the advent and exploitation of computer technology and advances in AI. Managing the influx of images and the training required to make useful deployments of a "learning" procedure is equally progressing in various industries, including NDE. Unmanned Aerial Systems with onboard video often require secondary processing, which is redundant and inefficient. The exponential developments from the automotive industry for self-driving cars and security facial recognition present vetted adjacent utilization. Additionally, sound's role in inspection impacts many methods, and leveraging the available algorithms for assisted analysis will play an important role for the future. This chapter focuses on pattern recognition with a subset emphasis on image and sound as they may relate to NDE 4.0.
Pattern Recognition in NDE 4.0
Sound and Image Data Structures in Pattern Recognition
Digitized sound and images are sets of numbers stored in the form of arrays or matrices. As sound is a time series of voltages received from the sensors, it is represented as an array of data, commonly with fixed time intervals. This is also called 1-dimensional (1D) data, referring to an array of data listed in one direction (time). In contrast, images are 2-dimensional (2D) data, where each number is associated with a certain location in the 2D spatial domain. In general, 2D data are presented in the form of matrices, as shown in Fig. 1, and each single unit is called a scalar or a pixel. Both sound and image data can be represented as a series or matrix of numbers with a certain spectrum. Usually, the normalized data are presented in the range of 0–100, covering the minimum and maximum values in the dataset. The purpose of pattern recognition toolboxes is to detect specific patterns in given data and relate them to data collected in the past. For example, given pictures of a cat or dog, a pattern recognition toolbox can identify whether the input is a cat, a dog, or an unknown object. While 3D images and multichannel soundtracks are available, this work mainly focuses on 2D images and 1D soundtracks, considering that advanced data structures are basically combinations of simple 1D or 2D data forms. A data structure is a method of organizing objects to be processed by one or more
Fig. 1 Data structures of sound and images, in forms of vector and matrix, respectively. The smallest unit of data is a scalar associated with a single point data. (Made by author)
computer programs; several different types are common: trees, binary trees, forests, and lists. Many are generalizations of terms in mathematics and operations research (management science): for example, vectors, matrices, files, and queues. Data structure terminology focuses on the structural rather than the mathematical property of each item, calling vectors linear lists and calling matrices arrays [2].
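To make the 1D/2D distinction concrete, here is a minimal Python/NumPy sketch (our illustration, not from the original text) that builds a sound vector and an image matrix and normalizes them to the 0–100 range described above; the sampling rate and array sizes are arbitrary placeholders.

```python
import numpy as np

# 1D data: a sound signal sampled at fixed time intervals (here 1 kHz for 1 s)
fs = 1000                                # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)            # time axis
sound = np.sin(2 * np.pi * 50 * t)       # 50 Hz tone standing in for sensor voltages
print(sound.shape)                       # (1000,) -> a vector (1D array)

# 2D data: a grayscale image, each entry (pixel) tied to a spatial location
image = np.random.rand(64, 64)           # random stand-in for acquired image data
print(image.shape)                       # (64, 64) -> a matrix (2D array)

# Normalization to the 0-100 range covering the dataset's min and max
def normalize(data, lo=0.0, hi=100.0):
    dmin, dmax = data.min(), data.max()
    return lo + (data - dmin) * (hi - lo) / (dmax - dmin)

print(normalize(image).min(), normalize(image).max())  # 0.0 100.0
```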
Machine Learning Helps Pattern Recognition
Artificial Intelligence (AI) is not a novel topic, nor are the techniques, but increased computing power and available data present capabilities allowing for an escalation of adoption and applicability. Categorization of the conventional aspects of AI varies, but a few to consider are:
1. Hyper-personalization, heavily utilized in consumer consumption of individual content, products, and services
2. Autonomous systems, with the ability to operate on their own with little or no human interaction
3. Predictive analytics, to make predictions about unknown future events (using techniques like data mining, statistics, modeling, and machine learning)
4. Decision support, for coordinating data delivery, analyzing data trends, providing forecasts, developing data consistency, quantifying uncertainty, and anticipating the user's data needs [3]
5. Conversation/human interaction, through the use of messaging apps, speech-based assistants, and chatbots to automate communication and create personalized customer experiences at scale [4]
6. Goal-driven systems, to find the "hidden rules" solving challenging problems
7. Recognition systems, to predict similarities using computer vision and machine learning
For image and sound, the best-fitting category is recognition systems. A subset of AI specific to image processing is leveraged through computer vision (CV), simply stated as seeing and interpreting what is seen. Computer vision emerged in the 1950s with two-dimensional imaging for statistical pattern recognition. A subset of CV is machine vision, which is more fitting for NDT in that it is targeted at visually identifying product defects and process inefficiencies. An example machine vision system may be composed of optical cameras mounted on a production line, whose acquired images are processed by a machine learning (ML) model that analyzes the data and delivers predictions that can be handled as feedback, for example, to optimize the production process and ultimately the product. Additionally, "visually" identifying does not necessarily mean using electromagnetic radiation in the visible light spectrum; there are machine vision systems that use electromagnetic radiation in the X-ray range to analyze defects in a microprocessor wafer. It is important to convey that machine vision is not limited to a specific range of radiation wavelengths.
Historically, rule-based algorithm techniques fostered condition-driven deductions within programmed capacity. A rule-based use case in the medical industry may be illustrated as "if a patient has a runny nose, fever, and cough, then the conclusion is that the patient has a cold," a set of programmed conditions that drive the prescribed conclusion. Expansive opportunities propagate from machine learning techniques that can learn and improve through experience without direct programming. In the late 1990s,
machine learning became a scientific discipline in which algorithms can learn from data rather than being limited to a programmed framework. Its origins go back to the likes of Turing (and others) in the 1930s–1940s, when some of the basic techniques behind neural networks started developing, but computing power could not exploit these until the late 1970s or early 1980s. The plethora of algorithms presents a fertile landscape for the vast needs of the NDE community. Advancements through machine learning have given way to deep learning by harnessing the potential within neural networks. Available algorithms include decision trees, support vector machines, K-means clustering, K-nearest neighbor, Naïve Bayes classifiers, random forests, Gaussian mixture models, linear regression, logistic regression, principal component analysis, and many others. Navigating this vast landscape involves dynamics as varied as the industry itself, as solutions often require combinations of strategies. Some of the widely used algorithms can be seen in Fig. 2. Selecting the proper algorithm takes many forms, but considerations include, without being limited to: size of training data, categorization of data, required feature set, linearity, and patterns. "During model development, typically, several different architectures are tried, and hyper-parameters adjusted to find a model that learns the target problem well, avoids overfitting and can learn efficiently from available data" [5]. Additionally, during the technical justification phase of procedure development, clear validation metrics must be established to assess the fundamental measure and visualize statistical bias and variance [6]. The right solution for the right requirement; a minimal sketch of this model-selection loop follows below.
Common across many industries are the burdensome effects of corrosion. One deployment in the NDE space tackled the $20 billion rust problem the United States Navy was experiencing, with a $3 billion annual spend to combat this challenge. Leveraging the AI/ML of a Google partner, a drone-deployed solution detects, prioritizes, and predicts maintenance needs [7]. Expansive developments in this arena have significant efforts underway with promising results. Open-source solutions and access to online image databases foster acceleration. TensorFlow algorithms and ImageNet repositories present a competitive landscape in which validation metrics must be established early in the development process.
Fig. 2 Machine learning for identification
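As a hedged illustration of the selection process described above, the sketch below tries several of the listed algorithms on a synthetic stand-in dataset and scores each against a held-out validation split using scikit-learn; the dataset, split ratio, and accuracy metric are all assumptions for demonstration, not a prescription.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for labeled inspection data (e.g., defect vs. no defect)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Several of the algorithms listed above, tried against the same validation split
models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "k-nearest neighbor": KNeighborsClassifier(),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_val, model.predict(X_val)))
```

In practice, accuracy alone is rarely sufficient for ADR-style problems; the validation metrics mentioned above (bias, variance, sensitivity) would be chosen during the technical justification phase.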
Neural Networks for Data Classification
Since 2010, the best-performing artificial-intelligence systems – such as the speech recognizers on smartphones or Google's latest automatic translator – have resulted from a technique called "deep learning." Deep learning expands on neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1943 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what is sometimes called the first cognitive science department.
An artificial neural network (ANN) is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron that receives a signal processes it and can signal neurons connected to it. The "signal" at a connection is a real number, and the output of each neuron is computed by some nonlinear function of the sum of its inputs. The connections are called edges. Neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times [8].
Figure 3 demonstrates simple image recognition/classification by a neural network. The image, a 2D matrix of data, is processed into a set of new matrices, for example, various derivatives, and then fed to the neural network. The output of the neural network defines the type or configuration of the input image. A similar algorithm can be deployed for sound recognition, in which the frequency content of the time-series data is presented in the form of an image (spectrogram) over a certain time span.
Fig. 3 Neural network: input data goes through processing and the output classifies the input
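The mechanics described above (weighted edges, a nonlinear function of the summed inputs, layered signal flow) can be illustrated with a toy forward pass; the network size, weights, and input values below are arbitrary illustrative choices, not a trained model.

```python
import numpy as np

def sigmoid(x):
    # nonlinear function applied to the weighted sum of a neuron's inputs
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# A tiny fully connected network: 4 inputs -> 3 hidden neurons -> 2 output classes
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # weights/biases of the hidden layer
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)   # weights/biases of the output layer

x = np.array([0.2, -0.5, 0.1, 0.9])             # one input sample (e.g., image features)

hidden = sigmoid(x @ W1 + b1)                   # signals travel input -> hidden layer
output = sigmoid(hidden @ W2 + b2)              # hidden -> output layer
print(output, "predicted class:", output.argmax())
```

Training would then consist of adjusting W1, b1, W2, and b2 so that the outputs match labeled examples, which is exactly the weight adjustment "as learning proceeds" described above.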
Advancements Enabling Machine Learning
The rapid development of machine learning has followed the path of Moore's Law, doubling roughly every 2 years, and the value of human-machine collaboration is best served when responsibilities are parceled out according to individual proficiencies. Humans are agile and adaptive, whereas computers are exponentially better equipped to compute massive amounts of data. Also, given the availability of open-source and commercially available algorithms, the level of adoption will steadily increase. Matching the hardware to the objective is essential, and developments over the years have made this more efficient, but not all computers manage all data with the same efficiency. As stated in section "Sound and Image Data Structures in Pattern Recognition," the associated data segmentations can be assessed as: scalar – a single number; vector – an array of numbers; matrix – a 2D array; tensor – an n-dimensional array of numbers. These must be considered when selecting computing power.
Computing platforms: In general, a Central Processing Unit (CPU) handles 1x1 data units (scalars) at tens of operations per cycle; a Graphics Processing Unit (GPU) accommodates 1xN data units (vectors) at tens of thousands of operations per cycle; and a Tensor Processing Unit (TPU) manages NxN data units (a tensor is an n-dimensional matrix) at up to 128,000 operations per cycle. TPUs process both matrices (which may be thought of as one-layer tensors) and tensors (which may be thought of as multiple layers of matrices).
1. Central Processing Unit (CPU): designed for general purposes (computers and mobile phones running applications or software), 32-bit or 64-bit max. Easy to program, supporting C/C++, Scala, Java, Python, and others, but with limitations for all but simple ML.
2. Graphics Processing Unit (GPU): specialized for processing images and videos; custom hardware (gaming and video) used where many DSP operations such as multiplications and additions are needed. Simpler processing units compared to CPUs, but a larger number of cores, ideal for applications that process in parallel, like the pixels of images or videos. Programmed in languages such as CUDA and OpenCL; a little more limited compared to CPUs.
3. Tensor Processing Unit (TPU): built from the bottom up as a custom ASIC by Google for processing large volumes of data at low precision; operates well with Google services, unlocking their AI features such as machine learning capability. Very fast at performing dense vector and matrix computations, but with very low flexibility.
4. Field-Programmable Gate Array (FPGA): in the past, a configurable chip mainly used for implementing glue logic and custom functions, but now able to host tailor-made architectures for specialized applications. High performance, low cost, and low power consumption compared to CPU/GPU. Can be programmed in OpenCL and high-level synthesis (HLS), and APIs and libraries can be provided for frameworks such as Python, Scala, Java, R, and Apache Spark. The limitations of FPGAs are not confined to TensorFlow [9].
Available Neural Network Software and Platforms
Neural network software is used to simulate, research, develop, and apply artificial neural networks. Using software gives access to versatile operations instead of building the structures directly. The majority of available neural network implementations are custom implementations in various programming languages and on various platforms. There are also many programming libraries that contain neural network functionality and can be used in custom implementations (such as TensorFlow, PyTorch, Keras, and CAFFE, typically providing bindings to languages such as Python).
Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the Internet. Cloud computing can offer huge benefits over traditional computing, such as efficient data-processing strategies and computing power, particularly for machine learning algorithms. In addition, the migration of NDT and SHM data to the cloud in some industries will make cloud computing inevitable.
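As a hedged example of using such a library rather than building structures directly, the sketch below defines a small image classifier in Keras (TensorFlow); the input shape, layer sizes, and class count are placeholders that would be set by the actual inspection data.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A small convolutional classifier for, e.g., 64x64 grayscale inspection images
# with 3 output classes (shapes and class count are illustrative placeholders).
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would then be: model.fit(x_train, y_train, validation_data=(x_val, y_val))
```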
Image Recognition Background
History – Computer vision's early developments, back in the 1960s, focused on mimicking human vision by querying the computer on what it is seeing, in areas such as high-resolution imagery, X-rays, and MRIs. In the 1970s, 3D extraction from images gave way to algorithms still relevant today, like extraction of edges, labeling of lines, nonpolyhedral and polyhedral modeling, representation of objects, optical flow, and motion estimation. Later that decade, NDE leveraged these capabilities in a publication from September 1979 [10], but only in recent years has the uptick occurred. The 1980s presented an escalation weighted toward mathematical analysis and quantitative emphasis, like scale-space, shading, texture, and focus, as well as contour modeling. The 1990s delivered advancements in projective 3D reconstruction that shifted focus to the repurposing of bundle adjustment, which led to sparse 3D reconstruction from multiple images. In this era, the first statistical learning techniques could recognize faces in images (see Eigenface). Nearing the end of that decade, collaboration between computer graphics and vision produced image-based rendering, image morphing, panoramic image stitching, and early light-field rendering.
Algorithms continue to evolve to solve individual challenges, compiling tasks of acquiring, processing, analyzing, and "understanding" digital images to extract high-dimensional data and produce symbolic information. Feature-based methods, in conjunction with machine learning and then deep learning, have catapulted legacy developments into the present. Early radiomics used predefined features (shape, intensity, and texture), whereas deep learning trains feature representations automatically.
Early utilization leveraged conventional OCR for image-to-text conversion, but with the advanced capabilities of multilayer neural networks, the ability to understand is becoming possible. Work from 1998 broke through using Convolutional Neural Networks (CNN) with LeNet-5 to decipher orientation and blurry images; that work on "Gradient-based learning applied to document recognition" still has relevance today [11]. Using LeNet, it became possible to correctly recognize text presented in a nonvertical orientation or even in cases where the document image was somewhat blurry.
Traditional use of machine vision had many utilitarian deployments. It is well suited for measurement, document scanning, and many other applications. Adoption in gauging/measurement simply leverages a fixed-mount camera to capture images and calculate distances using machine vision systems for accept/reject decisions. Large-scale production demands less intrusive and faster solutions; automated systems with acuity in millimeters and response in milliseconds, replacing conventional manual contact gauges, ideally fit the migrations of industry. Warehousing or production/assembly lines may use these methods for detecting the presence or absence of objects, and as more robots are deployed, machine vision systems may be used for guidance. Other uses of machine vision include text recognition capabilities that can "read," as with Optical Character Recognition (OCR), which converts image to text that can be exported to trigger an action or a decision, and 1D/2D barcode reading and identification. The conventional use cases are vast and have long since been deployed as commonplace.
As complexity and broadening applications press against convention, more elaborate solutions emerge to overcome the obstacles. In the previous example of traditional OCR, demand escalation imposed greater challenges like letter/character orientation, blurring, smaller fonts, and script interpretation. Conventional solutions worked to a degree even before the deep learning boom of 2012; the advantages of deep learning and CNNs have since overcome many of these issues. The correlative aspects of leveraging CNNs in NDE may escalate the probability of detection and characterization of discontinuities in various image-based methodologies like visual testing and radiography. Recent publications present success in manual methods like magnetic particle and liquid penetrant testing [12].
The gaming arena has laid groundwork for significant advancements in gesture, motion capture, and prediction that can lend aid to commercial exploitation, for example allowing surgeons to rehearse before operating on a real person, or shoppers to try on clothes before purchasing. Other image recognition capabilities have found their way to deployment; examples include fingerprint recognition and biometrics, and facial recognition that can reliably detect individual distinctions from approximately 40 data points. The aforementioned uses lay the foundation for the continued migration to autonomous cars, which face continuing image recognition hurdles in refining object identification to distinguish humans, cars, bicycles, signs, etc., a seemingly innate parallel function within the human brain not yet reliably replicated. There are profound implications in the similarities and differences between how human brains and computers process images and the "learning process" associated with them. In human vision, there are deep neurological and psychological implications that have been evident in sight restoration processes and in bionic vision developments. Those developments shall migrate in some form to AI image recognition processes, as in [13].
Image Recognition in NDE 4.0
Automated image recognition is one of the core building blocks of the NDE 4.0 era. The goals of automated image recognition techniques can be manifold depending on the use case and include improved image acquisition; automated classification, detection, segmentation, characterization, or prediction of anomalies; image quality improvement; change detection; volumetric measurement; and remaining-lifetime estimation. While visual image recognition is the most common modality, these same techniques can be applied to several additional imaging modalities such as laser scans, radiography, or ultrasonic imaging. Table 1 illustrates the diversity of use cases in NDE. The list is not meant to be exhaustive.
Given the huge diversity of image recognition use cases in NDE, viewing research trends through the prism of a well-defined recognition target with wide applicability can help highlight key trends. Leveraging AI algorithms to assess large volumes of video and image data for general inspection, methane emissions, and corrosion detection are a few targets that span a wide range of NDE use cases leveraging computer vision. Computer vision (CV) has broader capabilities; image recognition is a subset of CV, with sets of algorithms and techniques to label and classify the elements inside an image. Feature extraction from large datasets is necessary to train the algorithms to determine the boundary ranges of the feature vectors. The vast utilization in various sectors includes automotive, security, healthcare, retail, and marketing, to name a few, and adoption has increased with growing computing potential, improved image resolution, and falling associated costs (https://research.aimultiple.com/image-recognition/).
Automated image analysis to identify anomalous patterns, as it relates to a wide range of NDE use cases, is a topic that has been extensively covered in the research literature. Other applied uses can be seen in: "Common approaches to NDT of glass materials are X-ray Computer Tomography (XCT), optical systems (OS) and acoustic emission testing (AE) [14]. In production, machine vision-based systems are providing reliable defect detection [13], while in-service it would be difficult to deploy equipment and trained operators to track defects and subjectively interpret results. Many studies present such systems [15–18], where image processing algorithms are used to detect glassware defects. To the best of our knowledge, machine learning (ML) approaches have not been extensively applied in the field of NDT on glassware and there are fewer studies on it. In one of them [19], the authors applied Convolutional Neural Networks (CNNs) on images to detect defects in the mouth, body and the bottom of glass bottles achieving an average accuracy rate of 98.4%.
Table 1 NDE use cases

| Imaging modality/technique | Use cases/applications | Approach/automation target |
|---|---|---|
| Visual inspection (image analysis) | Road damage detection | Longitudinal cracks, alligator cracks, potholes, bumps, patches, etc. |
| | Bridge inspections | Concrete cracks, concrete spalling and delamination, fatigue cracks, steel corrosion, asphalt cracks |
| | Aircraft inspection | Surface defects caused by corrosion and cracks, and stains from oil spill, grease, dirt sediments |
| | Wind turbine inspection | Leading edge erosion, surface cracks, damaged lightning receptors, damaged vortex generators |
| Radiography | Inspection of parts in manufacturing line; pipeline inspection | Variations in thickness; measuring varied attributes of the parts, such as dimensions, shape, mass, locations; weld defects |
| Magnetic particle | Casting surface | Surface roughness affecting the reliability of magnetic particle inspection for the detection of subsurface indications in steel castings [Lau S 2019]; air holes, foreign-particle inclusions, shrinkage cavities, cracks, wrinkles, and casting fins |
| In situ monitoring (video analysis) | Welding; additive manufacturing; subsea structures | Identifying events such as pipeline exposure, burial, field joints, anodes, free spans, and boulders; events such as field joints, sea-life, marine growth, seabed settlements, auxiliary structural elements, breaks on the external pipeline sheathing, and alien objects near the pipe |
| Laser scans | Confined space inspections, site-level inspections | Defect detection, volumetric measurements |
Alternatives to image-based systems are Resonance Acoustic Method (RAM) [20], known as Acoustic Resonance Testing (ART) [5] NDT systems" (https://rd.springer.com/chapter/10.1007/978-3-030-49186-4_17).
A survey of relevant research literature shows a clear trend towards the application of AI/ML-based approaches on imaging datasets in the NDE realm. Bondada et al. [21] provide an overview of relevant studies over the years that shed light on the evolution of techniques applied to a range of NDE use cases such as crack detection (Motamedi et al. [22]), corrosion detection (Lohade and Chopade [23], Ranjan and Gulati [16]), and corrosion grading (Choi and Kim [24], Itzhak et al. [25], Ji et al. [20], Bondada et al. [21]). A critical challenge that the industry needs to address relates to the sourcing and curation of an extensive and diverse corpus of high-quality datasets. Industry consortia are taking steps in this direction. Crowdsourcing-based approaches for dataset acquisition and curation have helped address such challenges in other domains such as medical radiography. Nash et al. [26] proposed a crowdsourcing-based approach for the acquisition of corrosion datasets.
Sound Recognition Background
Raj Reddy was the first person to take on continuous speech recognition, as a graduate student at Stanford University in the late 1960s. Previous systems required users to pause after each word; Reddy's system issued spoken commands for playing chess. Around this time, Soviet researchers invented the dynamic time warping (DTW) algorithm and used it to create a recognizer capable of operating on a 200-word vocabulary. DTW processed speech by dividing it into short frames, for example 10 ms segments, and processing each frame as a single unit. Although DTW would be superseded by later algorithms, the technique carried on. Achieving speaker independence remained unsolved at this time [27].
Much of the progress in the field is owed to the rapidly increasing capabilities of computers. At the end of the DARPA program in 1976, the best computer available to researchers was the PDP-10 with 4 MB of RAM. It could take up to 100 min to decode just 30 s of speech.
In the long history of speech recognition, both shallow and deep forms (e.g., recurrent nets) of artificial neural networks were explored for many years during the 1980s, 1990s, and a few years into the 2000s. These methods, however, never won out over the nonuniform internal-handcrafting Gaussian mixture model/Hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively. A number of key difficulties had been methodologically analyzed in the 1990s, including diminishing gradients and weak temporal correlation structure in the neural predictive models, in addition to the lack of big training data and big computing power in those early days. Most speech recognition researchers who understood such barriers subsequently moved away from neural nets to pursue generative modeling approaches, until the recent resurgence of deep learning starting around 2009–2010 that overcame all these difficulties. By the early 2010s, speech recognition, also called voice recognition, was clearly differentiated from speaker recognition, and speaker independence was considered a major breakthrough. Until then, systems required a "training" period; a 1987 ad for a doll had carried the tagline "Finally, the doll that understands you" despite the fact that the doll was described as one "which children could train to respond to their voice."
Fundamentals of Sound and Acoustics
Hearing is one of the most crucial means of survival in the animal world, and speech is one of the most distinctive characteristics of human development and culture.
Accordingly, the science of acoustics spreads across many facets of human society: music, medicine, architecture, industrial production, warfare, and more. Acoustics is the branch of physics that deals with the study of mechanical waves in gases, liquids, and solids, including topics such as vibration, sound, ultrasound, and infrasound. In fluids such as air and water, sound waves propagate as disturbances in the ambient pressure level. While this disturbance is usually small, it is still noticeable to the human ear. The smallest sound that a person can hear, known as the threshold of hearing, is nine orders of magnitude smaller than the ambient pressure. The loudness of these disturbances is related to the sound pressure level (SPL), which is measured on a logarithmic scale in decibels.
Physicists and acoustic engineers tend to discuss sound pressure levels in terms of frequencies, partly because this is how our ears interpret sound. What we experience as "higher pitched" or "lower pitched" sounds are pressure vibrations having a higher or lower number of cycles per second. In a common technique of acoustic measurement, acoustic signals are sampled in time and then presented in more meaningful forms such as octave bands or time-frequency plots. Both of these popular methods are used to analyze sound and better understand the acoustic phenomenon.
The entire spectrum can be divided into three sections: audio, ultrasonic, and infrasonic. The audio range falls between 20 Hz and 20,000 Hz. This range is important because its frequencies can be detected by the human ear, and it has a number of applications, including speech communication and music. The ultrasonic range refers to very high frequencies of 20,000 Hz and higher, while the infrasonic range lies below 20 Hz. The limits of the infrasonic, acoustic, and ultrasonic ranges may shift with specific wave amplitudes or acoustic pressure values.
Sound Recognition in NDT and SHM
The tremendous interest in and necessity of developing advanced speech recognition and audio event characterization methods have pushed the limits of machine learning drastically. Recently developed methods can precisely recognize sound events embedded in background noise or overlapped with other events. The potential of recognizing specific sounds convoluted with background noise and other sounds can be extremely beneficial for NDT applications, and scholars have investigated it, resulting in machine learning methods adapted for NDT/SHM applications. A very common example of this phenomenon is fiber breakage in a composite material under stress, which causes the emission of acoustic signals that are captured by transducers or microphones. A wide range of use cases are served by automated sound recognition techniques (Table 2). The most common deep learning-based approach for classification of sounds is to convert the audio file to an image and then use a neural network to process the image (a sketch of this conversion is given at the end of this section).
Recognizing different indoor and outdoor acoustic environments from recorded acoustic signals is an active research field that has received much attention in the last few years. The task is an essential part of auditory scene analysis and involves
Table 2 NDE use cases based on acoustic analysis

| Sound-based NDE technique | Use cases/applications | Approach/automation target |
|---|---|---|
| Acoustic emission | Structural health monitoring of bridges, concrete structures | Identification and classification of cracking modes |
| Ultrasonic guided waves | Leakage detection | Event detection |
| | Crack detection in composites | Mode identification |
| Microphones and hydrophones | In-air and underwater applications | Event detection |
| Ultrasound scans | Steel surface inspections (confined spaces, external inspections) | Wall thickness measurement, flaw detection |
| Sonar | Seafloor searches | Identification/detection of shipwrecks, man-made objects |
summarizing an entire recorded acoustic signal using a predefined semantic description like "office room" or "public place." These semantic entities are denoted as acoustic scenes, and the task of recognizing them as acoustic scene classification (ASC). Most neural network architectures applied for ASC require multidimensional input data. The most commonly used time-frequency transformations are the short-time Fourier transform (STFT), the Mel spectrogram, and the wavelet spectrogram. The Mel spectrogram is based on a nonlinear frequency scale motivated by human auditory perception and provides a more compact spectral representation of sounds compared to the STFT. ASC algorithms typically process only the magnitude of the Fourier transform, while the phase is discarded. The best-performing ASC algorithms from the recent DCASE challenges used almost exclusively spectrogram representations based on logarithmic frequency spacing and logarithmic magnitude scaling, such as log-Mel spectrograms, as the network input. A second approach is to interpret the signal transformation step as a learnable function, commonly denoted as a "front-end," which can be jointly trained with the classification back-end [17]. A third approach is to use unsupervised learning to derive semantically meaningful signal representations. As a complementary read to this article, Barchiesi et al. published an in-depth overview of ASC methods using "traditional" feature extraction and classification techniques prior to the general transition to deep learning-based methods in [18]. Techniques based on hand-crafted audio features and traditional classification algorithms such as support vector machines (SVM) have been shown to underperform deep learning-based ASC algorithms.
Automated classification of environmental sounds, like dog barking and sirens, can be used in applications such as remote surveillance and home automation. An interesting application is home monitoring equipment that identifies different sounds produced in a domestic/interior environment and alerts the user accordingly. With the increased focus on deploying assisted-analysis systems, recently published disclosures have appeared. One such is from Fraunhofer: "Fraunhofer IKTS is using
machine learning algorithms in acoustic signal analysis. The approach had been applied to such a variety of tasks in quality assessment. The principal approach is based on acoustic signal processing with a primary and secondary analysis step followed by a cognitive system to create model data. Already in the second analysis steps unsupervised learning algorithms as principal component analysis are used to simplify data structures. In the cognitive part of the software further unsupervised and supervised learning algorithms will be trained" (https://aip.scitation.org/doi/abs/10.1063/1.5031519). Additionally, "Acoustic pattern recognition is used to evaluate objects, materials, and components or to monitor production processes, machines, and entire plants automatically [1, 2]. It is a combination of approaches for feature extraction and compression, machine learning, and classification. It is able to learn characteristics of acoustic signals in terms of typical temporal and spectral patterns. In that way it automatically creates individual models of signals in order to assess unknown objects or objects in unknown condition" (https://www.ndt.net/article/wcndt2016/papers/we3f3.pdf).
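To make the spectrogram-based pipeline discussed above concrete, here is a minimal sketch using the librosa library to turn an audio recording into the log-Mel spectrogram "image" that ASC networks typically consume; the file path and all transform parameters (FFT size, hop length, number of Mel bands) are illustrative assumptions.

```python
import numpy as np
import librosa

# Load an audio file (the path is a placeholder) and resample to 22.05 kHz
y, sr = librosa.load("recording.wav", sr=22050)

# Log-Mel spectrogram: the 2D representation most ASC networks take as input
S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                   hop_length=512, n_mels=128)
log_S = librosa.power_to_db(S, ref=np.max)  # logarithmic magnitude scaling

print(log_S.shape)  # (n_mels, n_frames) -> can be fed to a CNN like any image
```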
Challenges and Insights
Overview of Challenges and Potential Solutions
While good progress has been made in applying recent advances in automated image and sound recognition to NDE use cases, a number of challenges remain. A majority of these challenges are common to both the image and the sound recognition space, as well as to other AI developments. We outline some of these challenges and then sketch important aspects of the potential solution space.
(I) Validation challenges: An important requirement for automatic defect recognition (ADR) systems in most NDE use cases is to achieve a high degree of sensitivity. Users of such ADR systems need effective, independent validation methodologies that give them confidence in the ability of these systems to meet this requirement. Validation methodologies need to take into account variations in user-specific data acquisition characteristics and other factors that influence the quality of the acquired NDE data.
(II) Lack of sufficient datasets: Given the wide range of potential anomalies of interest in NDE data, and given that many of these anomalies tend to be uncommon, there is an invariable shortage of training datasets. Privacy and confidentiality constraints have further ensured that most of the acquired data remains archived and inaccessible to the developer community.
(III) Generalizability: An area in which humans continue to outperform machine learning models is dealing with situations that were not previously encountered. In other words, humans can generalize much better due to their breadth of knowledge and experience and their ability to reason. Today's ML approaches are for the most part limited by the quality, quantity, and diversity of training data.
(IV) Resistance: Limited understanding, inherent risk aversion, and the innate human reluctance to change mean that adoption is met with heightened scrutiny and barriers to entry. Transparency of the processes, globally accepted terminology, and convergence on validation metrics must be established.
Potential Solutions and Future Direction
Data Augmentation
Training deep learning models usually requires large amounts of training data to capture the natural variability in the data to be modeled. The size of machine listening datasets has increased over the last few years but lags behind computer vision datasets such as the ImageNet dataset, with over 14 million images and over 21 thousand object classes [28]. The only exception to this day is the AudioSet dataset [29], with currently over 2.1 million audio excerpts and 527 sound event classes. This section summarizes techniques for data augmentation to address this lack of data. The first group of data augmentation algorithms generates new training data instances from existing ones by applying various signal transformations. Basic audio signal transformations include time stretching, pitch shifting, and dynamic range compression, as well as adding random noise (see the sketch at the end of this section).
Domain Adaptation
The performance of sound event classification algorithms often suffers from covariate shift, that is, a distribution mismatch between training and test datasets. When deployed in real-world application scenarios, ASC systems usually face novel acoustic conditions caused by different recording devices or environmental influences. Domain adaptation methods aim to increase the robustness of classification algorithms in such scenarios by adapting them to data from a novel target domain. A second challenge arises from the audio recording devices in mobile sensor units. Due to space constraints, microelectromechanical systems (MEMS) microphones are often used. However, scientific datasets used for training ASC models are usually recorded with high-quality electret microphones [15]. As discussed, changed recording conditions have an effect on the input data distribution; achieving robust classification systems in such a scenario requires the application of domain adaptation strategies.
Open Set Classification
Most ASC tasks in public evaluation campaigns such as the Detection and Classification of Acoustic Scenes and Events (DCASE) challenge assume a closed-set classification scenario with a fixed, predefined set of acoustic scenes to distinguish. In real-world applications, however, the underlying data distributions of acoustic scenes are often unknown and can furthermore change over time, with new classes becoming relevant. This motivates the use of open-set classification approaches, where an algorithm can also classify a given audio recording as an "unknown" class. This scenario was first addressed as part of the DCASE 2019 challenge in Task 1C, "Open-set Acoustic Scene Classification."
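The basic signal transformations listed under "Data Augmentation" above can be sketched as follows with librosa; the file path, stretch rate, semitone shift, and noise level are illustrative assumptions, not recommended settings.

```python
import numpy as np
import librosa

y, sr = librosa.load("training_clip.wav", sr=22050)  # path is a placeholder

# Time stretching (rate > 1 speeds up, rate < 1 slows down)
y_stretch = librosa.effects.time_stretch(y, rate=0.9)

# Pitch shifting by two semitones
y_pitch = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)

# Adding random noise at a small relative amplitude
y_noise = y + 0.005 * np.random.randn(len(y))
```

Each transformed signal is then converted to a spectrogram and added to the training set as a new instance, multiplying the effective dataset size without new recordings.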
Transfer Learning
Many ASC algorithms rely on well-proven neural network architectures from the computer vision domain such as AlexNet [30], VGG16 [12], Xception [31], DenseNet [32], GoogLeNet [33], and ResNet. Transfer learning allows the fine-tuning of models that are pretrained on related audio classification tasks (a minimal sketch follows at the end of this section).
The Role of Standards
A critical barrier that needs to be overcome by assisted-analysis and discontinuity recognition systems is the effective validation of such systems. Standards bodies and industry have an important role to play in this regard. The creation of standard validation sets and the definition of performance metrics for these recognition systems are some of the initiatives that could be pursued within these organizations. Also important is winning the trust of field personnel. Convergence on standardized global terminology, validation metrics, bias detection, and transparency about successes throughout the value chain can support reliable adoption.
Workforce Training
As NDE 4.0-based advanced automation approaches are rolled out, the role of NDE technicians is expected to evolve. NDE 4.0 tools are primarily meant to assist and augment NDE technicians. Buy-in from the workforce is a critical barrier that must be overcome, so it is imperative that all stakeholders address this from the early phases. NDE 4.0 vendors should keep in mind considerations such as user friendliness and explainability while designing user interfaces from a human perspective. End-user organizations and service providers must institute workforce training programs to ease this transition.
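A minimal sketch of the transfer-learning idea described under "Transfer Learning" above: a VGG16 base pretrained on ImageNet is frozen and a new classification head is trained on top. Feeding (log-Mel) spectrograms into an ImageNet model assumes they are resized and replicated to three channels; the head sizes and class count are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Pretrained convolutional base (ImageNet weights), classifier head removed
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze pretrained features; train only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # e.g., 10 acoustic scene classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(spectrogram_images, labels, ...) would then fine-tune the new layers
```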
Summary
Automated image and sound recognition is an important building block in NDE 4.0 workflows. The furious pace of innovation in several domains, as it relates to image and sound recognition, can be directly leveraged and adapted to NDE 4.0 use cases. We have provided an overview of recent work in these areas. While these advances have shown great promise in helping push the NDE field forward, several challenges remain, some of which are unique to the NDE field. We have outlined some of the key challenges and touched on areas that need to be explored further. There is much cause for optimism.
References
1. Grady D. The vision thing: mainly in the brain. Discover Magazine. June 1, 1993. https://www.discovermagazine.com/mind/the-vision-thing-mainly-in-the-brain
2. Klinger A. Data structures and pattern recognition. In: Advances in information systems science. Boston: Springer; 1978. p. 273–310.
3. Artificial intelligence for decision making. In: Knowledge-Based Intelligent Information and Engineering Systems, 10th International Conference, KES 2006, Bournemouth, UK, October 9–11, 2006, Proceedings, Part II. https://www.researchgate.net/publication/221020855_Artificial_Intelligence_for_Decision_Making
4. Phillips-Wren G, Lakhmi J. Artificial intelligence for decision making. Berlin/Heidelberg: Springer; 2006.
5. Schmelzer R. Data science vs machine learning vs AI: how they work together. January 7, 2021. https://searchbusinessanalytics.techtarget.com/feature/Data-science-vs-machine-learning-vs-AI-How-they-work-together
6. Klotzbucher M, Mazeika L, Samaitis V, Ashwin P. Qualification of an Artificial Intelligence/Machine Learning Non-destructive Testing System. ENIQ publication, Version 13. Nugenia Association c/o EDF, avenue des Arts 53, B-1000 Bruxelles, Belgium.
7. Dietterich TG, Kong EB. Machine learning bias, statistical bias, and statistical variance of decision tree algorithms. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.38.2702&rep=rep1&type=pdf
8. Srivastav B. The inventive: all about machine learning for non-IT persons. Model 1: Artificial Neural Networks. p. 27.
9. Haynes SD, Stone J, Cheung PYK, Luk W. Video image processing with the sonic architecture. Computer. 2000;33(4):50–7. https://doi.org/10.1109/2.839321
10. Tucker P. US Navy turns to drones, AI to monitor rust. August 27, 2020. https://www.defenseone.com/technology/2020/08/us-navy-turns-drones-ai-monitor-rust/168036/
11. Lecun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. https://ieeexplore.ieee.org/document/726791
12. Popescu D, Anania FD, Cotet CE, Amza CG. Fully-automated liquid penetrant inspection line simulation model for increasing productivity. University Politehnica of Bucharest, IMST Faculty, June 2013. https://www.researchgate.net/publication/275912753_Fully-Automated_Liquid_Penetrant_Inspection_Line_Simulation_Model_for_Increasing_Productivity
13. Beyeler M, Rokem A, Boynton GM, Fine I. Learning to see again: biological constraints on cortical plasticity and the implications for sight restoration technologies. J Neural Eng. 2017;14(5):051003.
14. Walch K. The seven patterns of AI. Forbes, September 17, 2019. https://www.forbes.com/sites/cognitiveworld/2019/09/17/the-seven-patterns-of-ai/?sh=71056b2b12d0
15. Huang M, Wu D, Yu CH, Fang Z, Interlandi M, Condie T, Cong J. Programming and runtime support to blaze FPGA accelerator deployment at datacenter scale. October 2016.
16. Ranjan RK, Gulati T. Condition assessment of metallic objects using edge detection. Int J Adv Res Comput Sci Softw Eng. 2014;4(5):253–8.
17. https://betterprogramming.pub/how-to-do-speech-recognition-with-a-dynamic-time-warping-algorithm-159c2a1bb83c
18. Barchiesi D, et al. Acoustic scene classification: classifying environments from the sounds they produce. https://www.researchgate.net/publication/274514661_Acoustic_Scene_Classification_Classifying_environments_from_the_sounds_they_produce
19. Vanderbrug GJ, Nagel RN. Image pattern recognition in industrial inspection [NBSIR 79-1764].
20. Ji G, Zhu Y, Zhang Y. The corroded defect rating system of coating material based on computer vision. In: Transactions on edutainment VIII. Berlin/Heidelberg: Springer; 2012. p. 210–20.
21. Bondada V, Kumar D, Cheruvu P, Kumar S. Detection and quantitative assessment of corrosion on pipelines through image analysis. Procedia Computer Science. 2018;133:804–11. https://www.sciencedirect.com/science/article/pii/S1877050918310688
22. Motamedi M, et al. Dynamic analysis of fixed cracks in composites by the extended finite element method. https://www.researchgate.net/publication/223548381_Dynamic_analysis_of_fixed_cracks_in_composites_by_the_extended_finite_element_method
23. Lohade DM, Chopade PB. Metal inspection for surface defect detection by image thresholding. https://www.semanticscholar.org/paper/Metal-Inspection-for-Surface-defect-Detection-by-Lohade-Chopade/e321d593df2eab5724f332e6da890d06efd65f25
24. Choi KY, Kim SS. Morphological analysis and classification of types of surface corrosion damage by digital image processing. Corros Sci. 2005;47(1):1–15.
25. Itzhak D, Dinstein I, Zilberberg T. Pitting corrosion evaluation by computer image processing. Corros Sci. 1981;21(1):17–22.
26. Nash WT, Powell CJ, Drummond T, Birbilis N. Automated corrosion detection using crowdsourced training for deep learning. Corros J Sci Eng. 2019. https://meridian.allenpress.com/corrosion/article-abstract/76/2/135/445338/Automated-Corrosion-Detection-Using-Crowdsourced?redirectedFrom=fulltext
27. Zeghidour N, et al. LEAF: a learnable frontend for audio classification. https://arxiv.org/abs/2101.08596
28. https://image-net.org/
29. http://research.google.com/audioset/
30. https://en.wikipedia.org/wiki/AlexNet
31. https://arxiv.org/abs/1610.02357
32. https://arxiv.org/abs/1608.06993
33. https://research.google/pubs/pub43022/
Image Processing 2D/3D with Emphasis on Image Segmentation
17
Andreas H. J. Tewes, Astrid Haibel, and Rainer P. Schneider
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
Measurement Techniques in Brief . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
  X-Ray Radiography and Computed Tomography . . . . . . . . . . . . . . . . . . . . 424
  Magnetic Resonance Tomography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
Applied Segmentation Problems: Classical Approaches . . . . . . . . . . . . . . . . . 427
  Histogram-Based Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
  Examples of Histogram-Based Segmentation Failure . . . . . . . . . . . . . . . . . 428
  Model-Based 3D Geometry Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
  Limitation of Classical Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
Image Segmentation: Toward New Frontiers . . . . . . . . . . . . . . . . . . . . . . . . . 434
  Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
  Convolutional Neural Network: A Summary . . . . . . . . . . . . . . . . . . . . . . . 435
  Semantic Segmentation Using CNNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
  Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
Cross-References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
Abstract
This chapter highlights the application of image processing for automated analysis of images generated by reconstructive imaging techniques (CT, MRI). It is shown that, in addition to the pure image information, contextual knowledge about the imaged structures as well as knowledge about acquisition- or reconstruction-related artifacts is always necessary in order to obtain qualitatively good results. Taking this contextual information into account, however, is cumbersome in classical approaches and is rarely directly transferable to modified problems. The application of deep neural networks to such problems offers
A. H. J. Tewes (*) · A. Haibel · R. P. Schneider Beuth University of Applied Sciences, Berlin, Germany e-mail: [email protected]; [email protected]; [email protected] © Springer Nature Switzerland AG 2022 N. Meyendorf et al. (eds.), Handbook of Nondestructive Evaluation 4.0, https://doi.org/10.1007/978-3-030-73206-6_59
great potential. This is explained, demonstrated, and discussed using the semantic segmentation of image data. Keywords
Imaging · Tomography · Artifacts · Quantitative image analysis · Semantic segmentation · Deep learning
Introduction
2D/3D imaging has always played a key role in enhancing nondestructive techniques. Pictures usually allow for an easy-to-understand explanation of material degradation, crack formation, or geometry deviation, even for an audience that is familiar with neither the measurement problem nor the sensing principle itself. Thus, testing methods that inherently deliver images, such as X-ray radiography, 3D computed tomography, ultrasound, and thermography, have some direct advantages in comparison to acoustic emission, light scattering, and spectroscopic techniques. Ultimately, every technique delivering local measurement values that can be mapped onto a coordinate grid will produce images.
Segmentation is a central tool in the field of image analysis. Segmentation aims to identify contiguous areas, interfaces, surfaces, etc. For sporadic evaluation of 2D data sets, manual segmentation may still be adequate, but for multiple sets of 3D data and evaluation on a regular basis, there is no alternative to automated segmentation. This is where neural networks, and among them deep neural networks in particular, have become a game changer whose importance for NDE 4.0 can hardly be overestimated. This chapter illustrates segmentation problems and classical approaches as well as state-of-the-art solutions using deep neural networks. Typical challenges and limitations based on X-ray computed tomography (CT) and magnetic resonance imaging (MRI) are also mentioned.
Measurement Techniques in Brief
This section gives a short description of the physical measurement principles of the imaging techniques in focus.
X-Ray Radiography and Computed Tomography
Radiography is a measuring method in which three-dimensional objects are irradiated with X-rays and their two-dimensional projection is displayed on a detector. The spatial 3D information of the object is thus lost: small structures within the object that lie one behind the other in the beam path are superimposed in the
projection. The local attenuation of the X-ray beam depends on the photon energy, the irradiated material, its density, arrangement, and thickness (Fig. 1).
By rotating the object during the measurement and irradiating it at many equidistant angular positions (0°–360°), a data set of projections of the object from different perspectives is produced. A three-dimensional image of the object (tomogram) can be calculated from this data set using appropriate reconstruction algorithms. There are different approaches to reconstructing the three-dimensional image from a data set of two-dimensional projections. Besides reconstruction according to the Fourier slice theorem, based on the Radon transform, the methods of filtered back projection and iterative, algebraic methods have become established today.
Due to inaccurate measurements of the projections and nonlinear effects, artifacts, i.e., errors, can occur in the reconstructed 3D images. The reason can be, for example, single defective pixels of the detector, movements of the object during the measurement, scattering of the X-ray radiation in the object, or large density differences in the investigated object. A wide energy distribution of the X-ray radiation, a low bit depth of the detector, or an insufficient number of measured projections also lead to image errors. Single defective pixels of the detector cause dark ring-shaped structures in the 3D image, so-called ring artifacts. If the investigated object moves during the measurement, the 3D images are blurred. Scattering of the X-ray radiation during the measurement leads to background noise in the image. Locally highly absorbing materials in the investigated subject cause stripes and streaks in the tomogram, caused by locally strong absorption compared to the surroundings; these artifacts are called metal artifacts and are usually found in the medical field in CT images of patients with dental fillings or hip prostheses. If a broad X-ray spectrum is used for irradiation, low-energy photons are absorbed significantly more strongly than high-energy photons. On the one hand, this leads to the formation of stripes in the tomogram due to shading of more strongly absorbing areas in the object; on the other hand, to gray value gradients in actually homogeneous sample material. These gray value gradients incorrectly suggest a material density gradient. A low bit depth of the detector leads to insufficient density resolution in the 3D image: gray values that should be assigned to the various sample components smear into broad gray value distributions and overlap with each other.
Fig. 1 Schematic sketch for X-ray radiography (left) and computed tomography (right)
If the number of projections is too low, the 3D image becomes blurred and streaky. All of the image errors described make the segmentation of individual sample components based on their gray value distribution considerably more difficult.
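A hedged sketch of this projection/reconstruction chain using scikit-image's Radon transform utilities on a synthetic phantom; it also illustrates the undersampling artifact just described by comparing a well-sampled scan with one using too few projections. The angle counts and the filter_name argument (available in recent scikit-image versions) are assumptions for illustration.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)  # synthetic test object

# Simulate CT acquisition: projections at equidistant angles over 180 degrees
for n_angles in (180, 20):                   # many vs. too few projections
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image, theta=theta)
    # Filtered back projection, one of the established reconstruction methods
    recon = iradon(sinogram, theta=theta, filter_name="ramp")
    err = np.sqrt(np.mean((recon - image) ** 2))
    print(f"{n_angles} projections -> reconstruction RMS error {err:.4f}")
```

The 20-projection reconstruction shows the blur and streaks described above, which is exactly what complicates gray-value-based segmentation.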
Magnetic Resonance Tomography
If a patient is brought into an external magnetic field along the body's longitudinal axis, the protons in the body align themselves parallel or antiparallel to this magnetic field. Since a larger number of protons align parallel to the external magnetic field, there is a net magnetic moment along the external field. The individual protons carry out a precession movement in this external magnetic field. A high-frequency pulse radiated perpendicular (i.e., at 90°) to this external magnetic field, with the same frequency as the precession frequency of the protons, can trigger a resonance and thus transfer energy to the protons. As a result, a larger number of protons are aligned antiparallel and thus neutralize a correspondingly larger number of protons in the opposite direction. The consequence is that the longitudinal magnetization decreases. In addition, the precession of the protons is synchronized by the high-frequency pulse, so that the protons now precess in phase. The result is a new magnetization vector, the transverse magnetization. If the high-frequency pulse is switched off, the longitudinal magnetization increases again with time constant T1 and the transverse magnetization decreases with time constant T2. These constants are called the longitudinal relaxation time T1 and the transverse relaxation time T2. Longitudinal and transverse relaxation are different processes that proceed independently of each other. The sum vector of the longitudinal and transverse magnetization is called the net magnetization; its transverse component induces a measurable signal in an antenna.
The spatial correlation of the individual signal components is achieved via additional gradient fields in three spatial directions (slice-selecting gradient, frequency-encoding gradient, and phase-encoding gradient), which are superimposed on the external magnetic field during the measurement. Using the Fourier transformation, the strength, frequency, and phase of the various individual signals can be calculated from the total detected signal within a slice and assigned to where they originated.
Image artifacts also occur with MRI and negatively affect the image quality. Artifacts can arise, for example, from the patient's movement during the measurement (movement artifacts) or from flowing blood (flow artifacts). Further artifacts arise from the different precession frequencies of the protons in fat and water (chemical shift artifacts) or from body parts outside the examination field but within the receiver antenna (folding artifacts). Local magnetic field inhomogeneities can also generate artifacts in the 3D image and thereby impair an exact assignment of the gray values. In the following sections, selected segmentation methods for both CT data and MRI data are presented, and the difficulties arising from inaccurate data are shown.
Applied Segmentation Problems: Classical Approaches

Histogram-Based Segmentation

If specific image information is to be extracted from an image, this is called segmentation. A standard segmentation method is the separation of specific image areas on the basis of their different gray value distributions. As an example, a CT cross-sectional image through a wooden pencil (see Fig. 2) is used to demonstrate the procedure. The pencil consists of its highly absorbing lead (visible in the middle of the picture) and the wooden cover. In the picture, the cell structure of the wood is visible. The first step of segmentation by gray values is to generate a histogram of the image (see Fig. 3). In the histogram, the number of pixels with each gray value is plotted against the gray value. If structures with very different gray values are present in the image (in the example, the lead and the surrounding wood), these are visible in the histogram as separate gray value distributions. These gray value distributions can be clearly seen in Fig. 3 (black curve). The larger gray value distribution at smaller gray values belongs to the wood, and the smaller gray value distribution at higher gray values can be assigned to the pencil lead. If these gray value distributions are largely separated in the histogram, the image information to be extracted can be cut out of the image by means of a threshold between the gray value distributions. The extracted image information is called foreground and is assigned the value "white." The rest of the gray values in the image, i.e., all other image information, is called background and is assigned the value "black."
Fig. 2 CT slice through a wooden pencil. The pencil lead in the middle and the surrounding wood are clearly visible
Fig. 3 Histogram of Fig. 2. The black curve describes the gray value distribution in the image. With the help of the intersection of the two Gaussian fits (orange and green), the optimal threshold between the gray value distributions of the pencil lead and the surrounding wood can be found (black vertical dashed line)
In the example shown, the gray value distributions of the pencil lead and the surrounding wood partially overlap. In this case, the optimal threshold can be found by fitting the two gray value distributions with Gaussians. The green and orange dashed curves are the Gaussian fits of the two gray value distributions in the histogram. The threshold is then set at the intersection of the two fits (vertical black dashed line in Fig. 3). Due to the overlap in gray values, it is obvious that during segmentation some image pixels that belong to the pencil lead are assigned to the wood and vice versa. This can be clearly seen in Fig. 4. The spurious white background pixels and black foreground pixels can be corrected by using different types of image filters. However, if the gray value distributions overlap too much in the histogram, the image information of interest can no longer be separated well, or not at all, on the basis of its gray value distribution.
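A minimal sketch of this two-Gaussian thresholding procedure is given below, using NumPy/SciPy. The bimodal coins image from scikit-image merely stands in for the CT slice of the pencil, and the initial guesses are illustrative assumptions that may need tuning for other data.

```python
# Sketch of histogram-based thresholding via two Gaussian fits:
# fit the bimodal histogram, threshold at the intersection point.
import numpy as np
from scipy.optimize import curve_fit, brentq
from skimage import data

def gauss(x, a, mu, sigma):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def two_gauss(x, a1, m1, s1, a2, m2, s2):
    return gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)

image = data.coins()                        # stand-in for the CT slice
counts, edges = np.histogram(image, bins=256, range=(0, 256))
centers = 0.5 * (edges[:-1] + edges[1:])

# Initial guesses (illustrative): dark mode ("wood"), bright mode ("lead")
p0 = [counts.max(), 60, 15, counts.max() / 4, 150, 20]
(a1, m1, s1, a2, m2, s2), _ = curve_fit(two_gauss, centers, counts, p0=p0)

# Optimal threshold: intersection of the two fitted Gaussians between the means
thr = brentq(lambda x: gauss(x, a1, m1, s1) - gauss(x, a2, m2, s2),
             min(m1, m2), max(m1, m2))
foreground = image > thr                    # "white" foreground mask
```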
Examples of Histogram-Based Segmentation Failure

Conventional X-ray CT instruments in laboratories use a white X-ray beam, in contrast to tomography setups at synchrotron sources, where the available intensities allow working with monochromatic beams. White beams consist of photons with highly different energies that result in different absorption coefficients and thus
Fig. 4 Result of the gray value segmentation. Due to the overlap of the gray value distributions of the pencil lead and the wood, some black pixels in the image foreground and some white pixels in the image background can be seen. (Image analysis done with ImageJ Fiji [1])
Fig. 5 Cross section of a 3D tomographic dataset of a complex component consisting of a variety of materials, illustrating in particular beam hardening artifacts due to highly absorbing metal parts within the sample [1]
penetration depths in matter. Therefore, the low-energy photons are absorbed disproportionately, and the mean wavelength of the X-ray spectrum shifts significantly to shorter values while the beam crosses strongly absorbing materials ("beam hardening"). In addition, scattering of photons, which changes their flight direction, is not considered in the reconstruction algorithms. Complex components usually consist of a mixture of weakly and highly absorbing materials. Figure 5 shows X-ray scattering together with very intense
beam hardening artifacts, resulting in gray value variations in regions of homogeneous materials. But even with only two materials present in the 3D reconstruction area (metal/air), the histogram-threshold technique will not properly achieve a segmentation that seems easily visible to the human eye. This is illustrated in Fig. 6, which shows a cross section of a 3D CT image of a gasoline injector. This image exhibits huge variations of the gray values of air and metal within different regions of the image. In addition to the shift of the mean gray value for metal along the vertical direction, there is a non-negligible gray value noise that has to be taken into account. Figure 6 also shows the result of an unsuccessful binarization using only one threshold value. Figure 7 exhibits the non-negligible overlap of the histogram distributions even of neighboring but clearly assignable material and air regions. Thus, an automated segmentation based on even a specifically adjusted threshold value will not be successful for these classes of samples. Another approach to extracting geometry parameters from tomographic 3D data sets is the 3D modeling of surfaces and interfaces within the samples. This shall be demonstrated for injection nozzles for diesel engines in the following paragraphs.
Fig. 6 Cross section of a 3D tomographic dataset of a standard car gasoline injection nozzle. The structure shown is about 4 mm in width. The upper picture illustrates strong scattering artifacts at the upper surface of the injector and a strong variation of the material and air gray values, especially in the upper third of the picture. The lower picture shows the binarization of the upper one based on a single histogram threshold value
Fig. 7 This picture illustrates the histogram overlap of two neighboring regions within the cross section shown in Fig. 6. Each of them can be unambiguously assigned to material only (black) or air only (green), respectively
Model-Based 3D Geometry Extraction

Correct injection nozzle geometry is the decisive factor for the correct fuel spray formation within the combustion chamber and thus for the quality and efficiency of the combustion itself. Even small deviations of the injection hole geometry, and especially of the curvature of the injection hole entrance, significantly influence the spray formation. Relatively small diameters between 100 and 200 μm, a precise cone shape of the injection holes, and entrance curvature radii in the range of approximately 30 μm demand not only highest-resolution X-ray CT but also high photon energies in order to penetrate approximately 2 mm of steel (Fig. 8). Figure 9 exhibits the inner geometry of an injection nozzle. Its center is filled by the needle tip; the linear needle position controls the fuel flow. Circularly positioned within the upper part of the nozzle, the injection holes connect the inner volume with the cylinder outside, where the combustion takes place. Of significant influence on the performance of the nozzle are the conical geometries of the injection holes as well as the roundness of the transition from the injection hole to the blind hole volume. In order to evaluate these based on X-ray CT data, one has to cope with intense beam hardening artifacts that strongly affect the gray values for material and air. Together with small transmission intensities due to the massive steel structure, resulting in relatively high noise within the reconstructed 3D dataset, the extraction of reliable geometry parameters is a tough task.
Fig. 8 Standard diesel injection nozzle with injection holes [2]
The precision of geometry analyses within 3D data sets is limited by the density of the 3D voxel data, by the geometrical resolution of the measurement setup (which is usually worse), and by the information resolution in each voxel (noise). Nevertheless, geometry information can be extracted with a precision below the voxel length for parameterized geometric shapes covering the information of a multitude of single voxels. Of significant influence, however, are the local gray values for material and air. Gray value gradients for material and air, which depend on the specific vertical and horizontal position within the nozzle wall, have to be compensated in some way in order to reduce the degrees of freedom of the local least-squares geometry fits. This compensation has to be chosen carefully, because it is strongly correlated with the conicity of the spray hole to be determined. One possible strategy here is to pin the local gray value for air to the value at the local center of the spray hole. The gray value for material can be adjusted by the local median gray value between two spray holes. In this way, a number of "elliptical wheels" can be fitted within the local cross sections of the spray holes in order to determine the local geometry quantitatively. This is shown in Fig. 9. As illustrated in Fig. 10, this evaluation results in very precise data for the local cross-sectional parameters of the injection holes. The presented data have been measured on a microfocus X-ray CT system with a resulting voxel length of 3.9 μm. Forty wheels, each 81.5 μm in thickness and scanning the local injection hole geometry,
Fig. 9 Model of the inner surface structure (polygon mesh built by VGStudio Max) of a diesel injection nozzle. The figure illustrates the injection hole geometry determination within a 3D X-ray CT dataset. Up to 80 elliptical wheels are fitted along the axis of the injection hole in order to determine the local horizontal and vertical diameters of the injection hole
were able to determine the local diameters with a standard deviation of 0.4 μm. This is one order of magnitude below the nominal resolution of the X-ray machine applied. The error bars within the figure mark the maximum spread over all eight injection holes of the nozzle. The relatively large deviations close to the start and end of the hole are due to centering and symmetry mismatches of the real part. This example is intended to illustrate not only the opportunities but also the problems to be tackled when evaluating 3D imaging data that are affected by artifacts and noise. Thus, for each new problem, a dedicated solution has to be developed in order to parameterize inner and outer surfaces as well as interfaces, making assumptions and using mathematical models to cope with artifacts and gray value gradients [2].
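For illustration, a minimal sketch of one such least-squares fit is given below: an ellipse is fitted to simulated boundary points of a single cross section using scikit-image's EllipseModel. The data, names, and parameters are illustrative assumptions; the original evaluation used dedicated software with the local gray value compensation described above.

```python
# Sketch of sub-voxel geometry extraction by least-squares ellipse
# fitting, in the spirit of the "elliptical wheels" described above.
import numpy as np
from skimage.measure import EllipseModel

# xy: (N, 2) boundary points of one spray-hole cross section,
# here simulated as a noisy ellipse instead of measured CT data.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
xy = np.column_stack((60 + 25 * np.cos(t), 60 + 20 * np.sin(t)))
xy += rng.normal(scale=0.5, size=xy.shape)      # simulated noise

model = EllipseModel()
if model.estimate(xy):
    xc, yc, a, b, theta = model.params
    # Horizontal/vertical diameters of the fitted "wheel"
    print(f"center=({xc:.2f},{yc:.2f}), diameters={2*a:.2f}/{2*b:.2f}")
```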
Limitation of Classical Approaches

As illustrated in Fig. 6, the problem of extracting the inner surface structure of the injection nozzle seems feasible for the experienced engineer. However, it is not only the given image itself that the viewer considers in the evaluation, since seeing is an active process. Rather, the experience and knowledge of vision in general and of the image content in particular play quite an important role. The discipline that tries to enable machines to actually perceive their environment visually, or in other words to gain high-level image understanding, is known as computer vision. Neural networks, and among them in particular the so-called deep neural networks, have proven to be very powerful tools in the domain of computer vision. Therefore, after giving a short overview of the basic structure of so-called convolutional neural
Fig. 10 Local horizontal and vertical diameters of the injection holes over the hole length. Error bars mark the maximum spread of the diameter values over all spray holes of the nozzle [2]
networks in the next section, their use for semantic segmentation of images shall be explained by means of examples.
Image Segmentation: Toward New Frontiers

Motivation

Deep learning has proven to be a game changer with respect to several image processing tasks. In comparison to the classical approach, where the usage of machine learning itself was mainly limited to classification, deep learning now includes the whole task of defining and extracting the features which are eventually used for classification. The definition of features as well as their extraction from the images used to be done manually by the engineer. The application of deep learning can now be understood as a holistic approach: the training of a deep neural network not only includes learning a classifier from training samples but also finding the right kind of features, be it edges of different orientation and size or color features, among others. This even shows some similarity to visual information
processing done in the visual cortex of more highly developed creatures [3, 4]. It is this holistic character which is a unique selling point of deep neural networks. In the following, after a brief introduction to deep neural networks in general, their application for semantic segmentation will be explained in more detail. Semantic segmentation is to be understood as segmentation at the pixel level: each pixel is assigned to one of several classes that are defined in advance.
Convolutional Neural Network: A Summary

Among the deep neural networks, it is the so-called convolutional neural networks (CNNs) which are probably the most frequently used when it comes to image processing. Let us therefore first take a look at the general structure of a CNN. As shown in Fig. 11, a typical CNN consists of different types of layers, among them at least one convolutional layer. Since CNNs belong to the group of feedforward neural networks – the data only flow in one direction, from input to output layer – each layer gets its input data from the preceding layer, performs its operations, and then passes the data on to the following layer. What gives the CNN its name is of course the so-called convolutional layer, an example of which is given in Fig. 12. This layer performs a convolution – usually, it is actually a correlation rather than a convolution from a mathematical point of view – on the preceding layer. The associated filter mask's size is given by its width and height, both of which are defined in advance and belong to the group of so-called
Fig. 11 This is a typical structure of a convolutional neural network being used for image classification. This network is used to classify input images of digits. A score is calculated for each of the ten possible digits from zero to nine. The scores, which are all non-negative and add up to one, can be interpreted as probabilities. The network’s last two layers, however, are fully connected rather than convolutional layers
Fig. 12 Three layers of the network are shown. The neurons of the input layer (image) are connected to the neurons of the first convolutional layer. Here, however, each neuron in the convolutional layer is connected only to some neighboring neurons (receptive field) of the input layer. This significantly reduces the number of weights to be learned. Furthermore, all neurons in the convolutional layer share the same weights. In this way, the convolutional layer can perform convolutional operations on the respective preceding layer. Usually, not only one filter mask is learned in this way, so that the convolutional layer itself actually consists of several so-called activation maps, each of which is determined by a different filter mask (The activation maps are shown here in green and red, respectively)
hyperparameters. Since the filter has to be applied to the whole layer, the filter mask's depth is defined by the number of so-called activation maps within the preceding layer. If the preceding layer represents the input image, the depth corresponds to the number of channels of the image. While the filter mask's values – which are called weights in the context of CNNs – are to be learned by the network itself, they are shared between the individual neurons of the current layer's activation maps. This makes sense because each filter is supposed to react to a particular feature (e.g., edges of a certain orientation) of the image, independent of its actual location. Instead of using only one filter mask per layer, the network actually uses several filter masks at once. The number of filter masks per layer is another hyperparameter. The values calculated by convolving the preceding layer with the current layer's filter masks are further processed by applying a so-called activation function. This function, which has to be nonlinear, is applied after the convolution. There are several possible types of activation functions (among them the so-called rectified linear unit, ReLU), which also have to be chosen in advance; they therefore also belong to the hyperparameters. By linking several convolutional layers, the associated filter masks can become increasingly sensitive to more complex structures (e.g., combining edges from neighboring parts of the image makes it possible to recognize even contours). The more
complex the extracted information becomes, the less relevant its location gets. Therefore, a so-called pooling layer is often used after the convolutional layer. This layer is supposed to reduce the spatial resolution while keeping the abstract information. The pooling layer either chooses the maximum activation from the convolutional layer's output on a certain two-dimensional grid or uses the average value within that grid. Since only one value per grid cell remains after the pooling layer's application, the grid's size determines the factor of downsampling. This is already the dominant structure of a CNN. Depending on the use of the network, there might also be some fully connected layers at the end of the network, which usually act as a final classifier.
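The structure just described can be written down in a few lines. The following minimal sketch, using the Keras API (a choice made here for illustration; any comparable framework would do), builds a small CNN of the type shown in Fig. 11 for 10-class digit classification. All layer sizes are illustrative assumptions, not values taken from the chapter.

```python
# Minimal CNN sketch: convolution -> ReLU -> pooling, repeated,
# followed by fully connected layers acting as the final classifier.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),                     # grayscale digit
    layers.Conv2D(8, kernel_size=3, activation="relu"),  # activation maps
    layers.MaxPooling2D(pool_size=2),                    # downsampling
    layers.Conv2D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),                 # fully connected
    layers.Dense(10, activation="softmax"),              # class scores
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```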
Semantic Segmentation Using CNNs

Let us now look at the SegNet [5, 6]. It belongs to a type of deep neural network which has in the meantime been varied in many ways but has retained an essential structural component. The SegNet is shown in Fig. 13. This structural component is the encoder-decoder architecture. Within the encoder path, the actual input is processed using convolutional and pooling layers accompanied by what is known as batch normalization (BN). BN [7] can be seen as an additional layer which normalizes its input data by recentering and rescaling. This has proven useful with respect to accelerating the learning process and reducing the influence of the initial weights' distribution on the network's final performance. By reducing the size of consecutive layers, as shown in Fig. 13, the network transforms the image into increasingly abstract content-related features, called activation maps, while roughly maintaining their localization. After the most abstract layer in terms of features and the coarsest layer in terms of resolution has been reached, the decoder path follows. Using layers for upsampling, which can be understood as reversing the pooling layers, accompanied by convolutional and BN layers, the abstract features are used to finally generate an output image having the input image's size. The decoder's final output is processed by a multiclass softmax classifier to create class probabilities for each single pixel. During the decoder's upsampling steps, the information of where the max value had formerly
Fig. 13 An illustration of the SegNet architecture [6]. Conv stands for convolutional layer. Batch Normalization and Softmax are described in the text
been taken from is used in combination with a subsequent convolution using trainable filter masks. In this way, not only a simple geometric upscaling takes place, but the information loss caused by downsampling is also partially compensated by trainable filter masks that are supposed to fill the gaps. This is an essential step that, apart from the concrete technical implementation, can also be found in further developed networks. A prominent successor of SegNet, which originated in the field of biomedical imaging, is the so-called U-Net [8]. A variant of this, which the authors used for semantic segmentation of sagittal magnetic resonance (MR) images of the spine, is shown in Fig. 14. The network's name is directly derived from its shape. It is immediately apparent that this network's topology has also been inspired by the encoder-decoder structure. However, the encoder and decoder branches are now coupled by connections on several levels. Instead of an upscaling followed by another filtering, the so-called transposed convolution [10] is used. Furthermore, activation maps from the very same level of the encoder branch are directly concatenated with the activation maps on the decoder branch. In this way, less abstract but more localized features can be considered as well. The last step consists of applying a single filter mask of size 1×1 which, however, as is always the case when applying convolution in this context, extends over the entire depth of the layer. The output layer of the U-Net shown in Fig. 14 does finally
Fig. 14 U-Net for semantic segmentation of sagittal MR images of the human spine. The training and test data were taken from [9]. The operations shown in the legend are explained in the text. The vertically written numbers denote the size of the respective activation map, while the numbers on top denote the absolute number of activation maps per layer. That not all visible vertebral bodies were found is due to the fact that the ground truth data on which the training images are based were recorded only for certain parts of the thoracic spine
Fig. 15 The U-Net's output as shown in Fig. 14 is used to create a mask that is applied to the input image. Based on the masked image, further analysis can then be performed. For example, pathological changes in the vertebral bodies could be diagnosed by further classification
consist of only one layer, the size of which corresponds to that of the input layer (Fig. 15). Since there are no fully connected layers in this network, it is also referred to as a fully convolutional network. Since the invention of the U-Net in 2015, various advancements and modifications have been published. Among them are fully convolutional DenseNets, which have realized the concept of concatenating activation maps even within individual layers, the so-called dense blocks [11]. Apart from modifying layers, there have also been examples where the input's dimensionality has been enhanced. When using data acquired by magnetic resonance imaging (MRI) or computed tomography (CT), there is usually a whole stack of consecutive images, called slices, which inherently encode the three-dimensional structure of the objects under examination. Therefore, there have also been extensions of the U-Net toward the usage of three-dimensional data inputs. One may expect that exploiting the inherent three-dimensional structure will lead to better results in comparison to just using single slices [12, 13]. An example of the application of deep learning for the automatic detection and segmentation of defects in specimens in the context of pulsed thermography is described in [14]. Here, the specimen is first thermally excited, and the temperature distribution is then analyzed using an infrared camera. The images are processed using a Mask R-CNN [15]. This is a neural network based on CNNs which, in addition to a state-of-the-art detection of defects in the form of bounding boxes, also carries out a semantic segmentation within the boxes. It was shown that the quality could be significantly increased compared to previously used methods. It was also shown that the use of artificially generated data – these were generated by means of simulations using FEM – is not only helpful in the case of not having a sufficient amount of measured data, but that the quality of the neural network could even be clearly increased by the additional use of artificially generated data.
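To make the encoder-decoder structure with skip connections described above concrete, the following minimal sketch (again Keras, with illustrative channel counts far smaller than in [8]) shows one encoder level, a transposed-convolution upsampling step, a skip connection by concatenation, and the final 1×1 convolution with per-pixel softmax:

```python
# Minimal U-Net-style encoder-decoder sketch with one skip connection.
from tensorflow import keras
from tensorflow.keras import layers

inp = layers.Input(shape=(128, 128, 1))

# Encoder: convolution + pooling
e1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
p1 = layers.MaxPooling2D(2)(e1)
b = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)

# Decoder: transposed convolution for upsampling, then concatenation
# with the encoder activation maps of the same level (skip connection)
u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(b)
c1 = layers.Concatenate()([u1, e1])
d1 = layers.Conv2D(16, 3, padding="same", activation="relu")(c1)

# 1x1 convolution over the full layer depth: per-pixel class probabilities
out = layers.Conv2D(2, 1, activation="softmax")(d1)

model = keras.Model(inp, out)
model.summary()
```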
Conclusion

Although the use of deep neural networks is already more advanced in areas such as highly automated driving or medical image processing, recent publications also show its increasing importance for the field of nondestructive evaluation. A large number of methods used in the aforementioned domains can be directly applied to several problems in the field of NDE and thus contribute to further automation in evaluation. Deep neural networks must therefore be seen as one central component of NDE 4.0. But where exactly is the knowledge and experience of vision imparted to the deep neural network? The human knowledge and experience is encoded within the data used for training the neural network. Therefore, the creation of the training data, and in particular the annotation of the image data by experienced experts, is of central importance for the network's final performance. Only in this way can the network be enabled to combine features from the image with the contextual knowledge available in the training data and thus finally arrive at results that are comparable to those of human experts. However, it is precisely this approach that poses a particular challenge. In contrast to many traditional methods from the field of machine learning, the interpretability of the neural network's approach to problem solving is significantly more difficult. In particular, the predictability of results and, associated with this, the assessment of the error pattern pose a challenge. This problem is of particular importance whenever the neural network is to be used in the context of safety-related systems; one may think here, for example, of highly automated driving or medical applications. This problem, inherent in deep neural networks in particular, represents a major research focus that can be summarized under the term "explainable AI" (XAI). For deep neural networks used for the classification of images, there is, e.g., the possibility to identify the pixels of an input image which have had a positive influence on the respective classification result. In this way, the decision for a certain class can be made transparent, at least for the experienced viewer [16].
Cross-References

▶ Applied Artificial Intelligence in NDE
▶ NDE in the Automotive Sector
▶ NDE 4.0: Image and Sound Recognition
References

1. Schneider CA, Rasband WS, Eliceiri KW. NIH Image to ImageJ: 25 years of image analysis. Nat Methods. 2012;9(7):671–5. PMID 22930834.
2. Creuz A. 3D-dimensionelles Messen im μm-Bereich: Möglichkeiten und Grenzen der Nanofokus-Tomographie zur korrekten Geometrieermittlung von Dieselinjektoren. Bachelor thesis, Beuth University of Applied Sciences; 2015.
3. Hubel D, Wiesel T. Receptive fields, binocular interaction, and functional architecture in the cat's visual cortex. J Physiol. 1962;160:106–54.
4. LeCun Y, Haffner P, Bottou L, Bengio Y. Object recognition with gradient-based learning. In: Shape, contour and grouping in computer vision. Berlin/Heidelberg: Springer; 1999. p. 319–45.
5. Badrinarayanan V, Handa A, Cipolla R. SegNet: a deep convolutional encoder-decoder architecture for robust semantic pixel-wise labeling. CoRR; 2015.
6. Badrinarayanan V, Kendall A, Cipolla R. SegNet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans Pattern Anal Mach Intell. 2017;39(12):2481–95.
7. Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. In: Proceedings of the 32nd international conference on machine learning, vol. 37; 2015. p. 448–56.
8. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Medical image computing and computer-assisted intervention (MICCAI), vol. 9351. Springer; 2015. p. 234–41.
9. Chu C, Belavy DL, Armbrecht G, Bansmann M, Felsenberg D, Zheng G. Annotated T2-weighted MR images of the lower spine. Zenodo; 2015.
10. Dumoulin V, Visin F. A guide to convolution arithmetic for deep learning. arXiv; 2018.
11. Jegou S, Vazquez D, Romero A, Bengio Y. The one hundred layers tiramisu: fully convolutional DenseNets for semantic segmentation. In: IEEE conference on computer vision and pattern recognition workshops (CVPRW); 2017. p. 1175–83.
12. Cicek O, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Medical image computing and computer-assisted intervention (MICCAI). Springer International Publishing; 2016. p. 424–32.
13. Zhao W, Jiang D, Queralta J, Westerlund T. MSS U-Net: 3D segmentation of kidneys and tumors from CT images with a multi-scale supervised U-Net. Inform Med Unlocked. 2020;19:100357.
14. Fang Q, Ibarra-Castanedo C, Maldague X. Automatic defects segmentation and identification by deep learning algorithm with pulsed thermography: synthetic and experimental data. Big Data Cogn Comput. 2021;5(1):9.
15. He K, Gkioxari G, Dollar P, Girshick R. Mask R-CNN. In: Proceedings of the IEEE international conference on computer vision, Venice; 2017. p. 2961–9.
16. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: visual explanations from deep networks via gradient-based localization. In: 2017 IEEE international conference on computer vision (ICCV); 2017. p. 618–26.
Applied Artificial Intelligence in NDE
18
Ahmad Osman, Yuxia Duan, and Valerie Kaftandjian
Contents

Introduction
Conventional Automated Image Processing Approach
  Preprocessing
  Image Segmentation
  Features Extraction
  Classification
  Application Example for Ultrasound Image Processing of Carbon Fiber Reinforced Polymer Samples
  Application Example for Infrared Thermography Data of Stainless-Steel Sample
Deep Learning Approaches
  Convolutional Neural Network (CNN) and Region-Based CNN
  Long Short-Term Memory-Recurrent Neural Networks LSTM-RNN
  Application Example for LSTM-RNN-Based Defect Classification in Honeycomb Structures Using Infrared Thermography
Performance Measures of a Classifier
Commonly Used Libraries and Frameworks
Sensor Data Fusion
  Application Example for X-Ray Images
Summary and Outlook
References
A. Osman (*)
Fraunhofer IZFP Institute for Nondestructive Testing, Saarbrücken, Germany
Faculty of Engineering, University of Applied Sciences, Saarbrücken, Germany
e-mail: [email protected]
Y. Duan
School of Physics and Electronics, Central South University, Changsha, Hunan, China
e-mail: [email protected]
V. Kaftandjian
Vibrations and Acoustic Laboratory, INSA-Lyon, Villeurbanne Cedex, France
e-mail: [email protected]
© Springer Nature Switzerland AG 2022
N. Meyendorf et al. (eds.), Handbook of Nondestructive Evaluation 4.0, https://doi.org/10.1007/978-3-030-73206-6_49
Abstract
The fourth industrial revolution is driven by digitalization. Artificial intelligence (AI), as a re-emerging technology, has seen increased application since 2012, with the breakthrough achieved through the application of convolutional neural networks to the ImageNet Large Scale Visual Recognition Challenge. Nondestructive evaluation (NDE) is expected to profit from the digitalization technologies and mainly from AI. This has led to the notion of the NDE of the future, or NDE 4.0, which partially stands for the application of artificial intelligence to process and interpret input inspection data. Manual interpretation of NDE data offers no satisfactory solution for future NDE systems, and AI is expected to provide reliable and trusted methods for automated interpretation. Reliability, repeatability, and transparency of the AI algorithms in defect detection and decision-making are the main requirements for a broad integration of this technology in the NDE world. This chapter introduces the classical machine learning approaches, incorporating human expertise in handcrafted processing and feature extraction techniques. Then, the chapter explains data-driven approaches based on deep learning networks which, similar to a human brain, are capable of modeling human thinking and learn to segment and analyze NDE data. Finally, the chapter gives some concrete examples of the application of AI in the NDE 4.0 context, in particular for ultrasound, X-ray, and thermography. Assessment of performance is also presented, as it is a key part of an automated approach. A brief presentation of data fusion is also given, as a means to enhance the reliability of inspection.

Keywords
Image processing · Deep learning · Classifiers · Data fusion · Performance metrics
Introduction

Artificial intelligence (AI) is a topic that is currently attracting a lot of scientific and media attention. The associated uncertainty regarding the possible displacement or transformation of existing jobs is a controversial issue. Prominent scientists and entrepreneurs, such as the recently deceased British theoretical astrophysicist Stephen Hawking [1] or the US-American high-tech founders Elon Musk (PayPal, Tesla, SpaceX) [2] and Sergey Brin (Alphabet and Google, respectively), even warn of the possible risks of AI for mankind. AI pioneer Geoff Hinton's recent statement [3] regarding the modeling capabilities of deep learning networks, "Deep learning is going to be able to do everything," emphasizes this risk. There is, however, at least an awareness that digitalization, as a global phenomenon, will massively change our working and private lives in the coming years. Current topics such as the Internet of Things (IoT) and Industry 4.0 are hardly conceivable without
AI as the core of digital transformation. This applies to the NDE field as well, where the NDE associations worldwide are coming together to define the requirements, expectations, and norms for the next-generation NDE, also called NDE 4.0. This chapter does not deal with the social, political, or economic issues surrounding artificial intelligence. Instead, it is an entry into the technologies that currently play a major role in AI research and applications, namely image processing, classification methods, deep learning (DL) via neural networks for NDE data analysis, and sensorial data fusion.
Firstly, the need for the application of AI in NDE is discussed. NDE sensors and inspection methods (vibrational, acoustic, infrared, X-ray, etc.) are today integral parts of the industrial infrastructure, whether for continuous monitoring of production processes, for scheduled inspection during production, or for the structural health assessment of machines and products. The in-service inspection of components is likewise done using NDE sensors. These usually operate in different and sometimes changing environmental and operational conditions. They can monitor a bridge, an airplane component, or an asynchronous rotating machine; inspect cast or forged parts; etc. It is obvious that this requires the use of rapid, robust, and reliable NDE techniques. While the data acquisition technology is one part of the solution, the other part is the automated, repeatable, explainable, and reliable data interpretation. By interpreting NDE data, the aim is to find possible metrological deviations, to image material properties, or to detect abnormalities or defects imaged by the inspection method. This actually requires specific knowledge and qualifications (NDE levels 1–3) from the human operator, which are sharpened through years of experience, and there is currently no known process for accelerated experience transfer from one human operator to another.
AI is a key technology for providing automated interpretation of complex NDE data. The AI algorithms are (still) mainly designed by humans and learn in supervised form based on manually annotated data (human knowledge). Thus, they actually represent a natural framework for absorbing human knowledge into mathematical models. The generated interpretation algorithms are specific in the sense that they are mainly optimized to process mono-modal input NDE data types (X-ray, for instance), which does not guarantee that their application to other NDE modalities will provide comparable results. Therefore, the variety and physical specificities of the individual NDE techniques and inspected materials directly influence the generalization of the AI models as blueprints.
The next question is whether AI is new to the NDE field. The answer is clearly no. In fact, AI-based automated defect detection and decision-making methods have been used in various nondestructive evaluation and structural health monitoring systems for many years. Most applied data processing and analysis methodologies rely on simple decision algorithms, such as heuristics (filtering, thresholding, etc.) or reference-based image processing [4–6]. They are no match for human experts. They fail to deliver reliable results when it
comes to processing data with an inadequate signal-to-noise ratio and data coming from demanding operational environments. The expert human inspectors are still "more trusted" for this task, as they can adapt to the situation. Indeed, humans think nonlinearly, and thus they can achieve far superior performance compared to simplistic computer-based methods. Consequently, in most NDE inspections, human experts currently still perform the data analysis, even when the data acquisition itself is largely automated. Such human-based evaluation is time consuming, vulnerable to inspector fatigue, sometimes not well documented, in most cases costly, and actually offers no solution for a fully automated monitoring and evaluation system compatible with NDE 4.0.
The question is why progress on the NDE data analysis level is slower compared to other computer vision tasks. Images in the visible spectrum are easy and rather cheap to gather (big data is available), while NDE data are acquired with expensive equipment that needs to be operated by experts. This applies to the data annotation as well: annotation is much simpler and more straightforward for an optical image than for NDE data. The variety of defects and false alarms in shapes and forms can only be partly covered. The expert human inspector performs sophisticated NDE data interpretation. The inspectors sharpen their skills through years of training (learning) and utilize various data characteristics in their judgment (such as signal dynamics and image contrast).
The recent improvements in machine learning (ML) algorithms, and especially in deep learning (DL), and the corresponding computational tools have enabled more complex and powerful models that reach near- and super-human-level performance in tasks like object recognition, driverless cars, robotics, and machine translation. Modern deep architectures have achieved impressive results in many tasks, such as image classification [7] and object detection [8]. The NDE field is expected to make use of these advances in DL techniques in order to solve complex NDE data interpretation tasks. In the current state of the art for using ML/DL in NDE, these methods can be clustered into two distinct approaches. The first methodology, referred to as the conventional image processing approach, is based on the computation of hand-crafted features which are fed to a certain (shallow) classification model for decision making (such as random forests [9] or support vector machines [10, 11]). The main goal here is to develop computationally lightweight models that can be implemented on-the-fly to fully automate the interpretation or to aid the inspector in manual inspection. The second approach is data driven, where DL methods such as CNNs and U-Nets (see [12–17]) are expected to learn to model features from input NDE signals without the need for explicit feature engineering. The rest of this chapter is organized as follows: the conventional image processing approach is theoretically introduced and some industrial examples are presented in section "Conventional Automated Image Processing Approach." Secondly, data-driven learning-based approaches are explained in section "Deep Learning Approaches," where the principles of three deep learning architectures are provided as well as demonstrations on industrial applications shown as examples. Sections
"Performance Measures of a Classifier" and "Commonly Used Libraries and Frameworks" shortly introduce the statistical performance metrics and the currently available frameworks for DL and image processing. The chapter then ends with a short summary and an outlook.
Conventional Automated Image Processing Approach

The conventional image processing approaches include the following steps: preprocessing, image segmentation, feature extraction (for instance, sizing) of suspicious regions, followed by a classification to decide about the type of the regions (whether a defect or not). Note that the classification step is actually a pattern recognition (i.e., machine learning) task that maps a d-dimensional input feature vector x ∈ ℝ^d into a discrete class y. Furthermore, we will consider the classical classification algorithms (such as support vector machines (SVMs)) as part of the conventional image processing approach. The reader might argue about placing some classifiers (such as cascade classifiers or the AdaBoost classifier [18]) into the data-driven approach; however, we prefer to integrate them into the conventional image processing approach, since the feature engineering was still mainly done by a human operator (see Fig. 1). We believe that this way of presenting the data analysis, processing, and interpretation technologies will ease the reading of the chapter.
Fig. 1 Simplified block diagram of an automated defect detection chain using the conventional image processing approach (input image → preprocessing → segmentation → feature extraction → classification → result, with a reference image and a classifier trained on training data as auxiliary inputs)
Preprocessing

Preprocessing aims at preparing the input image for the next steps. This includes different types of operations (a short illustration of two of them follows the list):

• Point-based operations such as histogram equalization for contrast enhancement, or calibration for distortion removal.
• Region-based operations such as filtering for noise reduction and contrast enhancement. Filtering is specific to each NDE method. There are techniques, such as X-ray and infrared thermography, which deliver data with an additive type of noise. Other methods, including radar imagery and ultrasound, suffer from more complicated multiplicative forms of noise. Reducing the noise effects in this case requires more advanced tools.
• Reconstruction of one-dimensional input signals into 2D or 3D images, such as the Synthetic Aperture Focusing Technique for ultrasound or algebraic iterative reconstruction methods for X-ray tomography.
• Image transformation such as principal component analysis of thermal images.
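As a minimal illustration of the first two operation types, the following sketch applies histogram equalization (point-based) and median filtering (region-based) with scikit-image; the sample image merely stands in for NDE data, and all parameters are illustrative assumptions:

```python
# Sketch of two typical preprocessing steps: histogram equalization
# for contrast enhancement and median filtering for noise reduction.
from skimage import data, exposure
from skimage.filters import median
from skimage.morphology import disk

image = data.camera()                         # placeholder input image

equalized = exposure.equalize_hist(image)     # point-based operation
denoised = median(image, footprint=disk(2))   # region-based operation
```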
Image Segmentation

Usually, two different approaches can be followed for NDE image segmentation: the reference approach (an image of the sample without defects is available) and the reference-less approach (no reference image is available). Although the general tendency is toward reference-less procedures, it is well accepted that a reference-based image processing procedure can be very efficient when the anatomy of the inspected sample and the inspection settings remain unchanged. In fact, if the NDE system is working under stable conditions and with a constant testing rate, a reference image can be acquired during routine testing of fault-free parts. By comparison (such as subtraction) of the currently tested image and the reference, the difference information is related to the presence of suspicious regions within the test image. These are then further processed and transformed into a binary image. Note that in cases where no constant reference images can be collected, they can be synthetically generated from the actual test image by using filtering methods such as a large median filter. The reference-less approach directly applies a thresholding method to the preprocessed test image. For both approaches, the output binary image is usually subject to further processing by means of morphological operators in order to remove false alarms and get a clean segmentation. The next processing step is then to assemble the single suspicious pixels into well-defined suspicious regions by means of connectivity-based techniques such as connected components analysis.
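A minimal sketch of the reference-based variant of this procedure might look as follows (illustrative scikit-image calls and threshold values; not the production code of any particular system):

```python
# Sketch of reference-based segmentation: difference image,
# hysteresis thresholding, morphological cleanup, connected components.
import numpy as np
from skimage.filters import apply_hysteresis_threshold
from skimage.morphology import remove_small_objects
from skimage.measure import label

def segment(test_image, reference_image, low=0.05, high=0.15):
    # Difference information indicates suspicious regions
    diff = np.abs(test_image.astype(float) - reference_image.astype(float))
    # Binarization via hysteresis thresholding (thresholds illustrative)
    binary = apply_hysteresis_threshold(diff, low, high)
    # Morphological cleanup to remove small false alarms
    binary = remove_small_objects(binary, min_size=20)
    # Connected components: one integer label per suspicious region
    return label(binary, connectivity=2)
```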
Features Extraction

After segmentation, each detected suspicious region needs to be characterized by a list of features. The computed features are centered on the measurement of the geometric properties and the intensity characteristics of the detected blobs. Consequently, the measured features are divided into two categories: geometric features and intensity characteristics. The geometric features are related to the size, shape, and contour of the suspicious region, while the intensity features are related to its intensity-level distribution.
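As a sketch, such a feature list can be assembled per labeled region, e.g., with scikit-image's regionprops; the chosen properties below are illustrative examples of geometric and intensity features, not the feature set of any specific system:

```python
# Sketch of per-region feature extraction (geometric + intensity).
from skimage.measure import regionprops

def extract_features(labels, intensity_image):
    feats = []
    for r in regionprops(labels, intensity_image=intensity_image):
        feats.append([
            r.area,                              # size
            r.eccentricity,                      # shape
            r.perimeter,                         # contour
            r.mean_intensity,                    # intensity level
            r.max_intensity - r.min_intensity,   # intensity spread
        ])
    return feats
```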
Classification

The main task of the classification is to assign each suspicious region to a certain class. In NDE, the classes are usually limited to two, defective versus non-defective region, without considering the type of defect. However, it might be the case that a more explicit mapping of defects is required, where each suspicious region is classified into a specific class of defects such as cavity, porosity, or artifact. The type of defects is strongly related to the industrial application, manufacturing process, and material type. For example, in a welding process, there are five kinds of defects that can occur in a weld: porosity, slag inclusion, lack of penetration, lack of fusion, and crack. Classifiers for NDE detection tasks are mainly trained in supervised mode, where datasets of labeled suspicious regions are used for training, testing, and validation of the classifier. The cross-validation approach is nowadays established as a standard method for finding the best set of parameters of a classifier and for error estimation. Different types of classifiers have been applied in NDE applications, such as the support vector machine [19], cascade AdaBoost classifiers [20], the fuzzy decision tree [21], artificial neural networks (mainly the multilayer perceptron) [22], random forests [23], etc.
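A minimal sketch of training such a classifier with a cross-validated hyperparameter search, using scikit-learn and synthetic stand-in data, is given below; the grid values are illustrative:

```python
# Sketch: binary defect/no-defect SVM with cross-validated grid search.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for region feature vectors X and expert labels y
X, y = make_classification(n_samples=260, n_features=8, random_state=0)

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.01, 0.1]}
search = GridSearchCV(pipe, param_grid, cv=5)   # 5-fold cross-validation
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

The fitted `search` object can then classify new suspicious regions via `search.predict`; the best parameter set found by cross-validation is reported in `best_params_`.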
Application Example for Ultrasound Image Processing of Carbon Fiber Reinforced Polymer Samples

The presented example is extracted from the authors' work reported in [24]. It only serves as an illustration of the conventional image processing approach. The aim was to automatically process 3D ultrasound volumes of a carbon fiber reinforced polymer (CFRP) specimen for detecting defects within them. The CFRP specimen considered has a thickness of 14 mm and contains 24 rectangular-shaped artificial defects, including delaminations, at different depth positions: at 0.6 mm, at 7.2 mm, and at 13.7 mm. The specimen was scanned with a linear 16-element ultrasonic probe of 5 MHz. Different scans were done from the side without drilled holes with different scanning speeds, sampling frequencies, and different gain settings (Fig. 2).
Fig. 2 CFRP specimen of thickness 14 mm (CFRP-14): defects at 0.6 mm, at 7.2 mm, and at 13.7 mm in depth (z) direction

Fig. 3 Flow chart of the proposed image processing chain
The proposed image processing chain for 3D ultrasonic datasets is composed of a segmentation procedure followed by a classification procedure, as presented in Fig. 3. The input of the chain is a 3D ultrasound volume produced by the acquisition system. At first, the segmentation procedure is run. The objective of this procedure is to locate suspicious regions and characterize them by a list of features. A major difficulty in ultrasound image segmentation is the presence of speckle noise distributed all over the volume. The compromise is to detect fine defects without detecting noise. Another difficulty is the detection of defects located near the entrance and backwall (or end) layers. Strong echoes reflected from the surface and the backwall of the inspected specimen can hide information obtained from reflections by nearby discontinuities. Thus, the number of suspicious regions after segmentation can be high. Therefore, an SVM classifier with a Gaussian-based kernel is used to distinguish the appropriate type (true defect or false alarm) of each region. The output is a list of defects where each defect is described by geometrical and intensity-based features. First, in the segmentation, the speckle noise, a specific pattern that occurs in ultrasound NDE images, is reduced by means of a nonlocal-means filter, which
Fig. 4 Comparison between the input volume slice before filtering (left image) and after filtering (right image) using a nonlocal-means filter. Notice the granular form of the speckle noise in the input image
Fig. 5 3D view of labeled suspicious regions (label = intensity value) where defects and BWE artifacts can be seen
applies a Pearson distance as similarity measure. An illustrative result before and after the filtering is provided in Fig. 4. After the data enhancement step, the inner volume is localized by automatically detecting where the entrance and backwall slices lie in the enhanced volume; then, binarization via hysteresis thresholding is applied to set suspicious regions as foreground. Illustrative results for two inner volumes, after application of the connected components step, are shown in Fig. 5. Here it can be noticed that not all the suspicious regions
correspond to real defects. Artifacts such as scattering artifacts, artifacts caused by the backwall echoes (BWE), and reverberations of the defects are set as foreground and could be misclassified as defects. Therefore, it is necessary to have a robust classification procedure to distinguish the correct type of each region. The intensity- and geometry-based features are computed for each region as the last step in the segmentation procedure. After systematically segmenting the available volumes, a total of 419 suspicious regions or blobs were detected by the segmentation procedure, among which 91 were real defects and 328 were false alarms, as decided by an expert operator. These results were divided into a learning dataset (48 true defects and 212 false alarms) to train the SVM classifier and a testing dataset (43 true defects and 164 false alarms) to examine its performance. The obtained classification results on the testing dataset were as follows: 97.6% of the defects were correctly classified as being defects, and 95.7% of the false alarms were correctly classified as false alarms.
Application Example for Infrared Thermography Data of Stainless-Steel Sample

The presented example is extracted from the authors' work reported in [25]. It is an example of using a neural network in infrared thermography to classify different foreign matter invasions in a homogeneous material. A stainless-steel sample with flat-bottomed holes (FBH) was fabricated to simulate air, oil, and water ingress. Visible sample images and a schematic indicating the sample fillers appear in Fig. 6. Different colors indicate the different substances in the holes. Features relating to the defect types are extracted from the thermal sequences. In this study, six coefficients obtained from thermographic signal reconstruction (TSR) [26], a common data processing method in pulsed infrared thermography, are used as input features. The input layer consists simply of the feature vectors. The next layers are called hidden layers. The number of hidden layers and of hidden neurons in
Fig. 6 Visible sample images: (a) inspected side and (b) rear side; (c) schematic indicating sample fillers [25]
Fig. 7 Defect classification process framework [25]
each hidden layer is chosen on a trial-and-error basis. The k-fold cross-validation scheme [27] was used to adjust and optimize the network settings. The final layer is the output layer, where there is one node for each class. In the training phase, the correct value for each input is known, which is termed supervised training. The calculated output value is compared with the correct value to compute an error term for each node, which is then fed back through the network. These error terms are then used to adjust the weights in the hidden layers. After repeating this process for a sufficient number of training cycles, the output values will, hopefully, be closer to the correct values. The defect classification process framework is shown in Fig. 7. Four optical pulsed thermography experiments were carried out on the stainless-steel sample. In the first three, the four FBH were filled with the same substance: air, water, and oil, respectively. In the fourth experiment, the holes were filled with different substances. The thermal sequences obtained from the first three experiments were taken as training data. The data obtained from the fourth experiment were used for testing. The defect classification results of NN Model 6-30-15-4 are shown in Fig. 8.
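A minimal sketch of a network with this 6-30-15-4 topology, using scikit-learn's MLPClassifier and synthetic stand-in data, is given below. The exact class set and all parameters are assumptions for illustration; the original work in [25] used its own implementation and measured TSR features.

```python
# Sketch of the 6-30-15-4 topology: 6 TSR coefficients as inputs,
# two hidden layers (30 and 15 neurons), 4 output classes.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))       # stand-in for 6 TSR coefficients
y = rng.integers(0, 4, size=400)    # stand-in for the 4 class labels

clf = MLPClassifier(hidden_layer_sizes=(30, 15), max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)   # k-fold cross-validation
print(scores.mean())
```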
Deep Learning Approaches

This approach is based on the modeling capabilities of deep learning networks in their different architectural forms. Marginally processed raw data represent the input to a neural network. Instead of relying on the hand-crafted engineering of features, in the DL approach the feature modeling is done by the DL model itself, where a stack of convolutional layers learns the features present in the input data (see Fig. 9). These features are then the inputs of a classifier, which maps them into one class or another. In the following, DL architectures based on the analysis of images and signal time series are briefly presented.
Convolutional Neural Network (CNN) and Region-Based CNN

The CNN is a special type of layered feed-forward deep learning network architecture, inspired by the biological visual cognition mechanism and designed for processing signals in the form of multiple arrays, such as visual and audio signals [28].
Fig. 8 Defect classification results for NN Model 6-30-15-4: (a) defect map; (b) normalized confusion matrix (coefficients from TSR used as features) [25]
CNN has three important underlying ideas: local area perception, weight sharing, and spatial or temporal sampling. Local area perception can extract local features of the original data, which makes the CNN model translation invariant and greatly improves its generalization ability. The weight sharing strategy reduces the number of parameters that need to be trained, improving learning efficiency. The purpose of sampling is to blur the specific location of features and improve the recognition ability of the model. CNN uses forward propagation to calculate the output value, while back-propagation adjusts the weights and biases. The hierarchical structure of a convolutional neural network is mainly composed of an input layer, convolutional layers, activation layers, pooling layers, and fully connected layers. The convolutional layer mainly realizes the extraction of local features of the original input, and each convolution operation can extract higher-level features [28]. The activation layer mainly realizes the nonlinear mapping of the output result
Fig. 9 Conventional image processing chain vs deep learning. In DL the feature learning and prediction are modeled as networks. Instead of fine-tuning features, network architecture and parameters are fine-tuned. Prediction covers the classification or regression. The output of the prediction could be a discrete or continuous value
of the convolutional layer through the activation function, which makes the expression ability of the neural network more powerful. The pooling layer mainly realizes the dimensional compression of the feature map, simplifies the network calculation complexity, reduces overfitting, and improves the fault tolerance of the model. As for the fully connected layer, it mainly maps the "high-level features" learned by the network to the sample label space, and the features of the last layer are used to perform classification and regression tasks through the fully connected layer. In Reference [29], a CNN model is used for the detection of flaws in ultrasonic tomography of concrete. The input for the CNN was the normalized image of the CT scan. After the input image has been processed by all convolutional and max pooling layers, the flattening operation is applied, and the resulting vector of features is processed by a fully connected layer with a rectified linear unit (ReLU) function. Lastly, the input is classified into two categories, "specimen without a defect" and "specimen with a defect," by the last layer, which is fully connected and has a sigmoid function. The results show that the model training accuracy is 98%, while the maximal validation accuracy is 97%. After analyzing new B-scans with the network after fine-tuning, the generalization accuracy of the final network was close to 99%. One issue with the CNN is that the image has to be cropped into several regions, which are fed into the network. Some approaches use a sliding window to scan the whole image, such as [30], while others use a region-based CNN (R-CNN), in which a selective search (SS) process generates several region proposals to be fed into the CNN to obtain feature maps. The processed feature maps are then fed into an SVM to accomplish classification. The classified images are used for bounding box regression, which generates the coordinates of each defect [31]; see Figs. 10 and 11. Next, we discuss the region-based CNN (R-CNN) algorithm. R-CNN was proposed for object detection: that is, to accurately find the location of an object in a given image and mark the category of the object [32].
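A minimal Keras sketch of a binary flaw classifier of this kind is shown below; the input size and layer widths are illustrative assumptions, not the architecture published in [29]:

```python
# Sketch: conv/max-pool stack, flatten, ReLU-activated dense layer,
# sigmoid output for "defect" vs "no defect".
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),              # normalized tomography image
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),          # fully connected + ReLU
    layers.Dense(1, activation="sigmoid"),        # defect / no defect
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```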
Fig. 10 Exemplary architecture of the CNN model: the two traditional stages (feature extraction + classification) are both included in the CNN [31]
Fig. 11 Architecture of “region-based CNN” [31]
The core idea of R-CNN is to adopt a large CNN to compute features for each region proposal extracted by SS; these features are then used to classify the proposal with class-specific linear SVMs. In the field of NDT, R-CNN can be used to locate and classify defects in images. While R-CNN obtains good performance, it has obvious weaknesses: its independently trained multistage training process and the features extracted repeatedly for each proposal incur a large computational expense, resulting in a very slow detection speed. To address the shortcomings of R-CNN, Fast R-CNN [8] and Faster R-CNN [33] were developed. The remaining speed bottleneck of Fast R-CNN lies in the region proposal step, so Faster R-CNN was proposed, which realizes the region proposal with a CNN rather than a sliding window or selective search. Faster-RCNN has three CNN sub-networks: a base network, which generates the feature map; a region proposal network (RPN), which generates class-agnostic object proposals based on the feature map; and a detection network, which refines the locations and classifies the proposals, also based on the feature map. The most prominent feature of Faster-RCNN is the RPN, which shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals [33]. The RPN is a fully convolutional network that simultaneously predicts the target boundaries and object scores at each position on the feature map and provides several optimized proposals for the detection network. The Faster-RCNN structure is shown in Fig. 12. In [34], a deep learning model based on Faster-RCNN and transfer learning is proposed to detect hotspots in infrared images of photovoltaic modules.
Fig. 12 The structure of Faster-RCNN [34]
The results show that the deep learning algorithm has higher robustness and can distinguish reflective regions from hotspots accurately. For the image data obtained by X-ray, there have been many related reports using CNN and its derivative networks to classify and locate defects. Examples of deep learning applied to X-ray inspection are briefly presented next. The article [30] gives a good overview of the potential of deep learning to replace the complete image processing chain for detecting casting defects in X-ray inspection. The approach is compared to classical image processing, and a specific 30-layer CNN called Xnet II is developed. A sliding window is used to scan the complete data set, and each window is then classified by the network. In [35], several deep architectures were developed for 3D CT data in a voxel-wise manner, that is, each voxel is classified by the model as a defective or normal voxel. Due to the huge amount of data necessary for training, simulated data containing different levels of noise and artifacts are used [36]. It is shown that promising results are obtained without the need for hand-labeled data for ground truth. Different state-of-the-art CNN models were investigated in [37] for recognizing and localizing casting defects in 2D X-ray images. Here the term "defect localization" means that the goal is to place a bounding box around each defect, instead of a segmentation approach where each pixel is classified. The two-step approach is done by two different networks. First, the convolutional layers of one network are
used as a feature extractor, and then the outputs are used as inputs to another network in order to localize the defect in the X-ray image, that is, place the bounding box around the defects. Several combinations of networks are compared on the GDXray database [38], with mean average precision ranging from 0.46 to 0.921 for the best one. It is worth noting that the networks were not trained solely on the GDXray database because this generates overfitting. Instead, training was first done on the ImageNet dataset, and then fine-tuning was done on the GDXray database. In [39], a CNN-based deep learning approach was developed for detecting defects in radiographic images of aluminum castings. The approach consists of handling several sequences of radiographs acquired at different angles. The radiographs are first processed independently to extract and label suspicious areas, that is, pixels with grayscale values different from their surroundings, using a specific filter. Then, a second phase is devoted to tracking the defect in consecutive images with the help of a CNN, which takes the suspicious pixels as inputs and outputs a feature vector. If the latter is similar to some extent across the sequence of frames acquired at different angles, then the suspicious zone is validated as a defect; otherwise, it is rejected as a false alarm. A dataset of 400 images was used to train the CNN, and the methodology showed a very small false positive rate of 1.1%. In [40], a CNN architecture was compared to various machine learning (ML) algorithms for classification of wood images into four different classes. Texture descriptors were used as input to the ML approaches, while the CNN works with color images first transformed to grayscale and normalized. In this particular example, the texture-based ML approach proved the most efficient, because the classification task was perfectly "solved" using texture descriptors. For more complex tasks, where it is not obvious which features could best describe the classification problem, a CNN would be more adequate.
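As a hedged sketch of this pretrain-and-fine-tune pattern for bounding-box defect localization (not the specific setup of [37]; the class count is an assumption), torchvision's Faster R-CNN pretrained on natural images can have its box predictor replaced for a small defect dataset:

```python
# Sketch: fine-tuning a pretrained Faster R-CNN for defect localization.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2  # assumption: background + "defect"
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
# The model is then trained on (image, {"boxes", "labels"}) pairs
# from the defect dataset; only a small dataset is needed for fine-tuning.
```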
Long Short-Term Memory-Recurrent Neural Networks (LSTM-RNN)

The recurrent neural network (RNN) is a state-of-the-art deep learning architecture specifically designed for time-series forecasting. It is based on the view that "human minds are based on past experience and memory." Different from normal neural network models, the RNN model is constructed based on past information and is updated by new information constantly. However, while learning long temporal sequences, the RNN is prone to gradient explosion or gradient vanishing during training [41], which is not suitable for practical applications. Hochreiter and Schmidhuber [42] proposed long short-term memory (LSTM) to tackle the problem of long-term dependence by creating a long-term memory unit, which slows the decline of the model memory. The hierarchical structure of the LSTM-RNN is shown in Fig. 13. Weight connections exist not only between layers, but also between neurons in each LSTM layer. In sequence models, the LSTM is an excellent time-series network that can handle continuous information. The core idea behind the LSTM architecture lies in the operation of the long-term memory state. As shown in Fig. 14, a memory unit is composed of a forget gate, an input
Fig. 13 The hierarchical structure of LSTM-RNN
Fig. 14 The internal structure of the LSTM unit; Ot, ft, it, and Ct are the output gate, forget gate, input gate, and long-term memory state, respectively
Fig. 15 Test sample photographs and schematic of defects [20]
gate, an output gate, and memory cells. The forget, input, and output gates are the keys to controlling the information flow. The forget gate controls the long-term memory unit to store global memory information, the input gate prevents current irrelevant information from entering the long-term memory unit, and the output gate controls the impact of the long-term memory unit on the current output.
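For reference, one common formulation of these gate updates (σ denotes the sigmoid function, ⊙ the element-wise product; W, U, and b are learned weights and biases; notation as in Fig. 14):

f_t = σ(W_f x_t + U_f h_{t−1} + b_f)   (forget gate)
i_t = σ(W_i x_t + U_i h_{t−1} + b_i)   (input gate)
o_t = σ(W_o x_t + U_o h_{t−1} + b_o)   (output gate)
C_t = f_t ⊙ C_{t−1} + i_t ⊙ tanh(W_C x_t + U_C h_{t−1} + b_C)   (long-term memory state)
h_t = o_t ⊙ tanh(C_t)   (output)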
Application Example for LSTM-RNN-Based Defect Classification in Honeycomb Structures Using Infrared Thermography

The presented example is extracted from the authors' work reported in [20]. In this study, the temperature change over time was extracted as sample data to train an LSTM-RNN model that can automatically classify common defects occurring in honeycomb materials. The specimen size was 280 mm × 210 mm, the GFRP skin thickness was 1 mm, and the honeycomb cell length was 3 mm with a height of 12 mm. There are four types of defects in the sample: debonding, adhesive pooling, water, and hydraulic oil ingress, respectively (see Fig. 15). The optical pulsed-thermography setup included two xenon flashes with a maximum of 4.8 kJ of light energy to provide instantaneous heat sources. An infrared camera was used to monitor and record the infrared thermal image sequences of the sample in real time. A 3D dataset, that is, the temperature distribution of the sample surface at different times, was obtained. The training of the LSTM model in this study belonged to supervised learning, which requires the input of training data and its corresponding labels at the same time. The pixel cooling processes, that is, temperature time series, were extracted and labeled as sample data to train the LSTM-RNN model (see Fig. 16). The LSTM-RNN model has one LSTM hidden layer with 64 hidden units. The output layer has five neurons to classify the four types of defects and sound areas. Thermal contrasts over time in the cooling process were used as input to train and test the LSTM-RNN model. The sensitivity was greater than 90% for both water and hydraulic oil ingress on the testing data of an independent experiment. The classification sensitivity was greater than 70% for the debonding and adhesive pooling defects, which had a cooling process similar to sound areas (see Fig. 17).
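A minimal Keras sketch of such a model (one 64-unit LSTM layer, five softmax outputs) follows; the sequence length T and the training arrays are placeholders, not the data of [20]:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

T = 200                                 # placeholder: frames in the cooling sequence
X = np.random.rand(5000, T, 1)          # placeholder: per-pixel thermal contrast series
y = np.random.randint(0, 5, 5000)       # 4 defect types + sound area

model = keras.Sequential([
    layers.Input(shape=(T, 1)),
    layers.LSTM(64),                              # one LSTM hidden layer, 64 units
    layers.Dense(5, activation="softmax"),        # five output classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, validation_split=0.2)
```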
Fig. 16 The infrared thermal image sequences segmentation and defect classification process
Fig. 17 Defect classification results and the corresponding normalized confusion matrix [20]
Performance Measures of a Classifier

Selecting the evaluation metric is a key ingredient of measuring the effectiveness of a classifier. Let us consider a binary classification problem, where the system predicts whether a specimen is diagnosed as defective or not. The performance of three exemplary models will be evaluated on a given test set, where only 2 out of 10 specimens are in reality (ground truth) defective. The predictions of each exemplary fictitious model are shown in Table 1. A specimen which is defective (positive) and is classified as defective is referred to as True Positive (TP).
Table 1 Ground truth and results of three exemplary classifiers

Specimen       1    2    3    4    5    6       7       8       9       10
Ground truth   OK   OK   OK   OK   OK   OK      OK      OK      Not OK  Not OK
Classifier 1   OK   OK   OK   OK   OK   Not OK  Not OK  Not OK  Not OK  Not OK
Classifier 2   OK   OK   OK   OK   OK   OK      OK      Not OK  Not OK  Not OK
Classifier 3   OK   OK   OK   OK   OK   OK      OK      OK      OK      OK
A specimen which is not defective (negative) and is classified as not defective is referred to as True Negative (TN). A specimen which is defective (positive) and classified as not defective is referred to as False Negative (FN). A specimen which is not defective (negative) and classified as defective is referred to as False Positive (FP). The accuracy of a classifier, that is, of a model, is defined as:

Accuracy = (TP + TN) / (TP + FP + FN + TN)
For the abovementioned example, the accuracy of classifier 1 is 0.7, the accuracy of classifier 2 is 0.9, and that of classifier 3 is 0.8. Thus, after evaluating the accuracy of each model, the best performance is achieved by model 2. Usually, in the case of imbalanced datasets, accuracy is no longer the proper metric because the classifier tends to be more biased toward the majority class. Another problem with accuracy is that it does not give much detail about the performance of the classifier (i.e., it does not directly tell how many specimens were classified as defective while being OK, and vice versa). A confusion matrix gives more insight not only into the performance of the classifier but also into the type of errors that are being made. As the name suggests, the confusion matrix presents the performance of the classifier in the form of a table. The columns correspond to the ground truth and the rows to the model predictions.
Confusion matrix

                       Ground truth
                       Positive    Negative
Predicted   Positive   TP          FP
            Negative   FN          TN
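As a quick sketch (the 0/1 encoding, with 0 = OK and 1 = Not OK, is an assumption), the confusion matrices and accuracies of the three classifiers in Table 1 can be reproduced with scikit-learn:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [0]*8 + [1]*2                       # ground truth: specimens 9-10 defective
classifiers = {
    "classifier 1": [0]*5 + [1]*3 + [1]*2,   # accuracy 0.7
    "classifier 2": [0]*7 + [1]*1 + [1]*2,   # accuracy 0.9
    "classifier 3": [0]*10,                  # accuracy 0.8
}
for name, y_pred in classifiers.items():
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(name, accuracy_score(y_true, y_pred), dict(TP=tp, TN=tn, FP=fp, FN=fn))
```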
The confusion matrix is used to derive other evaluation metrics such as Sensitivity (Recall), Specificity, Precision, F1-Score, and the Receiver Operating Characteristic (ROC) Curve. Sensitivity (Recall) and specificity are two metrics used to assess the class-specific performance of a classifier.
Sensitivity = TP / (TP + FN)

Specificity = TN / (TN + FP)
Sensitivity is the proportion of the positive instances that are correctly classified, while specificity is the proportion of the negative instances that are correctly classified. In our example, the sensitivity will show us how good our model was at correctly predicting all specimens which are in reality defective, while the specificity will show how good the model was at correctly predicting all non-defective specimens. Precision is another metric that describes which proportion of the predicted positives is truly positive: out of all specimens that the model diagnosed as defective, how many of them actually are defective.

Precision = TP / (TP + FP)
A metric that combines both precision and recall in a single formula is the F1-score. It can be interpreted as the harmonic mean of the recall and precision values, where the F1-score reaches its best value at 1 and its worst value at 0. A good F1-score means that we have a low number of False Positives and False Negatives.

F1 = 2 · (Precision × Recall) / (Precision + Recall)
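A hedged sketch computing these derived metrics for classifier 2 of Table 1 (scikit-learn has no built-in specificity function, so it is derived from the confusion matrix):

```python
from sklearn.metrics import precision_score, recall_score, f1_score, confusion_matrix

y_true = [0]*8 + [1]*2
y_pred = [0]*7 + [1] + [1]*2                 # classifier 2 from Table 1
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity/recall:", recall_score(y_true, y_pred))   # TP/(TP+FN) = 1.0
print("specificity:", tn / (tn + fp))                         # TN/(TN+FP) = 0.875
print("precision:", precision_score(y_true, y_pred))          # TP/(TP+FP) ≈ 0.667
print("F1-score:", f1_score(y_true, y_pred))                  # = 0.8
```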
The receiver operating characteristics (ROC) curve is a graphic representation of the false-positive rate (1 − specificity) versus the true-positive rate (sensitivity) for all possible decision thresholds (Fig. 18). The overall performance of the classifier summarized over all possible thresholds is given by the area under the ROC curve (AUC). Since ROC curves consider all possible thresholds, they are quite helpful in comparing different classifiers. The choice of the threshold is task-related. For instance, in a cancer prognosis system, the threshold is preferably chosen to be smaller than 0.5. An ideal classifier will have an AUC of 1, which means the ROC curve will touch the top left corner. In addition to being a generally useful performance graphing method, ROC curves have properties that make them particularly useful for domains with skewed class distributions and irregular classification error costs. These characteristics have become increasingly important as research continues into the areas of cost-sensitive learning and learning in the presence of unbalanced classes.
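As an illustration (the scores below are invented for the sketch, not taken from the example above), an ROC curve and its AUC can be obtained from continuous classifier scores as follows:

```python
from sklearn.metrics import roc_curve, roc_auc_score

y_true  = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_score = [0.1, 0.2, 0.15, 0.3, 0.25, 0.4, 0.35, 0.6, 0.8, 0.9]  # P(defective)
fpr, tpr, thresholds = roc_curve(y_true, y_score)   # one point per threshold
print("AUC:", roc_auc_score(y_true, y_score))       # 1.0 for this separable toy case
```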
Fig. 18 Example of an ROC curve: each point refers to a specific decision threshold above which all samples are considered as defective
Moreover, regarding the accuracy of an object detection algorithm, the primary metrics used to evaluate the performance of the system are Intersection over Union (IoU) and mean average precision (mAP). IoU can be used with any object detection algorithm that predicts bounding boxes as output and has ground-truth bounding boxes. IoU is the ratio of the overlapped area between the predicted bounding box and the ground-truth bounding box to the total area enclosed by both boxes, that is, their union (Fig. 19). The higher the IoU, the better the fit. To better visualize the performance of an object detection or classification algorithm, a precision-recall curve can, similarly to the ROC curve, be drawn for each threshold value. If we want to represent the performance of the detector using only one metric, the area under this curve is similarly calculated; this metric is referred to as average precision (AP). The mean average precision (mAP) is the average of all APs: we compute the AP for each class and image and average them, giving a single metric.
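A minimal IoU implementation for axis-aligned boxes, following the intersection/union definition above (the corner-coordinate format is an assumed convention):

```python
def iou(box_a, box_b):
    """Intersection over Union for boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)     # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)          # intersection / union

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175 ≈ 0.143
```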
Commonly Used Libraries and Frameworks

For deep learning, a number of different frameworks are available. Table 2 shows a comparison of the most popular frameworks, of which Pytorch and Tensorflow are the most actively developed. For conventional image processing, the following tools are available: openCV [43], MATLAB, Scikit-Image [44], and ImageJ [45]. Regarding conventional machine learning tools, one can refer to: Scikit-Learn [46], libsvm [47], MATLAB, and openCV.
Fig. 19 Example of Intersection over Union for the defect detection in a metal sheet using eddy current technique
Note that there are also cloud-based AI services for developing AI solutions, among which are Google Cloud, the Amazon Machine Learning platform, Microsoft Azure, Rainbird, etc.
Sensor Data Fusion

Any singular NDT method suffers from physical drawbacks that limit its applicability and its capability to deliver all necessary information required for a given component. This could be solved by using further NDT methods in parallel, or at a second stage once the primary NDT method delivers first results showing potential abnormalities in inspected regions. The complementary method(s) could deliver more details about specific regions of interest, or about inspected regions that, for instance, were not reliably covered using a single method. At present, such a multimodal NDT approach is not chosen by industry due to the increase in complexity it implies, as well as cost issues. An important advance can be procured via the application of innovative machine learning and data fusion methods to correlate and fuse the results of several inspection methods, in order to simplify the global multimodal inspection. The higher equipment cost can be acceptable if it is offset by improving the quality of production processes and products via reliable inspection technologies. Several definitions of data or, more generally, "information" fusion exist; that of [49] is the following: "Information fusion consists in combining information from various sources in order to improve decision reliability. The type of information can be directly data (signals, pixels, segmented objects. . .) or knowledge elements (expert rules. . .)." A common implicit rule for data fusion is that the sources should be different, in such a way that independent information is combined. However, some combination rules have been defined for the case of non-independence. A data fusion problem can be summarized as follows: a given element x should be assigned to a hypothesis Hi ∈ Θ = {H1, H2, . . ., HN}, from the information given by various sources Sj (j = 1, . . ., S). This information is modeled by a number denoted Mj(x ∈ Hi). As an example, Mj(x ∈ Hi) can be interpreted as a conditional
Table 2 Comparison of popular deep learning frameworks [48]

Name                                Initial release  License      Language interface                                  Platform               CUDA support  CNN support  Recurrent nets support
Pytorch                             2016             BSD          Python                                              Windows, Linux, macOS  Yes           Yes          Yes
Tensorflow                          2015             Apache 2.0   Python, C++, Java, Go, R, JavaScript, Julia, Swift  Windows, Linux, macOS  Yes           Yes          Yes
Caffe                               2013             BSD          C++                                                 Windows, Linux, macOS  Yes           Yes          Yes
Keras                               2015             MIT license  Python, R                                           Windows, Linux, macOS  Yes           Yes          Yes
Microsoft cognitive toolkit (CNTK)  2016             MIT license  Python, C++, command line                           Windows, Linux         Yes           Yes          Yes
MATLAB deep learning toolbox        –                Proprietary  MATLAB                                              Windows, Linux, macOS  Yes           Yes          Yes
Fig. 20 Illustration of data fusion steps, from multimodal acquisition to decision
probability (the probability to obtain the measurement given by Sj for the object x when Hi is true, or more generally the degree of belief of source Sj in the fact that x belongs to Hi). The principal fusion steps (illustrated in Fig. 20) are the following [49]:
• Modeling of information and related imperfections (uncertainty, accuracy)
• Estimation of the Mj(x ∈ Hi) values
• Combination rule on the different Mj(x ∈ Hi)
• Decision rule
It is worth noting that both uncertainty and imprecision have specific definitions in the context of data fusion, which can differ from their usual meaning in NDT or metrology. Uncertainty is related to information truth (such as « tomorrow it will rain »), while imprecision relates to the information content (« this object is 10 ± 0.1 mm long »), which is actually the physical measurement uncertainty [50]. It is important to distinguish both types of imperfection when modeling the degree of belief Mj(x ∈ Hi). The Bayesian model of probability theory represents a classical framework for reasoning with uncertainty. In this framework, the possible hypotheses have to be complementary. For example, a Bayesian framework would model the color of an image as a probability distribution over (red, green, blue), assigning one number to each color. Nevertheless, the main disadvantage of this model is its inability to represent ignorance: it cannot accommodate, for instance, a witness who reports "I saw the image; it was either blue or green." Accordingly, the last decades have seen the appearance of other theories, such as the possibility theory proposed by Zadeh [51] and the theory of evidence, initiated by Dempster [52] and further developed by Shafer [53] and then Smets [54–56] under
the name transferable belief model (TBM). The Dempster-Shafer (DS) evidence theory forms a theoretical framework for uncertain reasoning and has the particular advantage of enabling the handling of uncertain, imprecise, and incomplete information. It overcomes the limitations of conventional probability theory. Consequently, the Dempster-Shafer theory of belief functions is now well established as a formalism for reasoning and making decisions with uncertainty [57]. It also provides rules for dealing with possibly conflicting pieces of information. In the abovementioned example, the evidence theory of Dempster–Shafer would assign numbers to each of (red, green, blue, (red or green), (red or blue), (green or blue), (red or green or blue)), which do not have to cohere; for example, M(red) + M(green) ≠ M(red or green). In this theory, a set of possible answers to some question is defined as the elements of a frame of discernment Θ = {H1, H2, . . ., HN}, and a so-called mass function m (also called basic belief assignment, bba) represents the piece of evidence pertaining to that question (this is the Mj(x ∈ Hi) defined above). Particularly in the NDT field, the question can be whether the component is defective or not, or it can refer to the type of a detected indication (crack, porosity, artifact, etc.). Those Mj(x ∈ Hi) are then fused using a generic operator called Dempster's rule of combination (also known as the orthogonal rule). A recent review of the relationships between DS theory and recent machine learning approaches such as logistic regression and neural networks has been done in [58]. It is shown that DS theory has been widely used for supervised classification in the past 20 years. The first application is to consider several classifiers, most of them based on statistical pattern recognition, and then fuse their outputs directly, considering them as belief functions [59–63]. This is called classifier fusion. A variant of this is to convert the decisions of statistical classifiers into belief functions [64, 65]. However, the most promising approach, as argued in [58], is to completely design evidential classifiers by considering the principles of DS theory directly inside the classifier. In other words, an evidential classifier translates the evidence of each input feature vector into elementary mass functions, then combines them by Dempster's rule or another rule [66]. The decision is then based on more information than with a conventional classifier, because DS theory allows distinguishing a lack of information from conflicting information. A generic data fusion framework for NDT based on DS theory consists of the following four steps:
1. Frame of discernment: In DS theory, a fixed set of N mutually exclusive and exhaustive elements, called the frame of discernment, is defined and symbolized by Θ = {H1, H2, . . ., HN}. Θ includes all answers (i.e., propositions or hypotheses) for which the information sources can provide evidence, which is expressed by a so-called mass value.
2. Mass functions: The information sources attribute degrees of belief or mass values to subsets of the frame of discernment: Hi ∈ 2^Θ, with 0 ≤ m(Hi) ≤ 1 and Σ_{Hi ∈ 2^Θ} m(Hi) = 1. Here, Hi designates a single hypothesis or a union of
simple hypotheses (composite hypotheses). The important point to notice is that an information source assigns mass values only to those hypotheses for which it has direct evidence. That is, if an information source cannot distinguish between two propositions Hi and Hj, it assigns a mass value to the set including both propositions (Hi ∪ Hj). This point is one of the main advantages of DS theory compared to Bayesian theory, because it allows reflecting the hesitation between two hypotheses.
3. Mass combination and decision: Once each source of information assigns mass values to a set of hypotheses, the mass values can be combined using different rules. Dempster's rule is defined as follows: considering two mass distributions m1 and m2 from two different information sources, defined over hypotheses Hp and Hq, Dempster's rule of combination results in a new distribution, m = m1 ⊕ m2, which carries the joint information provided by the two sources:

m(Hi) = (1 − K)^(−1) · Σ_{Hp ∩ Hq = Hi} m1(Hp) · m2(Hq)    (1)

where

K = Σ_{Hp ∩ Hq = ∅} m1(Hp) · m2(Hq)
The variable K is interpreted as a measure of the conflict between the two sources and is introduced in Eq. (1) as a normalization factor. The larger K, the more conflicting the sources are and the less sense their combination makes. As a consequence, some authors, Smets in particular [54], require the use of the Dempster combination rule without normalization, which is called the conjunctive rule. However, in this case, the final obtained mass is not in the [0, 1] range.
4. Decision rule: Usually the hypothesis with the highest confidence can be chosen, provided that the conflict is lower than a threshold to be defined.
Let us consider a numerical example in NDT in order to understand the data fusion process. In this example, the frame of discernment consists of two hypotheses: any indication should be classified either as a true defect (H1) or as a false alarm (H2). However, DS theory also allows considering the hypothesis H3 = H1 ∪ H2, which represents the ignorance (hesitation between the two hypotheses). In the left part of Fig. 21, method 1 (here an X-ray image) has detected several indications after image processing, among which indication α is extracted, with the mass values indicated for each hypothesis of the frame of discernment, m1(α ∈ Hi). In the right part of the figure, method 2 (here an ultrasonic measurement) has also detected an indication A corresponding to α ("corresponding" means that there should be a geometrical registration procedure in order to check that both indications are the same; this is not an easy task, and specific care should be taken at the acquisition stage so that this registration is possible), with the mass values indicated as m2(A ∈ Hi).
Fig. 21 Example of information fusion with DS theory
The result of combination using Dempster's orthogonal rule is given, and different colors have been chosen for the different cross products in order to describe the information content: if both methods are in agreement, the corresponding cross product is assigned to the final fused mass, as it increases the confidence. On the other hand, if both methods are in contradiction (the case where the two masses belong to different hypotheses without intersection), then it increases the conflict. In this numerical example, no conflict occurs because there is always an intersection between the masses: this is the advantage of using the H3 hypothesis, as the ignorance of one method does not disagree with the confidence of the other source. In this numerical example, the initial confidence for the defect hypothesis is, respectively, 0.7 for method 1 and 0.5 for method 2, with a hesitation (or part of doubt) of 0.3 and 0.5, respectively. After combination, the confidence in the defect hypothesis is 0.85, and the ignorance is 0.15, which means that the decision is more reliable than for each method alone.
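A small sketch (the set-based representation is our choice, not from the chapter) implementing Dempster's rule of combination, Eq. (1), and reproducing the fused masses of Fig. 21:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions given as {frozenset: mass} dicts."""
    fused, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb               # mass falling on the empty set (K)
    # Normalize by (1 - K) as in Eq. (1); assumes K < 1.
    return {h: v / (1.0 - conflict) for h, v in fused.items()}, conflict

H1, H2 = frozenset({"defect"}), frozenset({"false alarm"})
H3 = H1 | H2                                  # ignorance hypothesis H1 ∪ H2
m1 = {H1: 0.7, H3: 0.3}                       # method 1 (X-ray)
m2 = {H1: 0.5, H3: 0.5}                       # method 2 (ultrasound)
fused, K = dempster_combine(m1, m2)
print(fused, K)   # defect: 0.85, ignorance: 0.15, conflict K = 0.0
```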
In Fig. 22, the same two indications are combined with "Bayesian" masses, meaning that only single hypotheses are used (as for probabilities): m1(α ∈ H1) = 0.7, m1(α ∈ H2) = 0.3 and m2(A ∈ H1) = 0.5, m2(A ∈ H2) = 0.5, which yield fused masses mfus(H1) = 0.35 and mfus(H2) = 0.15 with a conflict K = 0.5, higher than the highest confidence.
Fig. 22 Illustration of data fusion with Bayesian masses (without using the ignorance modeling)
In order to understand the interest of this third hypothesis to represent ignorance, let us consider the case of the so-called "Bayesian masses," which indeed represent the framework of probability theory, because any piece of information must be assigned to mutually exclusive hypotheses. In the example, the part of ignorance is assigned to the hypothesis "false alarm." As a result, a conflict appears between the methods after fusion, and the decision is not an easy task because, in this example, the conflict is higher than the highest confidence. It must be noted that in the framework of probability theory, the conflict is not computed, and thus the hypothesis H1 would be chosen as the decision (see Fig. 22).
Application Example for X-Ray Images

The following example is part of the authors' work [11]. In this study, we have at our disposal a 3D CT aluminum casting dataset composed of 442 objects (or potential defects) classified manually by an expert as true or false defects (see Fig. 23). In this database there are only 44 true defects, and the remaining 398 objects are false defects; therefore, the dataset is considered to be very unbalanced. For automatic classification purposes, a total number of 30 features are measured on each object. These features represent the input sources of information for both classifiers to automatically classify the input object as a true or false defect. The two classifiers considered are the support vector machine (SVM), considered a reference machine learning algorithm, and a data fusion classifier (DFC) based on Dempster-Shafer theory. For the learning and testing processes, the complete dataset is divided into:
• Learning dataset: 226 potential defects, consisting of 200 false alarms and 26 true defects
• Testing dataset: 216 potential defects, consisting of 198 false defects and 18 true defects
Fig. 23 Part of a slice view extracted from a 3D CT volume: on the left side a true defect (surface defect) appears as a darker area, and on the right side a false alarm (reconstruction artifact) appears as a darker area as well
Fig. 24 Area under ROC curves for feature values and for mass values
At first, ROC curves are built to give a primary idea about the quality of the measured features. The areas under the ROC curves corresponding to all sources of information (using original feature values and mass values) are presented in Fig. 24. Based on the values of the area under ROC for the feature values, it can be seen that five single features (8, 14, 15, 23, and 24) are reliable for classification (area under ROC ≥ 0.9). All the features were translated into mass values in order to be fused using the Dempster-Shafer theory, as introduced before. The way those masses are computed is detailed in the original paper [11] and is completely automatic. When using the mass values to calculate the areas under the ROC curves, 11 features (2, 8, 14, 15, 17, 18, 19, 23, 24, 25, and 29) become reliable (area under ROC ≥ 0.9).
Table 3 Performances obtained using the mean of all the masses, the DFC classifier (using two features), and the SVM

Source                    Learning        Testing
                          TN      TP      TN      TP
Mean mass                 0.99    0.96    0.87    1
DFC (features 14 & 18)    1       0.8     0.99    0.61
SVM                       –       –       0.97    0.94
This means that the performance of the same features, when transformed into masses, is better. This is in itself already an advantage of the Dempster-Shafer approach, even without any fusion, and it will help in the optimization of the SVM and DFC in the next stage. The learning process for the DFC and SVM is done using all available features. Afterwards, the testing process takes place. Performance results are presented in Table 3. On the learning dataset, the best result for true defect classification is obtained using the mean of all the masses, with a TP rate of 0.96. The Dempster-Shafer fusion classifier (DFC) using only two features (14 and 18) gives the highest false defect classification or true negative rate (TN = 1). These two sources are used to automatically classify the testing dataset. Here the TN rate is reduced to 0.87, but the TP rate is perfect. For the DFC using features 14 and 18, the TN rate is still high; however, the TP rate falls. This might be due to the fact that the number of true defects in the dataset is lower than the number of false defects (negatives), and thus the testing set is not sufficiently similar to the learning set. The SVM performance is very high in both TP and TN rates, above the other classifiers if we consider the two rates together. A combination of the three classifiers would here give the best result. A major advantage of the DFC is that it is not a black box: the whole process is under control up to the final decision.
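For reference, a hedged sketch of the SVM baseline evaluated by TN/TP rates on the learning/testing split above; random placeholder arrays stand in for the 30 measured features:

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(226, 30)); y_train = np.r_[np.zeros(200), np.ones(26)]
X_test  = rng.normal(size=(216, 30)); y_test  = np.r_[np.zeros(198), np.ones(18)]

# class_weight="balanced" compensates for the strong class imbalance.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X_train, y_train)
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print("TN rate:", tn / (tn + fp), "TP rate:", tp / (tp + fn))
```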
Summary and Outlook

In this chapter, we have tried to provide an overview of the current AI methods applied in the NDE field, starting from conventional image processing methods, moving to the more advanced DL methods, and concluding with a possible framework for the data fusion of multimodal information sources. Metrics and frameworks to apply and measure the performance of AI techniques were also provided. It is indeed very challenging to cover all advances in the dynamic AI field in one handbook chapter; this work is intended to be a starting point for researchers and engineers who work in the NDE field and aim to deepen their knowledge and experience in the AI domain. From the authors' perspective, the NDE domain is an application case for AI. This domain comes with its own requirements and constraints, mainly related to the processing of multimodal data, the changing anatomy of inspected specimens, new materials and new manufacturing processes, and the trust, explainability, repeatability, validation, and certification of the AI technology. We expect that these requirements and expectations will be major research topics in the NDE domain in the near future.
References

1. Davies S. Hawking warns on rise of the machines. 2014. Available: https://www.ft.com/content/9943bee8-7a25-11e4-8958-00144feabdc0
2. Musk E. Competition for AI. 2020. Available: https://twitter.com/elonmusk/status/904638455761612800. Accessed 6 Oct 2020.
3. Haoarchive K. MIT technology review. 2020. Available: https://www.technologyreview.com/2020/11/03/1011616/ai-godfather-geoffrey-hinton-deep-learning-will-do-everything/
4. Kazantsev IG, Lemahieu I, Salov GI, Denys R. Statistical detection of defects in radiographic images in nondestructive testing. Signal Process. 2002;82(5):791–801.
5. Sun Y, Bai P, Sun H-y, Zhou P. Real-time automatic detection of weld defects in steel pipe. NDT & E Int. 2005;38(7):522–8.
6. Wenzel T, Hanke R. Fast image processing on die castings. In: Anglo-German conference on non-destructive testing. 1998.
7. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016, p. 770–8.
8. Girshick R. Fast R-CNN. In: Proceedings of the IEEE international conference on computer vision. 2015, p. 1440–8.
9. Shipway NJ, Barden TJ, Huthwaite P, Lowe MJS. Automated defect detection for fluorescent penetrant inspection using random forest. NDT & E Int. 2019;101:113–23.
10. Osman A, Hassler U, Kaftandjian V, Hornegger J. An automated data processing method dedicated to 3D ultrasonic non destructive testing of composite pieces. In: IOP conference series: materials science and engineering. 2012, p. 12005.
11. Osman A, Hassler U, Kaftandjian V. Automatic classification of three-dimensional segmented computed tomography data using data fusion and support vector machine. J Electron Imaging. 2012;21(2):21111. https://doi.org/10.1117/1.JEI.21.2.021111.
12. Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. Commun ACM. 2017;60(6):84–90.
13. Meng M, Chua YJ, Wouterson E, Ong CPK. Ultrasonic signal classification and imaging system for composite materials via deep convolutional neural networks. Neurocomputing. 2017;257:128–35.
14. Munir N, Kim H-J, Park J, Song S-J, Kang S-S. Convolutional neural network for ultrasonic weldment flaw classification in noisy conditions. Ultrasonics. 2019;94:74–81.
15. Dorafshan S, Thomas RJ, Maguire M. Comparison of deep convolutional neural networks and edge detectors for image-based crack detection in concrete. Constr Build Mater. 2018;186:1031–45.
16. Tong Z, Gao J, Zhang H. Innovative method for recognizing subgrade defects based on a convolutional neural network. Constr Build Mater. 2018;169:69–82.
17. Zhu P, Cheng Y, Banerjee P, Tamburrino A, Deng Y. A novel machine learning model for eddy current testing with uncertainty. NDT & E Int. 2019;101:104–12.
18. Viola P, Jones M. Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 2001 IEEE computer society conference on computer vision and pattern recognition (CVPR 2001). 2001, I-I.
19. Wang Y, Guo H. Weld defect detection of X-ray images based on support vector machine. IETE Tech Rev. 2014;31(2):137–42.
20. Hu C, et al. LSTM-RNN-based defect classification in honeycomb structures using infrared thermography. Infrared Phys Technol. 2019;102:103032. https://doi.org/10.1016/j.infrared.2019.103032.
21. Rabcan J, Levashenko V, Zaitseva E, Kvassay M, Subbotin S. Application of fuzzy decision tree for signal classification. IEEE Trans Ind Inf. 2019;15(10):5425–34.
22. Boaretto N, Centeno TM. Automated detection of welding defects in pipelines from radiographic images DWDI. NDT & E Int. 2017;86:7–13.
23. Chun P-J, Ujike I, Mishima K, Kusumoto M, Okazaki S. Random Forest-based evaluation technique for internal damage in reinforced concrete featuring multiple nondestructive testing results. Constr Build Mater. 2020;253:119238.
24. Osman A. Automated evaluation of three dimensional ultrasonic datasets. Doctoral dissertation; 2013.
25. Duan Y, et al. Automated defect classification in infrared thermography based on a neural network. NDT & E Int. 2019;107:102147. https://doi.org/10.1016/j.ndteint.2019.102147.
26. Shepard SM, Lhota JR, Rubadeux BA, Wang D, Ahmed T. Reconstruction and enhancement of active thermographic image sequences. Opt Eng. 2003;42(5):1337–42. https://doi.org/10.1117/1.1566969.
27. Bengio Y, Grandvalet Y. No unbiased estimator of the variance of K-fold cross-validation. J Mach Learn Res. 2004;5:1089–105.
28. Goodfellow I, Bengio Y, Courville A. Deep learning. Cambridge: MIT Press; 2016.
29. Słoński M, Schabowicz K, Krawczyk E. Detection of flaws in concrete using ultrasonic tomography and convolutional neural networks. Materials. 2020;13(7):1557.
30. Mery D. Aluminum casting inspection using deep learning: a method based on convolutional neural networks. J Nondestruct Eval. 2020;39(1):12.
31. Du W, Shen H, Fu J, Zhang G, Shi X, He Q. Automated detection of defects with low semantic information in X-ray images based on deep learning. J Intell Manuf. 2021;32:141–156. https://doi.org/10.1007/s10845-020-01566-1.
32. Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: 2014 IEEE conference on computer vision and pattern recognition. 2014, p. 580–7.
33. Ren S, He K, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell. 2017;39(6):1137–49. https://doi.org/10.1109/TPAMI.2016.2577031.
34. Wei S, Li X, Ding S, Yang Q, Yan W. Hotspots infrared detection of photovoltaic modules based on Hough line transformation and Faster-RCNN approach. In: 2019 6th international conference on control, decision and information technologies (CoDIT). 2019, p. 1266–71.
35. Fuchs P, Kröger T, Garbe CS. Self-supervised learning for pore detection in CT-scans of cast aluminum parts. In: Proceedings of the international symposium on digital industrial radiology and computed tomography, 2–4 July 2019, Fürth, Germany (DIR 2019).
36. Fuchs P, Kröger T, Dierig T, Garbe CS. Generating meaningful synthetic ground truth for pore detection in cast aluminum parts. In: 9th conference on industrial computed tomography, 13–15 Feb 2019, Padova, Italy (iCT 2019).
37. Ferguson M, Ak R, Lee YT, Law KH. Automatic localization of casting defects with convolutional neural networks. In: 2017 IEEE international conference on big data (big data). 2017, p. 1726–35.
38. Mery D, et al. GDXray: the database of X-ray images for nondestructive testing. J Nondestruct Eval. 2015;34(4):42.
39. Lin J, Yao Y, Ma L, Wang Y. Detection of a casting defect tracked by deep convolution neural network. Int J Adv Manuf Technol. 2018;97(1–4):573–81.
40. Affonso C, Rossi ALD, Vieira FHA, Ferreira d L, Ponce AC. Deep learning for biological image classification. Expert Syst Appl. 2017;85:114–22.
41. Bengio Y, Simard P, Frasconi P. Learning long-term dependencies with gradient descent is difficult. IEEE Trans Neural Netw. 1994;5(2):157–66.
42. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9(8):1735–80.
43. OpenCV. Home – OpenCV. 2020. Available: https://opencv.org/. Accessed 25 Nov 2020.
44. van der Walt S, et al. Scikit-image: image processing in Python. PeerJ. 2014;2:e453.
45. ImageJ. Image processing and analysis in Java. 2020. Available: https://imagej.nih.gov/ij/. Accessed 25 Nov 2020.
46. Pedregosa F, et al. Scikit-learn: machine learning in Python. J Mach Learn Res. 2011;12:2825–30.
47. Chang C-C, Lin C-J. LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol (TIST). 2011;2(3):1–27.
48. Wikipedia. Comparison of deep-learning software. 2020. Available: https://en.wikipedia.org/w/index.php?title=Comparison_of_deep-learning_software&oldid=990254198. Accessed 25 Nov 2020.
49. Bloch I, Maître H. Fusion of image information under imprecision. In: Bouchon-Meunier B, editor. Aggregation and fusion of imperfect information. Studies in fuzziness and soft computing, vol 12. Heidelberg: Physica; 1998. https://doi.org/10.1007/978-3-7908-1889-5_11.
50. Rombaut M. Fusion: état de l'art et perspectives. In: Convention DSP, 2001, p. 78.
51. Zadeh LA. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst. 1999;100(Suppl 1):9–34.
52. Dempster A. Upper and lower probabilities induced by multivalued mapping. Ann Math Stat. 1967;38(2):325–39.
53. Shafer G. A mathematical theory of evidence. Princeton; London: Princeton University Press; 1976. https://doi.org/10.2307/j.ctv10vm1qb.
54. Smets P. The combination of evidence in the transferable belief model. IEEE Trans Pattern Anal Mach Intell. 1990;12(5):447–58.
55. Smets P. Belief functions: the disjunctive rule of combination and the generalized Bayesian theorem. Int J Approx Reason. 1993;9(1):1–35.
56. Smets P. The canonical decomposition of a weighted belief. In: Proceedings of the 14th international joint conference on artificial intelligence, San Mateo; 1995, vol. 2, p. 1896–901.
57. Yager RR, Liu L. Classic works of the Dempster-Shafer theory of belief functions. Berlin, Heidelberg: Springer-Verlag; 1998. https://doi.org/10.1007/978-3-540-44792-4.
58. Denœux T. Logistic regression, neural networks and Dempster–Shafer theory: a new perspective. Knowl-Based Syst. 2019;176:54–67.
59. Rogova G. Combining the results of several neural network classifiers. Neural Netw. 1994;7(5):777–81.
60. Bi Y, Guan J, Bell D. The combination of multiple classifiers using an evidential reasoning approach. Artif Intell. 2008;172(15):1731–51.
61. Quost B, Masson M-H, Denœux T. Classifier fusion in the Dempster–Shafer framework using optimized t-norm based combination rules. Int J Approx Reason. 2011;52(3):353–74.
62. Bi Y. The impact of diversity on the accuracy of evidential classifier ensembles. Int J Approx Reason. 2012;53(4):584–607.
63. Liu Z, Pan Q, Dezert J, Han J-W, He Y. Classifier fusion with contextual reliability evaluation. IEEE Trans Cybern. 2017;48(5):1605–18.
64. Xu P, Davoine F, Zha H, Denoeux T. Evidential calibration of binary SVM classifiers. Int J Approx Reason. 2016;72:55–70.
65. Minary P, Pichon F, Mercier D, Lefevre E, Droit B. Face pixel detection using evidential calibration and fusion. Int J Approx Reason. 2017;91:202–15.
66. Denoeux T. Analysis of evidence-theoretic decision rules for pattern classification. Pattern Recogn. 1997;30(7):1095–107.
The Human-Machine Interface (HMI) with NDE 4.0 Systems
19
John C. Aldrin
Contents
Introduction ... 478
Introduction to NDE 4.0 ... 478
The Human-Machine Interface at Center of NDE 4.0 ... 478
NDE 4.0 Process Tasks and the Advancement of Automation ... 480
Summary of NDE 4.0 Process Tasks and Automation Opportunities ... 480
Transition from Analog to Digital NDE Systems and Early Human-Machine Interfaces ... 481
Quantitative NDE and Algorithms for NDE Data Review ... 481
Lessons Learned from Developing a Human-Machine Interface for Early NDE 4.0 Applications ... 482
NDE 4.0 HMI and the NDE Inspector ... 485
Overcoming Challenges of Implementing NDE 4.0 Algorithms ... 485
Complimentary Roles for Automation and NDE Inspector in NDE 4.0 Systems ... 486
Improved Standards for Graphical User Interface Development for NDE Data Review ... 487
Strategies to Verify Data Quality in NDE 4.0 Systems ... 488
Feedback Tools for Adapting NDE 4.0 Algorithms Over Time ... 488
Augmented Reality (AR) Systems ... 489
Collaborative Robots (Cobots) ... 489
Software Interfaces to Support NDE Inspector Training ... 490
NDE 4.0 HMI and the NDE Engineer ... 490
Engineering Management Oversight of Transition to NDE 4.0 ... 490
Supporting Engineering Data Mining and Discovery ... 491
Tools to Support Digital Twin and Material Review Board ... 491
NDE 4.0 Automation Development Environments ... 492
Summary ... 493
Cross-References ... 493
References ... 493
J. C. Aldrin (*) Computational Tools, Gurnee, IL, USA e-mail: [email protected] © Springer Nature Switzerland AG 2022 N. Meyendorf et al. (eds.), Handbook of Nondestructive Evaluation 4.0, https://doi.org/10.1007/978-3-030-73206-6_32
Abstract
The focus of this chapter is to introduce the various ways in which humans will interface with emerging and future NDE 4.0 systems. The inspector is an integral part of NDE 4.0 systems and is expected to perform necessary tasks in collaboration with NDE automated systems and data analysis algorithms. Care must be taken with the implementation of automation and the design of graphical user interfaces to ensure that operators have the necessary awareness and control as needed. As well, NDE engineers will play an integral role in NDE 4.0, developing automated systems and software interfaces, performing process monitoring, supporting NDE data discovery, and incorporating key results into improved life cycle management programs. This chapter will present guidance for the interface design between NDE hardware, software and algorithms, and human inspectors and engineers to ensure NDE 4.0 system reliability.

Keywords
Automation · Automated data analysis (ADA) · Algorithms · Characterization · Graphical user interface (GUI) · Human-machine interface (HMI) · Intelligence augmentation (IA) · Nondestructive evaluation (NDE) · Rare events · Reliability
Introduction

Introduction to NDE 4.0

NDE 4.0 is a vision for the next generation of inspection systems following the expected fourth industrial revolution. Industry 4.0 is a term used to describe how the Internet of Things (IoT), an emerging network of linked cyber-physical devices, incorporating algorithms such as artificial intelligence (AI), will improve engineering, manufacturing, logistic, and life cycle management processes [1]. Moving beyond the current third major wave of technological change due to the application of computers and automation, the fourth industrial revolution is expected to be based on connected cyber-physical systems. There exists a parallel vision for the next generation of NDE capability referred to as NDE 4.0 [2–5]. A key aspect of NDE 4.0 is leveraging automation in the evaluation of the parts under test and providing enhanced characterization and use of the component state for improved life cycle management [6, 7].
The Human-Machine Interface at Center of NDE 4.0

A diagram of an integrated vision for NDE 4.0 is presented in Fig. 1. One key innovation of NDE 4.0 is the integration of advanced control systems and NDE algorithms to support complex inspections, including NDE data acquisition and data analysis tasks. In recent years, major advances have been made in the field of
Fig. 1 Diagram for NDE 4.0 where the human-machine interface (HMI) is the link between the NDE inspectors and engineers, the NDE hardware (sensors and automation) link to the test article, and NDE software tools (algorithms, models, and controls)
machine learning (ML) and artificial intelligence (AI) to perform complex data classification tasks leveraging training on “big data” sets [8, 9]. While an increasing use of automation and algorithms in NDE is expected over time, NDE inspectors will still play a critical role in NDE 4.0. The concept of intelligence augmentation (IA) in NDE [8] was recently introduced as a counterpoint to the growing expectation for ML/AI applications in the field. Although such technology is promising, challenges do exist with transitioning such algorithms to NDE applications. Training algorithms today requires very large, well-understood data sets, frequently not available in NDE, and there are major concerns about the reliability and adaptability of such algorithms to completely perform complex NDE data review tasks. While attempting to replicate the human mind has encountered many obstacles over the years, the collaboration of humans with computer software has a much longer history of practical success. The focus of this chapter is to introduce the various ways in which humans will interface with emerging and future NDE 4.0 systems. The human-machine interface (HMI) is the critical link between the NDE inspector and engineer and NDE 4.0 software and hardware. The inspector is an integrated part of NDE 4.0 systems and performs necessary tasks in collaboration with NDE automated systems and data analysis algorithms. Care must be taken with the implementation of automation and the design of graphical user interfaces (GUI) to ensure that operators have the necessary awareness and control as needed. This chapter will present guidance for the interface design between NDE hardware, software and algorithms, and human
inspectors to ensure NDE 4.0 reliability. As well, NDE engineers will play an integral role in NDE 4.0, developing automated systems and software interfaces, providing process monitoring, supporting NDE data discovery, and incorporating key results into improved component life cycle management programs.
NDE 4.0 Process Tasks and the Advancement of Automation

Summary of NDE 4.0 Process Tasks and Automation Opportunities

It is important to consider the NDE workflow and the key process tasks that are expected to be impacted by automation with the adoption of NDE 4.0 practices. Prior work has considered the general application of automation as four application classes: (1) information acquisition, (2) information analysis, (3) decision and action selection, and (4) action implementation [10]. These classes can be expanded by considering the NDE workflow [11] and the level of automation associated with each step, shown in Table 1. Step 1 considers the NDE technique design, with emphasis on the class of the nondestructive evaluation acquisition data: analog (data represented in a physical form, e.g., X-ray film), digital (data stored directly to a computer), and hybrid (a physical acquisition subsequently scanned to a digital form, e.g., computed radiography). Step 2 addresses the set-up of the NDE acquisition, including alignment of the test part and calibration of the transducers. Step 3 considers the NDE data acquisition and scanning process of the part under test: manual, semiautomated (with some human intervention), or fully automated (no human intervention). Step 4 considers data processing, which may include filtering, data selection, and feature extraction sub-steps. Data processing algorithms may be manually chosen by the operator, may combine fixed processing steps with some user control, or may be a fully automated process. Step 5 applies the detection call criteria, which may be implemented through manual data review, assisted by the NDE hardware setting of gates and thresholds, or through a fully automated detection algorithm. Step 6 considers evaluating the NDE results beyond detection through quantitative characterization, providing possible interpretation of discontinuities in terms of class, location, and size.

Table 1 NDE process workflow and levels of automation

# | NDE process steps | Levels of automation
1 | Design (data) | Analog (visual, film), hybrid, digital
2 | Set-up (calibration) | Manual, semiautomated, fully automated
3 | Acquisition (scan) | Manual, semiautomated, fully automated
4 | Data processing | Manual, assisted, fully automated
5 | Detection | Manual, assisted, fully automated
6 | Characterization (sizing) | Manual, assisted, fully automated
7 | Disposition (decision-making) | Manual, assisted, fully automated
8 | Reporting/archiving | Manual, assisted, fully automated

As with most steps, the characterization
process may be performed manually by the inspector, through an interactive GUI that assists in sizing indications, or through a fully automated data analysis process. Step 7 addresses the disposition or decision-making: for NDE applications, the options include (1) pass the part, (2) perform a secondary inspection of the part, (3) perform a repair of the part, or (4) scrap/replace the part. Often, consideration of the discontinuity location and the expected load state of the component will greatly impact the possible risk and material review decision-making. Lastly, Step 8 addresses the generation of inspection report documentation and the archiving of the NDE raw data and key metadata.
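To make Step 5 concrete, the following is a minimal sketch of an assisted detection call using a time gate and an amplitude threshold applied to a digitized A-scan. The gate limits, threshold, and synthetic signal are illustrative assumptions, not values from any qualified procedure.

```python
import numpy as np

def gated_detection(ascan, dt, gate_start, gate_end, threshold):
    """Flag an indication if the rectified A-scan exceeds the amplitude
    threshold anywhere inside the time gate."""
    t = np.arange(ascan.size) * dt
    in_gate = (t >= gate_start) & (t <= gate_end)
    gated = np.abs(ascan[in_gate])
    peak = gated.max()
    peak_time = t[in_gate][np.argmax(gated)]
    return peak >= threshold, peak, peak_time

# Synthetic A-scan: noise plus one echo centered at 4 microseconds
dt = 1e-8  # 100 MHz sampling
t = np.arange(0, 10e-6, dt)
ascan = 0.05 * np.random.randn(t.size) + 0.8 * np.exp(-((t - 4e-6) / 1e-7) ** 2)

call, peak, when = gated_detection(ascan, dt, 3e-6, 6e-6, threshold=0.5)
print(call, peak, when)  # True, peak near 0.8, time near 4e-6 s
```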
Transition from Analog to Digital NDE Systems and Early Human-Machine Interfaces

If necessity is the mother of invention, then nondestructive evaluation (NDE) would likely hold a special place in her heart, with the fundamental goal of testing part quality without destroying it [12]. It is helpful at this point to provide a brief history of NDE systems. There are several excellent overviews of the history of NDE [13–16], with the field initially emerging to address failures in boilers and pressure vessels in the 1800s [13]. The development of X-ray techniques and early versions of penetrant methods were crucial to ensuring early weld quality. With the acceleration of manufacturing during World War II, there was a desire to formalize and share best practices within the NDE field. In the USA, the American Industrial Radium and X-Ray Society (AIRXS), the precursor to the American Society for Nondestructive Testing, was founded in Boston in 1941 [14]. At this time, NDE was analog, but with the development of film radiography, images could be captured and NDE data stored. As analog devices matured using early display technology like the cathode-ray tube (CRT), the application of ultrasonic and eddy current methods became more practical [15]. Early on, strip chart recorders provided a means to monitor NDE measurements and include data in reports. With the evolution of digital computers from large rack-mount systems to microprocessors, the advent of portable NDE became much more practical [17]. As well, with emerging digital hardware and serial data communication protocols, the ability to store and transmit NDE results became feasible. Over time, NDE methods like eddy current and ultrasonic testing migrated from handheld point-based acquisition to having transducers affixed to scanning hardware, enabling UT and EC imaging [15, 18, 19]. With larger scale digital acquisition and storage of NDE data, the foundations for the emergence of NDE 4.0 were born.
Quantitative NDE and Algorithms for NDE Data Review

Over time, the capability of NDE techniques has been extended to not simply detect discontinuities, such as fatigue cracks, but also to fully characterize flaws or the material state under test. The term quantitative nondestructive evaluation (QNDE) refers to techniques with the ability to assess the deterioration of a material or a
structure, and to detect and characterize discrete flaws [20]. The development of QNDE methods is critical to meeting the requirements for reliability of materials and structures and to enabling innovative NDE 4.0 applications. The fundamental core of quantitative NDE is the processing and interpretation of NDE data. The term NDE algorithm will be used here to mean a set of rules to be followed, likely involving a computer to support an analysis. Other related terminology that has been used in NDE includes automated data analysis (ADA) [21] and automated defect recognition (ADR). The most basic algorithm is one based on human experience. The term heuristic algorithm is useful to describe this class of algorithm, based on learning through discovery and incorporating rules-of-thumb, common sense, and practical knowledge. This first class of algorithms essentially encodes into the algorithm all key evaluation steps and criteria used by operators as part of a procedure. The second class of algorithm is model-based inversion, which uses a "first-principles" physics-based model with an iterative scheme to solve characterization problems. This approach requires accurate forward models and iteratively compares the simulated and measurement data, adjusting the model parameters until agreement is reached. Much of the early QNDE research concentrated on solving such inverse problems. The third class of algorithm covers statistical classifiers and machine learning models that are built by fitting a model function using measurement training data with known states. Statistical representation of data classes can be accomplished using either classical frequentist procedures or Bayesian classification. Related to statistical models, machine learning and artificial intelligence are general terms for the process by which computer programs can learn. Early work on machine learning built artificial neural networks that emulate neurons through functions, using layered algorithms and a training process that mimics a network of neurons [22]. In recent years, impressive advances have been made in the field of machine learning, primarily through significant developments in deep learning neural network (DLNN) algorithms [8, 23, 24]. Large sets of high-quality, well-characterized data have been critical for successful training of DLNNs. As well, software tools have been developed for training neural networks that better leverage advances in high performance computing (HPC). Several recent overviews of algorithms for NDE classification and machine learning are given in Refs. [21, 25]. Fundamentally, algorithms providing quantitative NDE capability are at the heart of achieving the vision of NDE 4.0.
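To make the second class of algorithm concrete, the following is a minimal sketch of model-based inversion using nonlinear least squares. The Gaussian "forward model" and the flaw parameters (depth, length) are stand-in assumptions for illustration only, not a physics-based NDE model.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, x):
    """Stand-in forward model: predicted response vs. scan position x
    for a flaw described by (depth, length)."""
    depth, length = params
    return length * np.exp(-x**2 / (2.0 * depth**2))

# Synthetic "measurement": true flaw (depth=1.5, length=2.0) plus noise
x = np.linspace(-5, 5, 101)
measured = forward_model([1.5, 2.0], x) + 0.02 * np.random.randn(x.size)

# Iteratively adjust model parameters until simulation matches measurement
fit = least_squares(lambda p: forward_model(p, x) - measured,
                    x0=[1.0, 1.0], bounds=([0.1, 0.1], [10.0, 10.0]))
depth_est, length_est = fit.x  # characterization result
```

The same iterative structure applies when the stand-in is replaced by a first-principles simulation: only the residual function changes.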
Lessons Learned from Developing a Human-Machine Interface for Early NDE 4.0 Applications

From 1998 to 2001, while working for Prof. Achenbach at the Center for Quality and Failure Prevention at Northwestern University, I had the experience of supporting the transition of an early NDE 4.0 application for the US Air Force (USAF). As part of my dissertation work, I first developed numerical models that enhanced our understanding of the physics of the inspection problem. As well, I created algorithms incorporating signal processing, feature extraction methods, and neural network classifiers
for the interpretation of ultrasonic test data for the detection of cracks at the near and far regions of weep holes in C-141 structures [26]. During a program review in 1999, I asked the question, "So, who is going to write the software user interface for the algorithm?" Everyone in the meeting looked at me, and I was soon asked, "Can you do that?" Since that moment, I have been actively working on the problem of interfacing the NDE inspector with NDE data and NDE algorithm results. This section is meant to share some of my personal experience with pertinent questions people generally have when considering NDE 4.0 applications and the evolving human-machine interface.
Addressing the Question "Why Was That Indication Call Made?"

Based on consensus discussion during the early stages of the C-141 weep hole inspection program, the focus of the human-machine interface was to provide the inspector with simple "red light/green light" feedback, where either a crack call was made (red light) or no crack was found (green light) and the inspection site was passed. Early on, the goal was to minimize human factors in reviewing complex ultrasonic signals, as well as to streamline the decision-making on the maintenance action for the inspection site. To determine the reliability of the inspection process implemented in a field environment, a probability of detection (POD) study was performed, where the neural network-based approach was compared to an approach based on review of C-scan data by the NDE operator. The application of the neural net-assisted, automated inspection approach significantly increased the reliability of calls being made for both top and bottom cracks, with the largest impact on the detection of top cracks. This increase in detectability did not produce an increase in the false call rate but, to the contrary, reduced the false call rate significantly for top crack detection. In addition, the automated neural network approach required scanning from only one direction, while the C-scan approach (reviewed by NDE inspectors) required scanning from both directions. Thus, the automated neural network approach provided a significant improvement in both detectability and inspection time over the C-scan approach [26]. Given the excellent performance demonstration, one would initially assume the "red light/green light" user interface feedback would be satisfactory. However, whenever a call was made, there were always additional questions from NDE inspectors, NDE engineers, and management. In particular, there was always the desire to have supplementary information to address "why was a call made?" Early work on the human-machine interfaces under this program strived to supply some level of verbal explanation that was linked to the part of the algorithm and the metrics responsible for the call. As well, supplementary processed data was provided to the NDE operator for follow-up visual review, to give more feedback on the source of the call. Although the technique performed exceedingly well during the POD studies, there were some observed calls on tool scratch marks left over from past "cleaning" operations of the weep holes. This was not surprising given the sensitivity of the creeping wave technique. Providing this supplementary information was found to be helpful in this situation. Providing verbal explanations and
supporting data for follow-up review have become a standard practice for HMI incorporating automated data analysis algorithms.
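A minimal sketch of how such verbal explanations can be generated is shown below: the features that drove a classifier call are ranked by their contribution and mapped to a short text summary. The feature names, weights, and values are hypothetical and purely for illustration.

```python
def explain_call(features, weights, top_n=2):
    """Rank features by their weighted contribution to a call and
    return a short, human-readable explanation."""
    contrib = {name: features[name] * weights[name] for name in features}
    top = sorted(contrib, key=lambda k: abs(contrib[k]), reverse=True)[:top_n]
    return "Call driven primarily by: " + ", ".join(
        f"{name} (contribution {contrib[name]:+.2f})" for name in top)

# Hypothetical features extracted from a gated ultrasonic signal
features = {"peak_amplitude": 0.82, "time_of_flight_shift": 0.31, "echo_width": 0.05}
weights = {"peak_amplitude": 1.0, "time_of_flight_shift": 1.5, "echo_width": 0.4}
print(explain_call(features, weights))
```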
Contrasting Assisted with Fully Automated NDE 4.0 Systems

Following the success of the C-141 weep hole inspection program [26], the development of automated data analysis algorithms was investigated for the inspection of beam cap holes in C-130 aircraft [27]. Here, the fastener sites of interest were in locations of limited accessibility from the external surface and contained fasteners with sealant. Due to limitations with the NDE capability at the time, there was a need to develop improved ultrasonic techniques to detect fatigue cracks at these locations. A key challenge was the ability to discern multiple signals originating from a possible crack and a geometric feature in a part that are either closely spaced or superimposed in time. The C-130 beam cap holes provided a special challenge given the skewed riser, installed fasteners, and limited transducer accessibility of the B-scan inspection [8]. This inspection problem frequently produced reflections from the fastener shaft (referred to as reradiated insert signals, RIS) occurring at similar times-of-flight as near and far crack signals. To address this challenge, a novel feature extraction methodology was developed to detect the relative shift of signals in time for adjacent transducer locations due to differing echo dynamics from cracks and part geometries [28]. This technique was the first ultrasonic NDE method using automated data analysis (ADA) methods, validated through a POD study, to inspect for fatigue cracks on Air Force structures [27]. The original plan was to have the automated data analysis (ADA) algorithms make all of the indication calls for this inspection. As with the C-141 weep hole inspection problem, there was a need to further address questions on why a call was made. Enhancements were made to the software to provide more specific feedback on called indications and to highlight when data was not adequate for making certain indication calls [8]. As well, certain severe structure-plus-fastener conditions were found to produce false calls on rare occasions. To manage these false calls by the algorithm, the results and raw data required secondary review by inspectors. Inspectors were trained on what to look for in the B-scan to manage this limitation of the algorithm. Although this technique was the first neural network-based approach used to inspect a portion of the USAF C-130 fleet, this case study is actually a very good example of a semiautomated NDE 4.0 system in practice. Table 2 presents the NDE process steps and the levels of automation originally planned versus finally implemented for the above case studies. There are a few important takeaways from this experience. One is that even when a technique is validated by a rigorous probability of detection study, it is still difficult in practice to achieve a "fully automated" evaluation for NDE process steps (5) detection, (6) characterization, and (7) disposition. Based on experience with these and other applications, it will not be so easy to fully remove the human from the loop at these three steps. It is one thing to have algorithms review data and make calls. But, because these algorithms are not trained for every scenario, most algorithms work best "assisting" human inspectors. Until such algorithms further mature and are appropriately validated, it is important to consider the design and functionality of the critical human-machine interface for NDE 4.0 systems.
Table 2 Planned and actual levels of automation for early NDE 4.0 case studies

# | NDE process steps | Level of automation – planned | Level of automation – implemented
1 | Design (data) | Digital DAQ + scanning | Digital DAQ + scanning
2 | Set-up (calibration) | Semiautomated | Semiautomated
3 | Acquisition (scan) | Fully automated | Fully automated
4 | Data processing | Fully automated | Fully automated
5 | Detection | Fully automated | Assisted
6 | Characterization | Fully automated | Assisted
7 | Disposition | Fully automated | Assisted
8 | Reporting/archiving | Assisted | Assisted
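The relative time-shift feature at the heart of the C-130 approach can be illustrated with a simple cross-correlation sketch, shown below. This is an illustrative stand-in, not the published echo-dynamic method of Ref. [28]; the synthetic echo and sampling values are assumptions.

```python
import numpy as np

def relative_shift(sig_a, sig_b, dt):
    """Return the lag (seconds) that best aligns sig_b to sig_a;
    a crack echo shifts between adjacent transducer positions,
    while a geometry echo tends to stay fixed."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (sig_a.size - 1)
    return lag * dt

# Synthetic tone-burst echoes at two adjacent transducer positions
dt = 1e-8
t = np.arange(0, 5e-6, dt)
echo = lambda t0: np.exp(-((t - t0) / 2e-7) ** 2) * np.sin(2 * np.pi * 5e6 * (t - t0))

shift = relative_shift(echo(2.0e-6), echo(2.3e-6), dt)
print(shift)  # approximately +3e-7 s: the second echo arrives later
```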
NDE 4.0 HMI and the NDE Inspector

The focus of this section is to introduce the various ways in which NDE inspectors will interface with emerging and future NDE 4.0 systems. An emphasis is placed on the complementary roles of automation and the NDE inspector in ensuring NDE 4.0 reliability, and on emerging human-machine interface technology.
Overcoming Challenges of Implementing NDE 4.0 Algorithms

While there is great promise with the application of NDE algorithms, there are a number of potential disadvantages with algorithm-based solutions to NDE inspection problems. First, the development and validation of reliable algorithms for NDE can be expensive. Training deep learning neural networks requires very large, well-understood data sets, which are frequently not readily available for NDE applications. Second, algorithms can also perform poorly for scenarios that they are not trained to interpret. In NDE, many promising demonstrations have been performed by the NDE research community, but frequent issues concerning overtraining and robustness to variability for practical NDE measurements "outside of the laboratory" have been noted, as demonstrated with the case studies in the prior section. As well, designing algorithms to address truly rare events, so-called "black swans" [29], is extremely difficult. Third, while human factors are frequently cited as sources of error in NDE applications, humans are inherently more flexible in handling unexpected scenarios and can be better at making judgement calls. Human inspectors also have certain characteristics, like common sense and moral values, which can be beneficial in choosing the most reasonable and safest option. As well, humans in many cases can readily note when an algorithm is making an extremely poor classification due to inadequate training and correct such errors. Lastly, with the greater reliance on algorithms, there is a concern about the degradation of inspector skills over time. As well, there is a potential for certain organizations to view automated systems and algorithms as a means of reducing the
number of inspectors. However, many of these disadvantages can be mitigated through the proper design of human-machine interfaces. As a counterpoint to the hype of artificial intelligence, intelligence augmentation (IA) refers to the effective use of information technology to enhance human intelligence [30, 31]. Fundamentally, progress on algorithms in NDE 4.0 should be viewed as an evolution of tools to better support human performance [32, 33]. From the perspective of NDE applications incorporating algorithms, IA has the potential to address most of the disadvantages of NDE 4.0 algorithms cited above. For example, many of the most promising DLNN applications today, from speech recognition to text translation and image classification, are still far from perfect. However, that does not mean that these tools are not useful. In practice, humans can frequently detect these errors and can quickly work around poor results. Humans often develop an understanding of where such algorithms can be most appropriately applied and where they should be avoided. By leveraging the algorithms where they are most useful, it becomes less critical for the algorithm to be able to handle all scenarios, especially very rare events.
Complementary Roles for Automation and the NDE Inspector in NDE 4.0 Systems

Table 3 presents a breakdown of the potential opportunities for automation in the NDE process steps and the complementary roles of an NDE inspector for semiautomated/assisted NDE 4.0 systems. With operators working in conjunction with algorithms, there is no need to pursue eliminating the human entirely. In general, the most cost-effective and reliable solution will most likely be some hybrid, human-plus-machine approach. Generally, it is important to address the low-hanging fruit when implementing algorithms for NDE applications and to help alleviate the inspectors' burden of reviewing "mostly good" data. As well, some complex interpretation problems can benefit from algorithms and data guides. The design of these algorithms requires a focus on the base capability for making NDE indication calls, to provide value and help ensure reliability. The algorithm design process should consider the necessary engineering development time, the cost for acquiring necessary data, and the approach with the highest likelihood of success. There will be payoff for some applications, but not all applications may benefit from automation. However, as NDE 4.0 systems mature, development costs for each new application should be reduced. While there is often an initial desire to have NDE algorithms make all indication calls and present simple (good or bad) results, based on prior experience, additional information is always requested by engineering and management to understand the details of why an indication call was made. Inspectors need a natural user interface to review each call with supporting data and provide feedback on the call details in light of the technical requirements. As well, since no algorithm will be perfect, inspectors need to have a straightforward means to review NDE data quickly. This entails identifying rare indications and determining when the acquisition of the NDE data is out of specification.
Table 3 Role of automation and inspector for semiautomated NDE process steps

# | NDE process steps | Possible roles of automation | Possible complementary roles of NDE inspector
2 | Set-up (calibration) | Positioning of part; transducer selection; calibration standard selection; scan calibration standard; automated data review | Assist and verify part orientation; select appropriate scan plan; verify standard quality; provide secondary indication review; document poor and challenging indication calls
3 | Acquisition (scan) | Scan part under test; automated scan verification | Verify system performance; verify scan quality and coverage
4 | Data processing | Application of data filters; automated data processing; automated feature extraction; report quality metrics for scan | Review data quality; assist in selection of filters; apply special processing tools for challenging scan regions and data
5 | Detection | Automated data analysis for indication detection; provide support for why calls were made; rank indications of primary interest | Review indication calls; make additional calls where automation is not fully validated; document poor and challenging indication calls
6 | Characterization | Automated classification of indication type; automated sizing of defects | Verify indication location, class, and size; document poor and challenging characterization results
7 | Disposition | Automated assessment of decision/maintenance action; communicate disposition within NDE 4.0 network to appropriate parties | Verify automated assessment; provide support for modification of the automated decision; communicate pass/fail result with team members; share critical data for material review board assessment
8 | Reporting/archiving | Automated report generator; perform data archiving (local, cloud, long-term) | Include supplementary data in report; verify data archiving
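As a minimal sketch of how rows 5–7 of Table 3 might be wired together in software, the snippet below routes automated calls with low classifier confidence to the inspector's review queue instead of dispositioning them automatically. The confidence threshold and record fields are hypothetical.

```python
REVIEW_THRESHOLD = 0.90  # assumed cutoff; in practice set from validation studies

def triage(indications):
    """Split automated indication calls into auto-disposition and
    inspector-review queues based on classifier confidence."""
    auto, review = [], []
    for ind in indications:
        (auto if ind["confidence"] >= REVIEW_THRESHOLD else review).append(ind)
    return auto, review

calls = [
    {"id": 1, "call": "crack", "confidence": 0.97},  # auto disposition
    {"id": 2, "call": "crack", "confidence": 0.62},  # routed to inspector
]
auto_queue, inspector_queue = triage(calls)
```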
The design of the interface between the inspector and NDE 4.0 system is critical to achieve the optimal performance for all scenarios. The following sections will discuss some of the emerging design practices, software tools, and interface technologies that will enable this next generation of NDE systems.
Improved Standards for Graphical User Interface Development for NDE Data Review

While this is an exciting time for new human-machine interface tools, there is a critical need to carefully optimize the fine interactions between humans and computer algorithms in NDE. Some work has studied the HMI problem for different
nondestructive evaluation applications [34–36]. For example, Bertović performed a detailed survey of prior work on human factors when interfacing with automation in NDE [34]. While quality human-automation interaction has clear benefits, research suggests that increased automation brings a number of challenges and costs, a paradox frequently dubbed the automation ironies [37] or automation surprises [38]. Failure modes and effects analysis (FMEA) should be performed for all NDE techniques incorporating automation, to understand the potential sources of poor reliability [34]. Usability of human-machine interfaces is a critical aspect of workflow management for NDE techniques, from set-up and calibration through data acquisition and indication review. Ideally, inspectors need a way to report results and efficiently provide feedback on indications. Frequently, there are means in NDE software systems today to annotate indication results; however, making this metadata readily available to external systems is one of the challenges for NDE 4.0 going forward. Such information will be very useful for refining NDE algorithms and improving life cycle management.
Strategies to Verify Data Quality in NDE 4.0 Systems

Simply demonstrating POD capability does not ensure reliability of the technique. In practice, NDE reliability depends on a reproducible calibration procedure and a repeatable inspection process [39]. While most consideration is given to the application of automation for NDE detection and characterization tasks, Table 3 highlights the significant value that automation can provide to NDE set-up and calibration. Process controls and algorithms can be used to ensure all calibration indications are verified and to track key metrics confirming that the NDE process is repeatable over time and under control. This is especially important for the increasingly large NDE data sets that are acquired today. Algorithms have the capability to review statistics from calibration data quickly. Conversely, inspectors are more likely to detect outlier conditions that an algorithm might readily miss. Recent work on model-based inverse algorithms with eddy current inspections has shown the potential to reduce error due to variability in probes through calibration process controls [40]. NDE 4.0 systems are also expected to improve the safety of inspections in dangerous environments. By collecting environmental conditions data (using environmental sensors and/or weather monitoring) and test system state data from the site, one can ensure the reliability of the inspection task and reduce the level of risk for all involved.
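One simple way to implement such process control is a control chart on calibration responses: the sketch below flags a calibration whose reference-reflector amplitude drifts beyond limits derived from an in-control history. The 3-sigma limits and the data values are illustrative assumptions.

```python
import numpy as np

# Reference-reflector amplitudes (% FSH) from past in-control calibrations
history = np.array([80.1, 79.8, 80.4, 80.0, 79.6, 80.2])
mean, sigma = history.mean(), history.std(ddof=1)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # control limits

def calibration_in_control(amplitude):
    """Return True if today's reference amplitude is within control limits."""
    return lcl <= amplitude <= ucl

print(calibration_in_control(79.9))  # True: process under control
print(calibration_in_control(73.0))  # False: investigate probe/instrument drift
```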
Feedback Tools for Adapting NDE 4.0 Algorithms Over Time

Currently, NDE algorithms are primarily developed by engineers to perform very specific NDE detection or NDE characterization tasks. In the near future, there will come a time when NDE 4.0 tools are more adaptive and offer collaborative training. One case study is the ultrasonic inspection of large composite structures, which
requires significant manpower and production time. To address this inspection burden, automated data analysis (ADA) software tools have been developed and implemented for automated review of certain aircraft composite panels [41]. The automated data analysis minimizes the inspector's burden of performing mundane tasks and allocates their time to analyzing data of primary interest. When the algorithm detects a feature in the data that is either unexpected or found to be representative of a defect, the indication is flagged for further analysis by the inspector. Currently, feedback is collected, both verifying quality calls and documenting errors in the algorithm performance [41]. The long-term goal is to adapt the algorithm based on this critical inspector feedback. It is important for adaptive NDE 4.0 algorithms to maintain core competency while also providing flexibility and learning capability. Care must be taken to avoid having an algorithm "evolve" to a poorer level of practice, due to bad data, inadequate guidance, or deliberate sabotage. As with guarding against computer viruses today, proper design practices and failure mode effects analysis are needed to ensure such algorithms are robust to varying conditions. It is important to design these systems to periodically perform self-checks on standard data sets, similar to how inspectors must verify NDE systems/transducers using calibration procedures or periodically demonstrate their performance on NDE examinations. Best practices should include an updating of POD performance for the technique [42].
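A sketch of such a periodic self-check is shown below: the current algorithm is re-run on a frozen, standard data set, and its detection performance is compared against a stored baseline before an updated model is allowed into service. The function names, toy data, and acceptance margin are hypothetical.

```python
def self_check(algorithm, standard_set, baseline_pod=0.95, margin=0.02):
    """Re-run the algorithm on a frozen standard data set and verify that
    detection performance has not degraded beyond an allowed margin."""
    hits = sum(1 for scan, truth in standard_set if algorithm(scan) == truth)
    pod = hits / len(standard_set)
    return pod >= baseline_pod - margin, pod

# Toy standard set: (signal peak, ground-truth call)
standard_set = [(0.9, True), (0.1, False), (0.8, True), (0.2, False)]
threshold_algorithm = lambda peak: peak > 0.5
ok, pod = self_check(threshold_algorithm, standard_set)
print(ok, pod)  # True, 1.0: the adapted algorithm still passes the check
```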
Augmented Reality (AR) Systems

Typical human interfaces with computer systems in NDE have included monitors, buttons, dials, a keyboard, a mouse, and possibly joystick interaction. While these classic PC interfaces are still efficient for many tasks, there are also a number of emerging devices and tools to connect humans with automation. For example, industrial touch-screen tablets, augmented reality glasses, wearable devices (e.g., smart watches), voice-recognition systems, and position tracking devices (e.g., Microsoft Kinect) all have the potential to provide more natural human-machine interfaces to support emerging NDE 4.0 systems. Several promising applications of augmented reality for aircraft maintenance have demonstrated feasibility in recent years [43–47]. A growing integration of these maturing AR technologies with NDE is expected in the coming decade.
Collaborative Robots (Cobots)

A collaborative robot, or cobot for short, is a robot intended to physically interact with humans in a shared workspace. There exists some recent work in the NDE field exploring the application of cobots to support NDE inspectors [48–52]. The use of collaborative robots for NDE is attractive for several reasons. First, cobots enable the registration, precision, repeatability, and speed that robotics provides, while eliminating the need for safety exclusion zones or other safety barriers during inspection.
Second, cobots allow robotic NDE to be performed on a structure or vehicle while other work is taking place in close proximity [52]. For NDE 4.0, cobots also provide a natural means for the inspector to provide feedback to the robotic scanning system, for example, to adapt scan plans for varying part conditions that most raster scanning or conventional robotic systems would have difficulty handling. As well, cobots enable inspectors to have access and control within confined spaces, providing greater safety for the NDE inspector. Given the rapid decrease in the cost of off-the-shelf cobot systems relative to other scanning hardware, cobots are expected to play a growing role in NDE 4.0 applications in the coming years.
Software Interfaces to Support NDE Inspector Training

There is a potential to leverage the same NDE software interface for training purposes, by having operators periodically train and test their skills on various conditions of NDE data. For example, specific rare events can be stored and introduced periodically as part of a regular retraining process. Thus, the interface could be used in a similar way that flight simulators are used by pilots to verify their performance under standard conditions and rare events. As well, integrated models with the same user interface can also provide a means for verification of indications and support sizing by the inspectors. Such tools are currently under development [53–55], with different forms of user interfaces such as tablets displaying the part under inspection [54] or incorporating augmented reality devices such as the HoloLens [55].
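A sketch of the rare-event injection idea follows: stored, ground-truthed rare cases are occasionally mixed into the inspector's review stream so that vigilance can be scored over time. The injection rate and data format are assumptions for illustration.

```python
import random

def review_stream(live_scans, rare_library, injection_rate=0.05):
    """Yield (scan, is_injected) pairs, occasionally inserting a stored
    rare-event case so inspector performance can be scored over time."""
    for scan in live_scans:
        if rare_library and random.random() < injection_rate:
            yield random.choice(rare_library), True
        yield scan, False

live = ["scan_001", "scan_002", "scan_003"]
rare = ["stored_rare_case_A"]
for scan, injected in review_stream(live, rare, injection_rate=0.3):
    pass  # present to inspector; grade responses on injected cases only
```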
NDE 4.0 HMI and the NDE Engineer

Figure 2 presents a diagram of the interface between the NDE 4.0 process tasks and supporting NDE inspector and engineering tools. This section will focus on how NDE engineers will interface with emerging and future NDE 4.0 systems: providing NDE process control monitoring, supporting NDE data discovery, incorporating key results into improved component life cycle management programs, and developing automation and software interfaces.
Engineering Management Oversight of the Transition to NDE 4.0

Managing cost and mitigating risk drive most decisions for NDE today. For organizations that depend on NDE, there are likely certain applications that will provide the greatest payoff in terms of cost and quality for their customers when transitioning from conventional NDE to NDE 4.0. The transition of algorithms should initially follow a phased approach, to both validate the algorithm performance and build an understanding of where algorithms are reliable and where limitations exist. By tracking called indications over time, it becomes feasible to refine algorithms as necessary.
Fig. 2 Diagram of the interface between the NDE 4.0 process and supporting NDE inspector and engineering tools
Building that experience internally and achieving initial payoff will lead to broader transition of these best practices across an organization and greater shareholder value. To facilitate this assessment, engineering management tools must be incorporated into the user interface, to appropriately study trends in the results.
Supporting Engineering Data Mining and Discovery

In the coming years, one of the primary goals of NDE 4.0 is to leverage NDE data well beyond immediate inspection requirements. Migrating NDE data to secure, accessible databases and providing practical user interfaces for engineering access are key features of an NDE 4.0 system. Promising software tools exist today to support NDE practitioners with data archiving, visualization, and performing special queries [56], and continued improvements in usability and functionality are expected in the future. As well, such environments will provide unique resources for the development and optimization of advanced NDE algorithms. Ideally, to share data between NDE 4.0 components, adopting open data standards (such as DICONDE and HDF5) and incorporating flexible software architectures will greatly accelerate the evolution of these systems [4, 11].
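As a minimal sketch of the HDF5 portion of this idea, the snippet below archives an ultrasonic C-scan with key metadata using h5py. The dataset path and attribute names are hypothetical and do not follow the DICONDE schema.

```python
import h5py
import numpy as np

cscan = np.random.rand(200, 300)  # stand-in C-scan amplitude map

with h5py.File("inspection_0001.h5", "w") as f:
    dset = f.create_dataset("ut/cscan_amplitude", data=cscan, compression="gzip")
    dset.attrs["units"] = "percent FSH"
    # Key metadata stored alongside the raw data for later engineering queries
    f.attrs["part_id"] = "PANEL-0001"
    f.attrs["procedure"] = "UT-PE-5MHz"
    f.attrs["inspector"] = "badge-1234"
    f.attrs["timestamp"] = "2021-07-01T10:30:00Z"
```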
Tools to Support the Digital Twin and Material Review Board

Nondestructive methods for characterizing material properties and damage features have significant potential to enable and refine methods to certify new materials, sustain existing systems, and enhance methods to forecast future maintenance of
engineered systems [6]. This includes certification of new material systems, such as those being made via additive manufacturing or complex composite lay-ups. There is significant potential for augmented sustainment practices, such as condition-based maintenance (CBM): monitoring the actual part condition and deciding the optimal time when maintenance is needed. To support the long-term goals of CBM, innovative methods are needed to go beyond damage detection to characterize the state of the structure. Current initiatives such as the Digital Thread and Digital Twin [57] are pathfinders for enhanced representation of a system via digital surrogates, improving current prognostic methods to manage the integrity, or safety, of the system. The Digital Thread provides a means to track all digital information regarding the manufacturing and sustainment of a component and system, including the material state and any variance from original design parameters. The Digital Twin concept provides a digital equivalent of a system and exercises the digital twin model through various use scenarios to evaluate individual performance and forecast possible emerging maintenance issues. For high value assets, when a nonconformance such as an NDE indication is reviewed and the variance is permitted, current practice does not capture the magnitude of the variance. In aerospace applications, this review typically occurs as part of the Materials Review Board, or MRB [7, 21]. NDE 4.0 systems are thus critical to achieving these Digital Thread and Digital Twin concepts, enabling an evolution in knowledge management for end users.
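A sketch of how an accepted variance might be captured in a digital thread record, rather than reduced to a pass/fail outcome, is given below; the record schema is purely illustrative.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DigitalThreadRecord:
    """Per-serial-number history of NDE indications and accepted variances."""
    part_serial: str
    events: List[dict] = field(default_factory=list)

    def log_variance(self, indication_id, size_mm, location, disposition):
        # Capture the magnitude of the variance, not just the outcome
        self.events.append({"indication": indication_id, "size_mm": size_mm,
                            "location": location, "disposition": disposition})

record = DigitalThreadRecord("SN-0001")
record.log_variance("IND-07", size_mm=1.2, location="hole 3, near side",
                    disposition="use-as-is per MRB")
```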
NDE 4.0 Automation Development Environments

Lastly, it is important to consider the software development tools that enable the creation of the next generation of NDE 4.0 software, including specialized NDE algorithms. Over the years, there have been a few examples of general software tools, for example LabView, that have been developed to streamline the creation of user interface tools for data acquisition and signal processing for general engineering applications. Specific to NDE, one example is InspectionWare, a general software tool for NDE data acquisition [58]. A few general NDE software applications have also been built on open source tools available in Python: NDIToolkit by TRI/Austin [7] and databrowse by Prof. Steve Holland et al. at Iowa State University [59]. As well, there are a number of integrated development environments (IDE) for software application development, such as Visual Studio, Delphi, Eclipse, and PyCharm, that are frequently used for NDE software development. To be successful in NDE 4.0 application development, engineers will need to continuously streamline the development of new applications through user interface and code reuse and software engineering best practices. For example, the automated data analysis (ADA) tools for aircraft composite panels [41] were designed such that the user interface was generic for any NDE data review application and could be revised by simply changing an input file. As well, algorithm parameter optimization could easily be batch run through the same user interface, enabling algorithm adaptability and greater code reuse. While there will be benefits to developing proprietary tools, the
greater use of common open software and open file formats like DICOM/DICONDE and HDF5 will address some of the outstanding issues concerning cross-platform data accessibility.
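The input-file-driven design described above can be sketched as follows: a small configuration object retargets a generic ADA interface and drives a batch parameter study. The parameter names and the run_ada entry point are hypothetical.

```python
import itertools

# Stand-in for a JSON/YAML input file that retargets the generic interface
cfg = {"gate_start_us": [1.0, 1.5], "threshold_pct": [20, 30]}

def run_ada(gate_start_us, threshold_pct):
    """Hypothetical ADA entry point; returns a dummy result record here."""
    return {"gate": gate_start_us, "threshold": threshold_pct, "detections": 0}

# Batch run over the parameter grid, e.g., for algorithm optimization studies
results = [run_ada(g, th)
           for g, th in itertools.product(cfg["gate_start_us"],
                                          cfg["threshold_pct"])]
```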
Summary

The focus of this chapter has been to introduce the various ways in which humans will interface with emerging and future NDE 4.0 systems. The inspector is viewed as an integral part of NDE 4.0 systems and is expected to perform necessary tasks in collaboration with NDE automated systems and data analysis algorithms. Care must be taken with the implementation of automation and the design of graphical user interfaces (GUI) to ensure that operators have the necessary awareness and control to perform their tasks successfully. Going forward, there is an expectation that human-machine interfaces for NDE 4.0 will continue to evolve, especially as augmented reality (AR) and collaborative robot technologies mature, become less expensive, and are specifically designed to better support NDE applications. As well, NDE engineers will play an integral role in NDE 4.0 by providing process monitoring, supporting NDE data discovery, and incorporating key results into improved component life cycle management programs such as the Digital Thread and Digital Twin. There is a need to continue to develop the data infrastructure and software tools to streamline machine learning algorithm studies, in order to lower the cost of NDE 4.0 system development and make it more cost competitive. Concerning adaptive learning incorporating user feedback, it is important to develop best practices to periodically either perform self-checks on standard data sets or, ideally, reassess POD performance for the modified technique, while ensuring the growing validation data sets remain independent from algorithm training. Long term, algorithm design and training schemes should properly address such uncertainty in the development process, which should lead to more robust algorithms and more reliable NDE 4.0 systems.
Cross-References

▶ Artificial Intelligence and NDE Competencies
▶ NDE 4.0: New Paradigm for the NDE Inspection Personnel
▶ NDE 4.0: Image and Sound Recognition
▶ Reliability Evaluation of Testing Systems and Their Connection to NDE 4.0
References

1. Imtiaz J, Jasperneite J. Scalability of OPC-UA down to the chip level enables "Internet of things". In: 2013 11th IEEE international conference on industrial informatics (INDIN), 29 Jul 2013. IEEE. p. 500–5. https://ieeexplore.ieee.org/abstract/document/6622935
2. Meyendorf NG, Bond LJ, Curtis-Beard J, Heilmann S, Pal S, Schallert R, Scholz H, Wunderlich C. NDE 4.0 – NDE for the 21st century – the internet of things and cyber physical systems will revolutionize NDE. In: 15th Asia pacific conference for non-destructive testing
(APCNDT2017), Nov 2017, p. 13–7. https://www.ndt.net/events/APCNDT2017/app/content/Paper/89_Meyendorf_Rev1.pdf
3. Link R, Riess N. NDT 4.0 – significance and implications to NDT – automated magnetic particle testing as an example. In: 12th European conference on non-destructive testing (ECNDT 2018), Jun 2018, p. 11–5. http://cdn.ecndt2018.com/wp-content/uploads/2018/05/ecndt-0619-2018File001.pdf
4. Vrana J, Kadau K, Amann C. Non-destructive testing of forgings on the way to industry 4.0. In: ASNT annual conference, Houston, 31 Oct 2018.
5. Singh R. The next revolution in nondestructive testing and evaluation: what and how? Mater Eval. 2019;77(1):45–50. https://ndtlibrary.asnt.org/2019/TheNextRevolutioninNondestructiveTestingandEvaluationWhatandHow
6. Lindgren EA. Opportunities for nondestructive evaluation: quantitative characterization. Mater Eval. 2017;75(7):862–9. https://ndtlibrary.asnt.org/2017/OpportunitiesforNondestructiveEvaluationQuantitativeCharacterization
7. Forsyth D, Aldrin JC, Magnuson CW. Turning nondestructive testing data into useful information. In: Aircraft airworthiness & sustainment conference, Jacksonville, 26 Apr 2018.
8. Aldrin JC. Intelligence augmentation and human-machine interface best practices for NDT 4.0 reliability. Mater Eval. 2020;78(7):869–79.
9. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–44. https://www.nature.com/articles/nature14539
10. Parasuraman R, Sheridan TB, Wickens CD. A model for types and levels of human interaction with automation. IEEE T Syst Man CY A. 2000;30(3):286–97. https://doi.org/10.1109/3468.844354
11. Meier J, Tsalicoglou I, Mennicke R, Frehner C, Proceq SA. The future of NDT with wireless sensors, AI and IoT. In: Proceedings 15th Asia Pacific conference for non-destructive testing, Singapore, 13 Nov 2017, p. 1–11. https://www.ndt.net/search/docs.php3?id=22263
12. Johnson G. Ultrasonic flaw detectors... and beyond: a history of the discovery of the tools of nondestructive technology. Quality. 2013;52(4):22–5.
13. Ling J. The evolution of the ASME boiler and pressure vessel code. J Press Vessel Technol. 2000;122(3):242–6. https://doi.org/10.1115/1.556180
14. ASNT. From vision to mission: ASNT 1941 to 2016. Columbus; 2016.
15. Georgeson G. Boeing technical journal: a century of Boeing innovation in NDE. 2016. https://www.boeing.com/resources/boeingdotcom/features/innovation-quarterly/nov2017/btj_nde_full.pdf
16. Farley M. 40 years of progress in NDT – history as a guide to the future. In: AIP conference proceedings, 18 Feb 2014, vol. 1581(1). American Institute of Physics. p. 5–12. https://doi.org/10.1063/1.4864796
17. Martin R. Portable digital NDT instrumentation. In: IEE colloquium on recent developments in digital NDT equipment design, 1 Mar 1988, IET, p. 1–1. https://ieeexplore.ieee.org/abstract/document/208858/authors#authors
18. Komsky IN, Achenbach JD, Andrew G, Grills B, Register J, Linkert G, Hueto GM, Steinberg A, Ashbaugh M, Moore DG, Weber H. Ultrasonic technique to detect corrosion in DC-9 wing box from concept to field application. Mater Eval. 1995;53(7):848–52.
19. Shell EB, Aldrin JC, Sabbagh HA, Sabbagh E, Murphy RK, Mazdiyasni S, Lindgren EA. Demonstration of model-based inversion of electromagnetic signals for crack characterization. In: AIP conference proceedings, 31 Mar 2015, vol. 1650(1). American Institute of Physics. p. 484–93. https://doi.org/10.1063/1.4914645
20. Achenbach JD. Quantitative nondestructive evaluation. Int J Solids Struct. 2000;37(1–2):13–27. https://www.sciencedirect.com/science/article/abs/pii/S0020768399000748
21. Aldrin JC, Lindgren EA. The need and approach for characterization – US air force perspectives on materials state awareness. In: AIP conference proceedings, 20 Apr 2018, vol. 1949(1). AIP Publishing LLC. p. 020004. https://aip.scitation.org/doi/abs/10.1063/1.5031501
22. Fukushima K, Miyake S. Neocognitron: a self-organizing neural network model for a mechanism of visual pattern recognition. In: Competition and cooperation in neural nets. Berlin/Heidelberg: Springer; 1982. p. 267–85. https://link.springer.com/chapter/10.1007/978-3-642-46466-9_18
23. Hinton GE, Osindero S, Teh YW. A fast learning algorithm for deep belief nets. Neural Comput. 2006;18(7):1527–54. https://direct.mit.edu/neco/article/18/7/1527/7065/A-Fast-Learning-Algorithm-for-Deep-Belief-Nets
24. Lewis-Kraus G. The great AI awakening. The New York Times Magazine, 14 Dec 2016;14:12. http://nyti.ms/2hMtKOn
25. Harley JB, Sparkman D. Machine learning and NDE: past, present, and future. In: AIP conference proceedings, 8 May 2019, vol. 2102(1). AIP Publishing LLC. p. 090001. https://aip.scitation.org/doi/abs/10.1063/1.5099819
26. Aldrin J, Achenbach JD, Andrew G, P'an C, Grills B, Mullis RT, Spencer FW, Golis M. Case study for the implementation of an automated ultrasonic technique to detect fatigue cracks in aircraft weep holes. Mater Eval. 2001;59(11):1313–9.
27. Lindgren EA, Mandeville JR, Concordia MJ, MacInnis TJ, Abel JJ, Aldrin JC, Spencer F, Fritz DB, Christiansen P, Mullis RT, Waldbusser R. Probability of detection results and deployment of the inspection of the vertical leg of the C-130 Center Wing Beam/Spar Cap. In: 8th joint DoD/FAA/NASA conference on aging aircraft, Jan 2005, vol. 31.
28. Aldrin JC, Kropas-Hughes CV, Knopp J, Mandeville J, Judd D, Lindgren E. Advanced echodynamic measures for the characterisation of multiple ultrasonic signals in aircraft structures. Insight Non Destr Test Cond Monit. 2006;48(3):144–8. https://www.ingentaconnect.com/content/bindt/insight/2006/00000048/00000003/art00004
29. Taleb NN. Black swans and the domains of statistics. Am Stat. 2007;61(3):198–200. https://www.tandfonline.com/doi/abs/10.1198/000313007X219996
30. Skagestad P. Thinking with machines: intelligence augmentation, evolutionary epistemology, and semiotic. J Soc Evol Syst. 1993;16(2):157–80. https://www.sciencedirect.com/science/article/abs/pii/106173619390026N?via%3Dihub
31. Rastogi A. Artificial intelligence – human augmentation is what's here and now. Medium. 2017. https://medium.com/reflections-by-ngp/artificial-intelligence-human-augmentation-is-whats-here-and-now-c5286978ace0
32. Aldrin JC, Lindgren EA, Forsyth DS. Intelligence augmentation in nondestructive evaluation. In: AIP conference proceedings, 8 May 2019, vol. 2102(1). AIP Publishing LLC. p. 020028. https://aip.scitation.org/doi/abs/10.1063/1.5099732
33. Wilson HJ, Daugherty PR. Collaborative intelligence: humans and AI are joining forces. Harv Bus Rev. 2018;96(4):114–23.
34. Bertovic M. Human factors in non-destructive testing (NDT): risks and challenges of mechanised NDT. Doctoral dissertation, Technische Universität Berlin. BAM-Dissertationsreihe Band 145. Berlin: Bundesanstalt für Materialforschung und -prüfung (BAM); 2016. https://opus4.kobv.de/opus4-bam/frontdoor/index/index/docId/36090
35. Dudenhoeffer DD, Holcomb DE, Hallbert BP, Wood RT, Bond LJ, Miller DW, O'Hara JM, Quinn EL, Garcia HE, Arndt SA, Naser J. Technology roadmap on instrumentation, control, and human-machine interface to support DOE advanced nuclear energy programs. INL/EXT-06-11862; 2007. https://inldigitallibrary.inl.gov/sites/sti/sti/4511504.pdf
36. Bertovic M. A human factors perspective on the use of automated aids in the evaluation of NDT data. In: AIP conference proceedings, 10 Feb 2016, vol. 1706(1). AIP Publishing LLC. p. 020003. https://aip.scitation.org/doi/abs/10.1063/1.4940449
37. Bainbridge L. Ironies of automation. In: Analysis, design and evaluation of man–machine systems, 1 Jan 1983, p. 129–35. Pergamon. https://www.sciencedirect.com/science/article/pii/B9780080293486500269
38. Sarter NB, Woods DD, Billings CE. Automation surprises. Handbook of human factors and ergonomics. 1997;2:1926–43. 10.1.1.134.7077.
39. Rummel WD. Nondestructive inspection reliability – history, status and future path. In: Proceedings of the 18th world conference on nondestructive testing, Durban, Apr 2010, p. 16–20. https://www.ndt.net/article/wcndt2012/papers/608_wcndtfinal00607.pdf
40. Aldrin JC, Oneida EK, Shell EB, Sabbagh HA, Sabbagh E, Murphy RK, Mazdiyasni S, Lindgren EA, Mooers RD. Model-based probe state estimation and crack inverse methods addressing eddy current probe variability. In: AIP conference proceedings, 16 Feb 2017, vol. 1806(1). AIP Publishing LLC. p. 110013.
41. Aldrin JC, Forsyth DS, Welter JT. Design and demonstration of automated data analysis algorithms for ultrasonic inspection of complex composite panels with bonds. In: AIP conference proceedings, 10 Feb 2016, vol. 1706(1). AIP Publishing LLC. p. 120006. https://aip.scitation.org/doi/abs/10.1063/1.4940591
42. Aldrin JC, Annis C, Sabbagh HA, Lindgren EA. Best practices for evaluating the capability of nondestructive evaluation (NDE) and structural health monitoring (SHM) techniques for damage characterization. In: AIP conference proceedings, 10 Feb 2016, vol. 1706(1). AIP Publishing LLC. p. 200002. https://aip.scitation.org/doi/abs/10.1063/1.4940646
43. Augmented reality workshop, Springfield; 3 Oct 2018. https://ndia.dtic.mil/2018/2018augreality.html
44. Jordon H. AFRL viewing aircraft inspections through the lens of technology. 16 Aug 2018. https://www.wpafb.af.mil/News/Article-Display/Article/1603494/afrl-viewing-aircraft-inspections-through-the-lens-of-technology
45. Masoni R, Ferrise F, Bordegoni M, Gattullo M, Uva AE, Fiorentino M, Carrabba E, Di Donato M. Supporting remote maintenance in industry 4.0 through augmented reality. Procedia Manuf. 2017;11:1296–302. https://www.sciencedirect.com/science/article/pii/S2351978917304651?via%3Dihub
46. Utzig S, Kaps R, Azeem SM, Gerndt A. Augmented reality for remote collaboration in aircraft maintenance tasks. In: 2019 IEEE aerospace conference, 2 Mar 2019, p. 1–10. IEEE. https://ieeexplore.ieee.org/abstract/document/8742228
47. Mourtzis D, Vlachou E, Zogopoulos V, Fotini X. Integrated production and maintenance scheduling through machine monitoring and augmented reality: an Industry 4.0 approach. In: IFIP international conference on advances in production management systems, 3 Sep 2017. Cham: Springer. p. 354–62. https://link.springer.com/chapter/10.1007/978-3-319-66923-6_42
48. Goodrich MA, Schultz AC. Human-robot interaction: a survey. Now Publishers; 2008.
49. Futterlieb M, Frejaville J, Donadio F, Devy M, Larnier S. Air-Cobot: aircraft enhanced inspection by smart and collaborative robot. MCG 2016, Vichy, 5–6 Oct 2016. https://mcg2016.irstea.fr/wp-content/uploads/2017/05/MCG2016_paper_42.pdf
50. Donadio F, Frejaville J, Larnier S, Vetault S. Artificial intelligence and collaborative robot to improve airport operations. In: Online engineering & internet of things. Cham: Springer; 2018. p. 973–86. https://link.springer.com/chapter/10.1007/978-3-319-64352-6_91
51. Donadio F, Frejaville J, Larnier S, Vetault S. Human-robot collaboration to perform aircraft inspection in working environment. In: Proceedings of 5th international conference on machine control and guidance (MCG), Oct 2016.
52. Cramer KE. Current and future needs and research for composite materials NDE. In: Behavior and mechanics of multifunctional materials and composites XII, 22 Mar 2018, vol. 10596. International Society for Optics and Photonics. p. 1059603.
53. Harris DH, Spanner JC. Virtual NDE operator training and qualification. ASME-PUBLICATIONS-PVP. 1998;375:117–24.
54. TraiNDE by Extende, the first virtual mock-up for NDE inspectors. News. NDT.net, Nov 2021. https://www.ndt.net/search/docs.php3?id=25525
55. Nguyen TV, Kamma S, Adari V, Lesthaeghe T, Boehnlein T, Kramb V. Mixed reality system for nondestructive evaluation training. Virtual Real. 2020; p. 1–10. https://link.springer.com/article/10.1007%2Fs10055-020-00483-1
56. Sharp TD, Kesler JM, Liggett UM. Mining inspection data of parts with complex shapes. ASNT fall conference; 2009.
57. Kobryn P, Tuegel E, Zweber J, Kolonay R. Digital thread and twin for systems engineering: EMD to disposal. In: Proceedings of the 55th AIAA aerospace sciences meeting, Grapevine, 9 Jan 2017.
58. Weber WH, Mair HD, Jansen D, Lombardi L. Advances in inspection automation. In: AIP conference proceedings, 25 Jan 2013, vol. 1511(1). American Institute of Physics. p. 1654–61. https://aip.scitation.org/doi/abs/10.1063/1.4789240
59. Gregory E, Lesthaeghe T, Holland S. Toward automated interpretation of integrated information: managing "big data" for NDE. In: AIP conference proceedings, 31 Mar 2015, vol. 1650(1). American Institute of Physics. p. 1893–7. https://aip.scitation.org/doi/abs/10.1063/1.4914815
Artificial Intelligence and NDE Competencies
20
Ramon Salvador Fernandez Orozco, Kimberley Hayes, and Francisco Gayosso
Contents

Introduction: The Human Dimension of NDE ..... 501
NDE as a Professional Path ..... 501
The Traditional Formation of NDE Professionals ..... 505
Differentiating Knowledge, Skills, and Competences ..... 505
Technology Impact ..... 508
Meaning of AI ..... 510
AI and Data Science ..... 514
The Fluid and Ever-Evolving Dilemma of Attempting an AI Definition: The Blind Men and the Elephant ..... 516
A Concise History of AI and the Continuous Pursuit for Smart Nonhuman Assistants ..... 517
The Prehistory of AI ..... 519
Turing Test ..... 520
Early AI Research ..... 520
The Dartmouth Summer Research Conference ..... 521
DARPA and Military Involvement ..... 521
Main AI Initiatives Through Academic Institutions with Government Support – North America (Canada, USA), Europe, Japan, etc. ..... 521
The Impact of AI Developments Through Corporations and SMEs ..... 522
Frontiers and Lines of Exploration ..... 523
The Phased-Out Parallel Advancement with the Health Sector ..... 523
Where We Are Now ..... 523
R. S. Fernandez Orozco (*) Fercon Group, Zapopan, Jalisco, Mexico e-mail: [email protected] K. Hayes Valkim Technologies, LLC, San Antonio, TX, USA e-mail: [email protected] F. Gayosso Crea Codigo, Guadalajara, Mexico e-mail: [email protected] © Springer Nature Switzerland AG 2022 N. Meyendorf et al. (eds.), Handbook of Nondestructive Evaluation 4.0, https://doi.org/10.1007/978-3-030-73206-6_24
499
500
R. S. Fernandez Orozco et al.
AI Initiatives Development Structure Pertinent to NDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . The Landscape of AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Permeating NDE Skills to Digital Assistants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Knowledge, Skills, and Competencies in the NDE 4.0 Era . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . NDE 4.0 Body of Knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Soft, Hard, and Digital Skills . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . New Competencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stable, Redundant/Evolved, and New Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Conclusions – Competition Versus Complementation, a Balanced Pathway Toward the Formation of NDE Professionals in the Future . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Relevant Websites Referenced in the Chapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Cross-References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
533 534 540 541 542 542 545 546 547 547 548 548 548
Abstract
The history of human progress runs parallel with a constant pursuit of augmenting and expanding innate human capacities. For centuries this quest has included the aspiration of creating artificial intelligent assistants, either embodied in physical objects or existing as mere software elements, whose achievements have both advanced and challenged our perception of the notion of intelligence. Artificial Intelligence (AI) technologies are no longer distant, detached research efforts; they have become part of our everyday objects and experiences, shaping our professional, social, and personal environments and cultivating a technology-enabled collective intelligence. Through this chapter we aim to explore the structure of technologies and product categories that constitute the ever dynamic and thriving AI landscape, and their impact on technician engagement through enhanced collaborative competencies. Specifically, AI technologies have contributed to shaping trends related to NDE competencies such as:
1. An unsatisfied demand for highly specialized NDE technicians in specific niches, running parallel with the irruption and democratization of NDE technologies contained in direct-to-consumer products.
2. An increased bi-directional permeability of NDE and digital competencies in the workplace.
3. A decline of aging workforces amidst increased inspection demand, with skilled craft aging out and challenges in backfilling given current generational mindsets.
As we will explore in Chap. 9, NDE 4.0 will imply a profound re-orientation of training and certification efforts, not only for NDE professionals but also for AI assistants with increasingly capable NDE competencies. It will also shape normalization efforts to guide an ordered and natural assimilation of those technologies. Additionally, we aim to provide valuable connections to supplementary resources that may help NDE professionals around the world use AI technologies to expand their pursuit of professional and personal development with increased confidence.
Keywords
Artificial Intelligence · Industry 4.0 · NDE 4.0 · ADR/AI · CADR · Digital assistants · NDE competencies · Digital competencies · Normalization
Introduction: The Human Dimension of NDE

In their first NDE lessons, apprentices are still taught the four indispensable elements of an NDE test system: an energy source, a test object, the interaction between that energy source and the test object, and a recording medium for this interaction. While analyzing how the test system fails in the absence of one or more of those four elements, a fifth presence inevitably emerges: human intervention, often left unaddressed because of its self-evident omnipresence. That human presence now needs to assimilate and capitalize on a rising sixth presence, artificial intelligence (AI), which, if properly assimilated, may be a source of extraordinary opportunities and an unmistakable ally, helping humans unleash their talent and ingenuity to create and deploy the next generation of NDE systems in the coming years.

Trampus, Krstelj, and Nardoni [56] remind us of the fundamental objectives of NDE: "NDE has two fundamental objectives. Its social objective is to save the human and the natural and built environment in case a structure or component fails due to non-detection of a flaw (. . .) The commercial objective of NDE is to optimize the productivity of assets."

This chapter aims to provide a balanced perspective of both the historic development of AI assistants and the impact of AI technologies on the future development of skills and competencies of NDE professionals. A truly profound human dimension of NDE should analyze all instances where humans are both the main actor and the final purpose of the value that NDE processes create. It should also recognize the profound impact of AI technologies, which can provide invaluable assistance to enhance and amplify human capacities, but must also counteract any risks associated with the integration of negative features of human nature such as discrimination and bias. Building this truly human and balanced perspective of AI is what we need and where we should focus our action.
NDE as a Professional Path

The analysis of NDE as a professional path is often centered on the traditional roles of NDE inspectors, their time-honored development path (see Fig. 1), and a series of recommended practices such as ASNT SNT-TC-1A and standards such as ISO 9712 that have allowed NDE apprentices, for many decades, to become highly specialized professionals with open opportunities across an ample diversity of geographies, projects, and industries. This traditional path for the qualification and certification of NDE competencies is entrenched in a sound foundation of ethics, professionalism, and the demonstration of NDE competencies sustained by knowledge, skills, and experience.
Fig. 1 A building blocks diagram for the foundation of NDE competencies. (From Fernandez [14])
But as Bertovic and Virkkunen describe in [2] "NDT 4.0: New Paradigm for the NDT Inspection Personnel," those traditional roles are already being disrupted, and an expanded palette of new roles will surge to enrich the professional paths of NDE-related professions: "The promised autonomy and interconnectedness of NDE 4.0 will supersede the majority of traditional inspector tasks and will in turn require a different set of skills and raise different demands and challenges for the inspection personnel, thus conflicting the current "procedure-following" "level I-III" paradigm."

From this sound foundation, the professional path of NDE professionals offers an unconstrained set of opportunities for both high specialization and professional diversification (see Fig. 2). To capitalize on those opportunities: (1) a focused reinforcement of hard, digital, soft, and management skills is necessary; (2) the integration of vertical, multi-directional development paths is desirable; (3) the amplification of the palette of roles an NDE professional is willing to try is advisable; and (4) the integration of a professional development path supported by a mentoring process should be imperative.

In the following years, the focus and scope of the role of NDE practitioners will be profoundly transformed by the irruption of a portfolio of new technologies, as well as by revised value propositions and business models in several industries. An innovative supplementary set of skills and competencies will be necessary, not only to face the challenges inherent to this transformation process but, more importantly, to capitalize on the new opportunities generated by those transformations.
Fig. 2 An expanded building blocks diagram for the specialization and diversification of NDE competencies. (From Fernandez [14])
Fig. 3 How NDE systems support decision-making processes. (Adapted by the authors from Valeske [57])
Figure 3 shows how a revised model of the NDE system is necessary, one that capitalizes on the four-stage transformation of the data obtained from NDE sensors into actions supported by decision-making processes. This transformation may include the integration of several support technologies such as the Internet of Things, digital twins, big data, or augmented reality, but the confluence of AI assistance technologies will be particularly relevant (Fig. 3).

With data and information at its core to support reliability, an NDE system merges algorithms with a series of intrinsic, application, and human factors, as shown in Fig. 4.
Fig. 4 A conceptual model for the interaction of NDE systems within NDE processes and how these interactions contribute to reliability. (Adapted from Kanzler [23])
These interactions allow NDE processes to transform data and information into knowledge and decisions, influenced by a series of environmental and organizational factors comprised in a specific application of the NDE process for a specific geographical or industrial context.

From Fig. 4 it is evident that algorithms, whether in intangible form embedded in an instruction or a standard, or instilled as electronic code in tailor-made specialized hardware, will remain at the core of any NDE system. This algorithm repository is the natural place for AI algorithms to synergize, with information and data at its core, assisting humans to transform them into knowledge and decisions with the support of an enriched ensemble of competencies and skills across a diversified palette of roles.

The purpose of this chapter is to provide some insights into how AI assistance and NDE competencies, together, will contribute to enrich the social and commercial purpose of NDE and to create, capture, and distribute true value to fuel a profound transformation of NDE systems and processes towards the future.
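To make the data-to-decisions chain described above more tangible, the following minimal sketch illustrates it in code. It is ours, not drawn from any cited NDE system: every name, threshold, and value is hypothetical, and the point is only that the "algorithm repository" slot can host either a hand-written rule or a trained AI model without changing the surrounding process.

```python
from dataclasses import dataclass

@dataclass
class Indication:
    position_mm: float
    amplitude_db: float

def extract_indications(raw_signal, threshold_db=6.0):
    # Data -> information: keep samples exceeding an evaluation threshold.
    return [Indication(pos, amp) for pos, amp in raw_signal if amp >= threshold_db]

def classify(indication, reject_db=12.0):
    # Information -> knowledge: the "algorithm repository" slot; a trained
    # AI model could be substituted here without changing the rest.
    return "reject" if indication.amplitude_db >= reject_db else "record"

def decide(indications):
    # Knowledge -> decision: aggregate indication-level results into an action.
    verdicts = [classify(i) for i in indications]
    return "part rejected" if "reject" in verdicts else "part accepted"

# Illustrative (position_mm, amplitude_db) pairs standing in for sensor data.
raw = [(10.0, 3.2), (25.5, 8.7), (40.1, 14.9)]
print(decide(extract_indications(raw)))  # -> part rejected
```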
The Traditional Formation of NDE Professionals

For their formation, NDE professionals around the world follow clearly established qualification and certification processes. Recommended practices such as ASNT SNT-TC-1A and documented standards such as ANSI/ASNT CP-189 or ISO 9712 detail the training, education, and experience requirements and provide criteria for documenting qualifications and certification. The core of NDE-specific skills is sustained by four sets of hard skills that come from the core of the STEM body of knowledge:
1. Science skills, particularly physics and chemistry
2. Technology skills, particularly focused on the specific technology applicable to the methods and techniques in which the apprentice aims to be certified
3. Engineering skills
4. Mathematics skills

NDE-specific knowledge and skills are taught by adapting, to the requirements of a specific organization, available training outlines or syllabuses such as ANSI/ASNT CP-105 ASNT Standard Topical Outlines for Qualification of Nondestructive Testing Personnel or ISO/TS 25107 Non-destructive testing — NDT training syllabuses. NDE organizations in several regions of the world, with the invaluable work of NDE professionals contributing as volunteers, have created customized training materials for specific methods, techniques, and industries.

Although the traditional formation of NDE professionals has solid and tested foundations, specific challenges remain to be formally addressed from an industry perspective. These include standardized training in the supplementary hard, soft, managerial, and digital skills that NDE professionals require every day and, as a major one, the digital transformation of education models in general and of the formation process of NDE professionals in particular, to unleash the full potential of NDE 4.0 developments.
Differentiating Knowledge, Skills, and Competences

As shown in Fig. 1, the notions of knowledge, skills, and competencies are intimately bound up with the formation of NDE professionals through time and constitute, together with ethics and professionalism, the foundation of the value that NDE provides to the world.

As explained before (Figs. 3 and 4), through information management processes and standards, the raw data generated when materials interact with energy sources, detected by sensors in an NDE system, are transformed into information and knowledge to support decision-making processes. There are, however, intrinsic limitations to the amount of knowledge an NDE process is able to generate, as shown in Fig. 5.
Fig. 5 A funnel model for achievable knowledge in the universe. (From Grewal [17])
Nevertheless, NDE systems and processes strive to make the best use of resources and technology to transform the data obtained into useful and reliable knowledge that supports decision processes. But first we should have a clearer picture of what knowledge really is.

Bolsani [4] establishes that "knowledge is a construct similar to the white light which can be decomposed in monochromatic lights when passing through a prism. That means that knowledge is an integrative concept containing rational, emotional, and spiritual knowledge." To provide some elements of structure, Bolsani [4] offers a brief taxonomy of knowledge: "There are three kinds of knowledge: (a) experiential knowledge; (b) skills; and (c) knowledge claims." They are interconnected, but have some specific features of their own:

Experiential knowledge is what we get from the direct connection with the environment, through our sensory system, and then it is processed by the brain. For instance, if we want to know what snow is then we must go where there is snow and touch it, smell it, taste it and play with it. We cannot get that knowledge only from books or seeing some movies with people enjoying winter sports in beautiful mountain areas. People living in geographical zones where there is never snow have real difficulties knowing what snow is. They lack the experiential knowledge about snow. Experiential knowledge is personal since it can be acquired only through direct interface of our sensory system and then processed by our brain. It is essentially based on perception and reflection. Several people having together the same experience may acquire different experiential knowledge since reflecting upon a living experience means actually integrating it in some previous similar experiences and knowledge structures, if they do exist. As we will show later, experiential knowledge can be seen as created by a powerful interaction between emotional, rational and spiritual knowledge since it is a result of the whole body and mind active participation.

Skills means knowledge about how to do something (know-how). It is based on experiential knowledge but it is a well-structured and action oriented knowledge we get by performing repeatedly a certain task and learning by doing it. This is the way of learning swimming, biking, skiing, playing piano or doing many other similar activities.
Fig. 6 From skills to competencies. (From Chouhan and Srivastava [6])
swimming, biking, skiing, playing piano or doing many other similar activities. It is like learning unconsciously to perform a certain procedure or to follow a given algorithm. We don’t learn swimming by reading in a book about fluid mechanics and objects floating. We have to learn by doing it with the whole body and reflecting upon it to improve coordination between breathing and moving our arms. Know-how knowledge is often called procedural knowledge since it is about performing a task in concordance with a given procedure or algorithm. We discussed about some skills associated to physical activities but they can be developed for any kind of task or activities, including thinking processes. For instance, thinking skills are extremely important for knowledge workers and decision makers. Knowledge claims are what we know, or we think we know. We don’t know how much we know since knowledge means both explicit knowledge and tacit knowledge, which means experience existing in our unconscious zone and manifesting especially as intuition. Explicit knowledge is something we learn in schools and reading books, or just listening to some professors or conference speakers. Knowledge claim is what we frame in an explicit way by using a natural or symbolic language. Thus, language is an essential component of the transforming our emotional and spiritual experience into rational or explicit knowledge. With explicit knowledge we are entering the zone of exchange between personal and shared knowledge.
Knowledge and skills, enhanced by experience, are at the core of the formation of NDE professionals, but there is a higher level of integration that is relevant for the purpose of this chapter: the concept of a competency. Chouhan and Srivastava define it as "the capability of applying or using knowledge, skills, abilities, behaviors, and personal characteristics to successfully perform critical work tasks, specific functions, or operate in a given role or position. Competencies are thus underlying characteristics of people that indicate ways of behaving or thinking, which generalizes across a wide range of situations and endure for long periods of time."

Chouhan and Srivastava [6] deconstruct this notion into five major components of competency (see Fig. 6):

1. Knowledge: This refers to information and learning resting in a person, such as a surgeon's knowledge of human anatomy or an NDE UT practitioner's knowledge of acoustics, material sciences, and mathematics.
2. Skill: This refers to a person's ability to perform a certain task, such as a surgeon's skill to perform a surgery or an NDE UT practitioner's skill to perform a phased array inspection within a pressure vessel.
3. Self-concepts and values: This refers to a person's attitudes, values, and self-image. An example is self-confidence, a person's belief that he or she can be successful in a given situation, such as a surgeon's self-confidence in carrying out a complex surgery.
4. Traits: These refer to physical characteristics and consistent responses to situations or information. Good eyesight is a necessary trait for surgeons; working at height on a building under construction may demand a corresponding trait of UT inspectors; and self-control, the ability to remain calm under stress, is necessary for both.
5. Motives: These are emotions, desires, physiological needs, or similar impulses that prompt action. For example, both surgeons and NDE professionals with high interpersonal orientation take personal responsibility for working well with other members of the team. Motives and traits may be termed initiators of what people will do on the job without close supervision.

As shown in Fig. 6, the result of critical behavior is higher performance. The level of performance (low, moderate, or high) is always determined by the level of knowledge, skill, and attitude.

Although NDE formation literature often centers on knowledge, skills, and experience, in the transformation implied by NDE 4.0 the formation process of NDE professionals should not only embrace the integrative notion of competencies in its favor; this same notion may also migrate to the acquisition of competencies by AI assistants. The focus on competency integration will become critical, contributing not only to the performance demanded of NDE professionals at present but also to the transformation of NDE itself towards the future.
Technology Impact

Industry 4.0 is compelling a revision of the roles of NDE practitioners, trainers, and mentors, who are immersed not only in the dynamics of exponentially accelerating technological advancements in digitalization, automation, and communication processes but also in profound social, environmental, and cultural transformations. NDE itself is transitioning from a niche role as a quality-control support instrument to an invaluable knowledge-generating process that creates value through substantial improvements in business sustainability, quality, and safety. This profound transformation is shaping how Education 4.0 for Industry 4.0 will deliver disruptive modifications in education models, training processes, certification schemes, and support tools.

A tailor-made technology portfolio will be required to shape specific NDE 4.0 solutions.
The NDE 4.0 Deployment Management Guidance contains a specific annexure devoted to a generalized portfolio of relevant technologies for NDE 4.0, which can be used to assemble a personalized portfolio for specific industries, geographic regions, organizations, or projects. The guidance also contains at its core six sets of conceptual frameworks that may be translated into sets of specific competencies that NDE professionals should consider reviewing in order to transform the palette of roles they are able to perform (Fig. 7).

The way ahead for NDE 4.0 is to promote the consolidation of wider feedback loops (see Fig. 8) within NDE 4.0 developments, based on both a life-cycle perspective of assets and a truly collaborative value-chain ecosystem perspective.
Fig. 7 Conceptual frameworks that constitute the core of the construction of NDE 4.0 roadmap initiatives, from the NDE 4.0 Deployment Management Guidance
Fig. 8 NDE 4.0 stakeholder collaboration integration and the need for a wider network of feedback loops. (From Vrana and Singh [59, 60])
In this collaborative space, AI assistants will play increasingly important roles, providing invaluable support to enhance and enrich human NDE competencies in order to achieve the purpose of NDE 4.0: to generate a profound positive impact around the world related to (a) the safety, quality, and integrity of assets; (b) the sustainability of companies, organizations, and ecosystems; (c) the creation of professional and personal advancement opportunities; and (d) the generation and distribution of accumulated knowledge that contributes to improving the design, production, and sustainment of assets.
Meaning of AI “Viewed narrowly, there seem to be almost as many definitions of intelligence as there were experts asked to define it.” — R. J. Sternberg.
Through the collection of definitions included in Table 1, we aim to (1) transmit, following Sternberg's quote, the complexity of arriving at a single definition of intelligence, (2) move from general notions to definitions closer to the AI field, (3) move from a human-centric approach to wider perspectives of intelligence for nonhumans, and (4) provide a foundation to help us define what AI means (Fig. 9).

One must define the root of intelligence before assessing artificial replication. Lee [25] establishes that "Intelligence has been defined as the ability to solve complex problems or make decisions with outcomes benefiting the actor and has evolved in lifeforms to adapt to diverse environment for their survival and reproduction." Lee unfolds the human aspects of the development of intelligence, but do we need to reserve the term intelligence only for humans? The parallels in describing human intelligence weave into the AI framework.
Table 1 The diversity of approaches to define intelligence. All quotes in the table were obtained from Legg [26]

Psychology:
"The capacity to acquire and apply knowledge." (The American Heritage Dictionary)
"...ability to adapt effectively to the environment, either by making a change in oneself or by changing the environment or finding a new one . . . intelligence is not a single mental process, but rather a combination of many mental processes directed toward effective adaptation to the environment." (Encyclopedia Britannica)
"the ability to learn or understand or to deal with new or trying situations: . . . the skilled use of reason (2): the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests)" (Merriam-Webster Online Dictionary)
"The ability to use memory, knowledge, experience, understanding, reasoning, imagination and judgement in order to solve problems and adapt to new situations." (All Words Dictionary, 2006)
"The ability to learn, understand and make judgments or have opinions that are based on reason" (Cambridge Advanced Learner's Dictionary, 2006)
"The ability to learn facts and skills and apply them, especially when this ability is highly developed." (Encarta World English Dictionary, 2006)
"the general mental ability involved in calculating, reasoning, perceiving relationships and analogies, learning quickly, storing and retrieving information, using language fluently, classifying, generalizing, and adjusting to new situations." (Columbia Encyclopedia, sixth edition, 2006)
"Capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc." (Random House Unabridged Dictionary, 2006)
"Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought." (American Psychological Association)
"Intelligence is not a single, unitary ability, but rather a composite of several functions. The term denotes that combination of abilities required for survival and advancement within a particular culture." (A. Anastasi)
"Intelligence is assimilation to the extent that it incorporates all the given data of experience within its framework . . . There can be no doubt either, that mental life is also accommodation to the environment. Assimilation can never be pure because by incorporating new elements into its earlier schemata the intelligence constantly modifies the latter in order to adjust them to new elements." (J. Piaget)
"A biological mechanism by which the effects of a complexity of stimuli are brought together and given a somewhat unified effect in behavior." (J. Peterson)
"...the ability to undertake activities that are characterized by (1) difficulty, (2) complexity, (3) abstractness, (4) economy, (5) adaptedness to goal, (6) social value, and (7) the emergence of originals, and to maintain such activities under conditions that demand a concentration of energy and a resistance to emotional forces." (Stoddard)
"The ability to carry on abstract thinking." (L. M. Terman)
"The capacity to acquire capacity." (H. Woodrow)

Provided by AI researchers:
"Any system . . . that generates adaptive behaviour to meet goals in a range of environments can be said to be intelligent." (D. Fogel)
"Achieving complex goals in complex environments" (B. Goertzel)
"Intelligence is the power to rapidly find an adequate solution in what appears a priori (to observers) to be an immense search space." (D. Lenat and E. Feigenbaum)
"Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines." (J. McCarthy)
"...the ability to solve hard problems." (M. Minsky)
The human brain is the command center, networking through the nervous system, where coded exchanges occur at the synapses to invoke responses and actions. The transmission and reception of these signals in this neural network constitute data exchanges that invoke an action or response. This processing arrangement is similar to the artificial neural networks (ANN) of AI, where input is processed to produce a specific output. Dicke and Roth [10] argue that it is not the size of the brain in ratio to the body in mammals that determines intelligence, but the combination of the "number of cortical neurons, neuron packing density, interneural distance and axonal conduction velocity." These factors determine general information processing capacity (IPC).

Many theories of intelligence exist. Hofstadter [20] provides a list of essential abilities for intelligence: (1) to respond to situations very flexibly, (2) to make sense out of ambiguous or contradictory messages, (3) to recognize the relative importance of different elements of a situation,
Fig. 9 Ionic receptors decode chemical signals into electrical responses to transmit information across the nervous system [Public Domain]
(4) to find similarities between situations despite differences which may separate them, and (5) to draw distinctions between situations despite similarities which may link them. Further classifications, as established at web.cortland.edu [60], include naturalistic, musical, logical-mathematical, existential, interpersonal, intrapersonal, bodily-kinesthetic, verbal-linguistic, and visual-spatial intelligences, possessed in varying degrees by individuals. Similar segmentations fit synthetic or artificial capacities, as algorithms process masses of data and deduce hidden mathematical correlations and relationships.

Artificial Intelligence (AI) is generally defined as a computer or robot performing tasks commonly associated with intelligent beings. AI fosters the ability to leverage the inherent strengths of solutions such as computers to maximize intuitive skills in collaboration with the human factor. General AI refers to systems that can solve many different types of problems on their own, whereas narrow AI refers to machine-based systems that solve specific problems or tasks, like playing chess. The layers of capability can be stated in general terms: AI is when computers do things that escalate or augment human intelligence; machine learning (ML) provides the rapid auto-construction of algorithms from data; neural networks are powerful forms of ML; and deep learning (DL) leverages many layers of neural networks.

Although activities and research in AI have been ongoing since the 1950s with Turing, the recent escalation of AI in the commercial landscape is largely attributed to the benefits of Moore's law, whereby computing capability doubles every 2 years. This processing capacity has made it possible to compute significant amounts of data rapidly, delivering increased capabilities and new development and implementation paths.
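As a hedged illustration of this layering from hand-written algorithms to models learned from data (cf. Fig. 11 below), the following sketch of ours contrasts a hand-coded rule with a single artificial neuron, a perceptron, that constructs an equivalent rule from labeled examples. The data, threshold, and learning settings are synthetic and purely illustrative.

```python
import numpy as np

def hand_coded_rule(x):
    # A human studied the data and wrote this threshold by hand.
    return 1 if x[0] + x[1] > 1.0 else 0

# Machine learning: a single artificial neuron (perceptron) adjusts its
# weights from labeled examples until it reproduces the same rule.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # ground-truth labels

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):  # classic perceptron learning rule
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += lr * (yi - pred) * xi  # nudge weights toward the correct output
        b += lr * (yi - pred)

sample = np.array([0.8, 0.4])
print(hand_coded_rule(sample), int(w @ sample + b > 0))  # typically: 1 1
```

Stacking many such neurons in layers yields the neural networks mentioned above; stacking many layers yields deep learning.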
AI and Data Science

Since the advent of the computer in the 1940s, computations that were previously mathematically latent have been leveraged ever more efficiently. As processing capabilities have advanced exponentially through technological developments, so have their uses. Given the commingled interaction with data science, understanding the nuances in this space requires a unique set of definitions. Broad framework terms start with the overarching Artificial Intelligence umbrella, previously defined as "a computer or robot performing tasks commonly associated with intelligent beings." A subset of AI is machine learning (ML), a discipline that tries to design, understand, and use computer programs that "learn" from experience (data) for the purpose of modeling, prediction, or control, through the rapid auto-construction of algorithms from data. Neural networks (NN) are powerful forms of ML, and the powerful utilization of layered NNs is defined as deep learning (DL) (Fig. 10).

As more and more data are generated and processed, the importance of extracting relevance grows, and the adjacent field of data science, which often existed in the background, now takes a more important position. The domain knowledge of the technician's method and examination expertise is amalgamated with computer science to extract essential information while removing noise; machine learning is leveraged to process the generated information; and mathematical and statistical deductions converge with research and analysis of the specific domain. Data science is the hub where these converge. Therein lies the power of collaborative AI.

Supporting the "learning" process of machines by means of supervised, reinforced, and unsupervised learning, as described in Delua [9], plays a key role in cultivating a powerful tool attuned to the nuances of the inspection process. Supervised learning relies on labeled datasets to train algorithms to classify data or predict outcomes accurately.
Fig. 10 Convergence of the artificial intelligence domain and data science. (Author adapted from public domain)
Unsupervised learning uses ML algorithms to analyze and cluster unlabeled data sets to discover hidden patterns. Three main tasks are leveraged: clustering, association, and dimensionality reduction. The role of AI will follow a migration path as confidence and exposure increase; validation metrics can instill the confidence needed for increased utilization and expanded deployment.

Vrana and Singh, in Fig. 11, present in a straightforward fashion the interrelation between algorithms, AI, machine learning, and deep learning. Corea creates in Fig. 12 an artificial intelligence knowledge map (AIKM) that plots a series of AI developments, or technoscientific branches, in a bidimensional field composed of a sequence of AI paradigms (symbolic, statistical, and subsymbolic) on the horizontal axis and a sequence of AI problem dominions (perception, reasoning, knowledge, planning, and communication) on the vertical axis. This AIKM provides a valuable perspective on how those AI developments may either have general application across several AI paradigms (and generate several subtypes of AI developments) or have a narrower application scope tied to specific AI paradigms while remaining relevant across several AI problem dominions.
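To make the supervised/unsupervised distinction above concrete, the sketch below (ours; the two features and all numbers are synthetic stand-ins, not real inspection data) trains a supervised classifier on labeled indications and then lets an unsupervised clustering algorithm rediscover the same two groups without labels. The classification report prints the kind of validation metrics the text mentions as a basis for confidence.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for inspection data: two hypothetical features per
# indication (e.g., normalized echo amplitude and spatial extent);
# label 0 = geometric indication, 1 = flaw.
rng = np.random.default_rng(1)
geometry = rng.normal([0.3, 0.2], 0.08, size=(150, 2))
flaws = rng.normal([0.7, 0.6], 0.08, size=(150, 2))
X = np.vstack([geometry, flaws])
y = np.array([0] * 150 + [1] * 150)

# Supervised learning: labeled examples train a classifier to predict outcomes.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = LogisticRegression().fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))  # validation metrics

# Unsupervised learning: no labels; clustering discovers the structure itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)
print(np.bincount(clusters))  # two clusters recovered without any labels
```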
Fig. 11 The path from algorithms to deep learning. (From Vrana and Singh [58])
Fig. 12 Artificial Intelligence Knowledge Map (AIKM). (From Corea [7])
The Fluid and Ever-Evolving Dilemma of Attempting an AI Definition: The Blind Men and the Elephant

The rate at which industry shapeshifts through defining capabilities, understanding use cases, strategic planning, deployment, adaptation, and redefinition will keep evolving for years to come. Any assessment of the current landscape should allow for a fluid morphology as solutions are validated over time, with new perspectives and augmented skillsets adapting to the power of leveraging the magnitudes of data currently underutilized.

Grewal [17] describes Artificial Intelligence as a field that "was founded on the claim that a central property of human beings, intelligence, can be so precisely described that it can be simulated by a machine." Aligned with this notion, Table 2 provides a series of definitions of AI from a range of perspectives and approaches. The list is not exhaustive; it aims to convey the complexity of grasping a comprehensive definition of AI in a single paragraph and to encourage readers to build their own mental map of AI from those perspectives and the key ideas within the definitions (Fig. 13 and Table 2).

Perspective plays a key role in defining the totality of AI. The parable of the blind men and the elephant offers an apt illustration of the evolving convergence toward a cohesive definition, as the utilization of AI in the NDE space is being filtered through many perspectives.
Fig. 13 Blind men (here, monks) examining an elephant, by Japanese painter, poet, and calligrapher Hanabusa Itcho (1652–1724) [Public Domain]
The parable is said to have originated in India, where a group of blind men heard an elephant was going to pass by, and each approached the beast at a different point. One, at the side, declared the elephant to be a wall; another, at the tusk, perceived it as a large sharp object; another, at the trunk, depicted it as a snake. Yet another, at the leg, declared all prior perspectives wrong because the elephant is like a tree, while yet another, at the ear, insisted it is like a fan. The value of the holistic view will emerge in the NDE space as deployments grow in adoption.

The utilization of AI is not necessarily a replacement for the technician but an augmenting complement. There are many areas of deployment beyond detection, characterization, and dispositioning: heavily data-intensive work and computational redundancies present an ideal placement for these algorithmic aides.
A Concise History of AI and the Continuous Pursuit for Smart Nonhuman Assistants After revising the complexities associated to attempting a definition of AI and before exploring the realm of the rich opportunities the field offers, it will be important to spend a little time revising where we have come from to understand where we are heading. It is not the purpose of this chapter to provide an extensive history of AI and all related approaches but to center in the pursuit of smart non-human assistants as is
Table 2 A compilation of definitions for artificial intelligence. All quotes in the table were obtained from Grewal [17]

AI basic notions:
"the science and engineering of making intelligent machines." (J. McCarthy, 1955)
"Artificial intelligence is the study of ideas to bring into being machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention. Each such machine should engage in critical appraisal and selection of differing opinions within itself. Produced by human skill and labor, these machines should conduct themselves in agreement with life, spirit and sensitivity, though in reality, they are imitations" (M. Minsky, 1967)

Intelligent agents:
"the study and design of intelligent agents." (Poole et al., 1998)
". . . An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success" (Russell & Norvig, 2003)

Non-naturally occurring systems:
"attempting to build artificial systems that will perform better on tasks that humans currently do better" (Rich & Knight, 2004)

Ideas in machines:
"The ability of a machine to learn from experience and perform tasks normally attributed to human intelligence, for example, problem solving, reasoning, and understanding natural language" (www.quantum3.co.za)
"Tools that exhibit human intelligence and behavior including self-learning robots, expert systems, voice recognition, natural and automated" (www.unesco.org)

Process of simulation:
"information processing by mimicking or simulation of the cerebral, nervous or cognitive processes" (www.gbc.hu)

Computers as AI instruments:
"is the study of how to make computers do things which, at the moment, people do better" (Rich & Knight, 2004)
"Applies to a computer system that is able to operate in a manner similar to that of human intelligence; that is, it can understand natural language and is capable of solving problems, learning, adapting, recognizing, classifying, self-improvement, and reasoning" (www.quantum3.co.za)

AI as a branch of computer science:
"is the branch of computer science that attempts to approximate the results of human reasoning by organizing and manipulating factual and heuristic knowledge. Areas of AI activity include expert systems, natural language understanding, speech recognition, vision, and robotics" (www.its.bldrdoc.gov)
"Is a branch of Computer Science concerned with the study and creation of Computer systems that exhibit some form of intelligence, systems that learn new concepts and tasks, systems that can reason and draw useful conclusions about the world around us, systems that can understand a natural language or perceive and comprehend a visual scene and systems that perform other types of feats that require human types of Intelligence" (Patterson, 2004)

AI as programming and computation:
"The field of computer science dedicated to producing programs that attempt to mimic the processes of the human brain" (www.optionetics.com)
"The branch of computer science that deals with writing computer programs that can solve problems creatively" (wordnet.princeton.edu)
"The study of the computations that make it possible to perceive, reason and act" (Winston, 1999)
"The concept that computers can be programmed to assume capabilities such as learning, reasoning, adaptation, and self-correction" (www.atlab.com)

Connection with human traits:
"The use of programs to enable machines to perform tasks which humans perform using their intelligence. Earlier AI avoided human psychological models, but this orientation has been altered by the development of connectionism, which is based on theories of how the brain works" (www.filosofia.net)
"The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects" (Russell & Norvig, 2003)

Hardware-software integration:
"Artificial Intelligence (AI) refers to systems that display intelligent behavior by analyzing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications)." (European Commission, 2018)
That pursuit is relevant to the discussion of skills and competencies, including those related to NDT, in the final part of the chapter, and to understanding why smart nonhuman assistants permeated with NDE competencies will be important in the future.
The Prehistory of AI

Mishkoff [30] describes: "Long before there was a field called artificial intelligence—long before there were computers, or even a knowledge of electronics—
people were irresistibly drawn to the idea of creating intelligence outside the human body." He also offers several examples dating back all the way to Greek mythology: "Hephaestus, son of Hera, seems to have fashioned human-like creations regularly in his forge; and Talos, one of Hephaestus' bronze men, guarded and defended Crete. Disenchanted with human women, Pygmalion made his own woman out of ivory; and Aphrodite brought Galatea, this man-made woman, to life. Daedalus, most famous for his artificial wings, also created artificial people. In medieval Europe, Pope Sylvester II is credited with building a talking head with a limited vocabulary and a knack for prognostication — Sylvester would ask it a simple question about the future, and the artificial head would answer yes or no. Arab astrologers are said to have constructed a thinking machine called the zairja; the missionary Ramon Lull answered with a Christian adaptation, the Ars Magna. In the early sixteenth century, Paracelsus, a prominent physician, claimed to have invented a homunculus, a little man. "We shall be like gods," he wrote enthusiastically. "We shall duplicate God's greatest miracle—the creation of man." If he was successful, he must not have been much of a businessman — Paracelsus died a pauper. Later in the sixteenth century, the Czech rabbi Judah ben Loew is reported to have sculpted a living clay man, Joseph Golem, to protect the Jews of Prague (golem has become a synonym for an artificial human)."
Turing Test

In 1950, Alan Turing, a British mathematician, published "Computing Machinery and Intelligence," in which he posed the question of whether machines can think and proposed an ultimate experiment. He called his thought experiment the Imitation Game, a means to assess whether a machine has achieved equivalence to human intelligence. In the original game, an interrogator communicates through text with a man, who tries to pass as a woman, and a woman, who tries to convince the interrogator that she is the woman; Turing then proposed replacing one participant with a computer and judging how often the machine matches human performance at the game. The concept expanded on the vision Ada Lovelace had articulated over one hundred years earlier [48]: more than a manipulator of massive amounts of numbers, the machine could be a general-purpose device representing numbers as abstract items. Her vision was revolutionary for its time, although she limited the machine's capabilities to what was programmed, while Turing saw the expanded potential of a thinking machine.
Early AI Research

The ancient Greeks had myths about robots, and Chinese and Egyptian engineers built automatons; in them one can see the early traces of artificial intelligence. Anyoha [1] describes that "In the first half of the 20th century, science fiction familiarized the
world with the concept of artificially intelligent robots. It began with the “heartless” Tin man from the Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis. By the 1950s, we had a generation of scientists, mathematicians, and philosophers with the concept of artificial intelligence (or AI) culturally assimilated in their minds.”
The Dartmouth Summer Research Conference

The computational evolution continued to press forward from Turing: in 1956, a call to action was authored by J. McCarthy from Dartmouth, M. L. Minsky from Harvard, N. Rochester from IBM, and C. E. Shannon from Bell Telephone [29]. The challenge was a two-month, ten-man study of AI, under the premise that learning and intelligence can be described so precisely that a computer can simulate them. The attempt was to make machines use language, form abstractions and concepts, solve kinds of problems then reserved for humans, and improve themselves. MIT cognitive scientist Marvin Minsky and others who attended the conference (see Lewin [27]) were extremely optimistic about AI's future. "Within a generation [...] the problem of creating 'artificial intelligence' will substantially be solved," Minsky is quoted as saying in the book "AI: The Tumultuous History of the Search for Artificial Intelligence."
DARPA and Military Involvement

The Defense Advanced Research Projects Agency (DARPA), a branch of the United States Department of Defense, stimulates innovation and contributed to the non-military precursor of the internet [8]. It also radically changed the scale of research through the Information Processing Techniques Office (IPTO) in the early 1960s, converging many small projects into large-scale programs, and remained a significant contributor for the next 30 years. DARPA's foundational support for the furtherance of AI was early and has been sustained through current development efforts.
Main AI Initiatives Through Academic Institutions with Government Support – North America (Canada, USA), Europe, Japan, etc.

DARPA funds supported programs like the Stanford Artificial Intelligence Laboratory (SAIL), putting it in the trio of leading research institutions alongside MIT and Carnegie Tech [34]. Other programs established through DARPA include the Speech Understanding Research (SUR) program, a project initiated through IPTO in the early 1970s with the goal of a system able to handle ten thousand English words spoken by anyone. Amidst some challenges, the program was reengaged in the early 1990s.
The Impact of AI Developments Through Corporations and SMEs

As AI fields enrich and diversify the landscape of global business, amid the imposing impacts of the global pandemic, opportunities have been enhanced for agile innovators to compete in new ways. The convergence of domain experts and data scientists from other industries and academia can produce game-changing solutions that leverage computational power and AI. Srivastav [53], in his article "How Is Artificial Intelligence Revolutionizing Small Businesses?", gauges the positive perception of AI's impact in business: "According to a survey of CEOs for small and medium-sized business, 29.5% of CEOs have spoken in favour of AI technology and its various benefits on business." Technology is scaling and seeking deployment opportunities and applications; the value for entrepreneurs is to liaise between the nuances and needs of NDT and the various solutions. Small companies can move quickly and do not require as much upfront valuation and validation.

Spartaro [52] highlights the accelerated pace of technological development in the field through a quotation from Satya Nadella, Microsoft's CEO, who in April of 2020 stated that "We have seen two years' worth of digital transformation in two months." Companies appear to be answering the call for drastic change and assessing the value digitization has presented during the imposing events caused by the global pandemic. The abrupt global turn forced long roadmaps to compress and alternatives to be sought.
Generic Developments (Google, IBM, Tesla, etc.)
Major players like Google, Amazon, Microsoft, Apple, and Facebook are presenting many open-source projects, adaptable to individual needs, for training various algorithms. This access to massive amounts of data allows technologists from all parts of industry and academia to contribute and compete in algorithm development. Using Natural Language Processing (NLP) minimizes the barrier to engagement, as the systems adapt to their audience. It is noted that Amazon Alexa now has 100,000 "skills" and Google Assistant has more than one million capabilities, as these companies seek to carve out space for educational developments. In 2016, Google launched Crowdsource, which economizes collective contributions through tasks such as label verification, sentiment evaluation, handwriting recognition, and translation, thus facilitating the refinement of its machine learning. By October 2019, according to Sarin, Pipatsrisawat, Pham, Batra, and Valente [46], Google had attracted three million global users representing 190 countries, who completed 300 million tasks with one million images provided. Data is the fuel for the perpetuation of advanced development.

Specialized Developments (Through SMEs): NDE May Have a Narrow AI Scope
Leveraging the successful developments in the medical community benefits the NDE community, as the principal technologies of radiography and ultrasound are prevalent in both fields for non-intrusive detection.
The medical industry presents an easier use case for initial adoption: the human body's scope is comparatively narrow, in that roughly 60% of the body is water and target inspections focus on commonly defined locations. Industrial NDE does not share that simplicity. The availability of well-defined, large amounts of data for individual objects is relatively limited to production lines, which also deploy consistent positioning with robotics.
Frontiers and Lines of Exploration

Although the frontiers and lines of exploration in the AI field are advancing at an exponentially accelerating pace in every field of human experience, the authors aim to provide throughout the chapter a series of references and links to a very diverse set of resources, observatories, and platforms that may allow readers to continue, on their own, the exploration of the frontiers of AI with an improved perspective of the landscape, an improved toolbox, and an improved compass.
The Phased-Out Parallel Advancement with the Health Sector

Historically, specific technological advances that become commonplace in the NDE field, such as digital radiography or phased array ultrasonics, originated in other sectors, particularly the defense industry and the health sector. This phased-out parallel advancement with the health sector is also evident in AI technologies. In the late 1950s, medical and healthcare researchers started prototyping computer-aided systems focused on decision-making processes for diagnostic applications in medicine. Years later, those initiatives generated some of the first commercially successful AI products in the form of expert systems such as MYCIN or CADUCEUS. It is advisable that NDE professionals monitor technological advancements in the health sector related to technologies shared with the NDE field, looking for potential opportunities of migration and deployment and letting them guide the supplementary skills and competencies that will be required.
Where We Are Now

Although AI research and competency development are both dynamic fields, we aim for the fundamental content of this chapter to remain stable through time by referencing key ideas, insights, and conceptual frameworks rather than listing the latest developments, which quickly become obsolete. To help our readers connect with recent developments, we opted to distribute throughout the chapter links to relevant sources of information on the subjects addressed, where updated information on significant developments can be obtained.

While writing and publishing this chapter, it was interesting to witness the blend between legacy processes, such as having chapters like this printed on paper (somehow frozen in time), and the intrinsically exponential and ever-evolving nature of
digital media, processes, and systems, where published materials may be updated instantaneously to reflect the state-of-the-art knowledge of the moment. The next elements, more than a picture of the current status of things, constitute a series of open doors through which the reader can browse and glimpse the state of affairs in relevant subjects.
AI Awareness in a Work Environment
In order to structure user-awareness issues in technology-mediated environments, we should consider at least three interrelated awareness processes: situation awareness, automation awareness, and AI awareness (see Fig. 14). Situation awareness may be defined as the perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status into the near future. Automation awareness may be considered the part of situation awareness that focuses specifically on automation: comprehending its status and its meaning for system behavior, as well as projecting its future status and meaning. AI awareness in a work environment represents the worker's perception of the current decisions made by the AI system, his or her comprehension of those decisions, and his or her estimate of the decisions the AI system will make in the future.

Figure 14, adapted from Karvonen, Heikkilä, and Wahlström [24], identifies three key concepts related to AI awareness:

• Transparency, which refers to how transparent the functioning of the AI is, often discussed under the label of explainable AI.
• Communication, which refers to (1) the way in which the AI system communicates its functioning, intentions, capabilities, and limitations to the user and (2) the possibilities for the AI to understand human communication.
• Appropriate trust, which refers to the aim that the human operator can trust the AI at an appropriate level, based on knowledge about (1) the capabilities of the AI-based system, (2) the quality and relevance of the data used, and (3) whether the system has learned the required skill without becoming biased. In order to calibrate trust to an adequate level, the user has to have a clear idea of the capabilities of the algorithm as well as the kind of data used to train it. This notion should also include mitigating the problematic effects of overtrust or distrust in automation.
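One lightweight way to support such transparency and trust calibration is for an AI assistant to expose structured metadata about its scope and validation, in the spirit of published "model card" practice. The sketch below is ours and entirely hypothetical: every field name and value is a placeholder, not a standard NDE interface or real performance figure.

```python
# Hypothetical "model card" metadata an AI inspection assistant could expose.
model_card = {
    "model": "weld-adr-classifier",           # illustrative name
    "version": "2.3.0",
    "intended_use": "Assistive flaw screening for UT weld scans; "
                    "not a replacement for certified personnel review.",
    "training_data": "labeled A-scans, carbon steel welds (illustrative)",
    "known_limitations": [
        "not validated for austenitic or dissimilar-metal welds",
        "performance degrades at low signal-to-noise ratios",
    ],
    "validation": {"recall_flaw": 0.97, "precision_flaw": 0.88},  # placeholders
}

def within_stated_scope(card, material: str, snr_db: float) -> bool:
    # Crude applicability gate derived from the card's stated limitations:
    # flag out-of-scope use before trusting the model's output.
    if material != "carbon steel":
        return False                  # outside the stated training data
    if snr_db < 40:                   # hypothetical noise floor
        return False
    return card["validation"]["recall_flaw"] >= 0.95  # minimum acceptable recall

print(within_stated_scope(model_card, "carbon steel", 46))  # True: in scope
```

Whatever the concrete format, the point is that calibrated trust presupposes that this kind of information is available to the inspector at the moment of use.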
AI Ethics and Security
Human talent, ethics, competencies, and commitment are at the core of the safety and value creation that NDE processes provide to our world. Their impact comprises our infrastructure, transportation equipment, installations, facilities, habitable spaces, and even many everyday objects whose safety and value are intrinsically connected to a human perspective of NDE, where ethical implications constitute an unavoidable subject to be addressed, regardless of the “industrial age” we are analyzing. In parallel, constraints in knowledge, skills, acquired experience, ethical behavior, or motivation impair the safety and the value that NDE processes create.
Fig. 14 The staged circles for situation awareness, automation awareness, and artificial intelligence awareness, adapted from Karvonen, Heikkilä, and Wahlström [24]. The codes in the diagram represent 1A: perception of the current situation, 1B: comprehension of the current situation, and 1C: projection of the future situation; 2A: perception of the current status of the automation, 2B: comprehension of the current status of the automation, and 2C: projection of the future status of the automation; 3A: perception of the current AI-based decision(s) by the system, 3B: comprehension of the AI-based decision(s) by the system and their basis, and 3C: projection of the system's AI-based decision(s) in the future and their basis
Bird, Fox-Skelly, Jenner, Larbey, Weitkamp, and Winfield [3] identified for the European Parliament six areas of ethical and moral issues associated with the development and implementation of AI (Fig. 15). Those six areas of concern, which overall span impacts ranging from individuals to humankind as a whole, will also need to be addressed specifically within the scope of the NDE ecosystem. For an insightful analysis of the ethical implications of several technological developments, including AI, please refer to ▶ Chap. 43, “Ethics in NDE 4.0: Perspectives and Possibilities,” in the present handbook. At the confluence of artificial intelligence developments and ethics there are two notions that must be addressed and differentiated: ethics of AI and ethical AI. Ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that
Fig. 15 Six areas of ethical and moral issues associated with the development and implementation of AI, as described by Bird et al. [3]
are related to AI. Ethical AI is AI that performs and behaves ethically. Both notions are relevant, and both provide complementary approaches as we build the structure of ethics for NDE 4.0 (see Fig. 16) on the invaluable foundations provided by generations of NDE professionals. Bodies such as the European Parliament have prioritized the potential impacts of AI and their ethical implications. The gap between technological advancement and legislative or normative development is particularly relevant in ethical subjects: “This study deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies. It also reviews the guidelines and frameworks which countries and regions around the world have created to address them. It presents a comparison between the current main frameworks and the main ethical issues, and highlights gaps around the mechanisms of fair benefit-sharing; assigning of responsibility; exploitation of workers; energy demands in the context of environmental and climate changes; and more complex and less certain implications of AI, such as those regarding human relationships.” Supplementary information and links related to the European Parliament's involvement in this subject may be found in Table 3. UNESCO aims to address a wide range of impacts: “We must make sure Artificial Intelligence is developed for us and not against us.” “We need a robust base of ethical principles to ensure artificial intelligence serves the common good.
Fig. 16 A three-layered construction approach to building sounder ethics foundations for NDE 4.0, developed by the authors
We have made this process as inclusive as possible since the stakes involved are universal.” Supplementary information and links related to UNESCO's involvement in this subject may also be found in Table 3. NDE organizations should monitor relevant developments in these subjects outside the NDE industry, particularly focusing on fields at the leading edge of ethics research, such as medical practice, health sciences, genetic research, or military development. Monitoring of security norms, legislation, and standards should include, besides all the fields described for ethics, the creation of validation mechanisms for information sources, ranging from simple logical tests to more sophisticated protocols and algorithms such as those used in banking operations; it may also include cross-human validation for critical information.
AI Policy Research
One of the signs of our age is the accelerating gap between political, economic, technological, environmental, and social developments and the advancements in legislation and regulation that national and regional government bodies are able to generate. Artificial intelligence is no exception. In recent years there has been an increasingly accelerated focus in several regions of the world on mitigating this gap through a series of initiatives, forums, and platforms.
Table 3 A compilation of relevant websites referenced along this chapter
Canada – The Pan-Canadian Artificial Intelligence Strategy: https://cifar.ca/ai/
European Commission: https://digital-strategy.ec.europa.eu/en ; https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence-artificial-intelligence ; https://digital-strategy.ec.europa.eu/en/library/coordinated-plan-artificial-intelligence-2021-review
European Parliament: https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf
G20 – G20 Digital Economy Task Force: https://www.meti.go.jp/press/2020/07/20200723001/20200723001-2.pdf
Italy – Arcidiocesi di Torino, SERVIZIO PER L'APOSTOLATO DIGITALE: https://www.apostolatodigitale.it
OECD – OECD.AI Policy Observatory: https://oecd.ai/countries-and-initiatives
UNESCO: https://en.unesco.org/news/major-progress-unescos-development-global-normative-instrument-ethics-ai
USA Federal Government – American AI Initiative: https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence
USA Department of Defense: https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF
We will describe some of the most recent initiatives, but it is evident that new developments in these subjects will arise; therefore, we also include links to organizations that keep track of this progress. In 2017, Canada became the first country in the world to launch a national AI strategy: “The Pan-Canadian Artificial Intelligence Strategy was established with four key goals, to: increase the number of AI researchers and graduates in Canada; establish centers of scientific excellence; develop global thought leadership in the economic, ethical, policy and legal implications of AI; and support a national research community in AI.” The current status of this initiative may be monitored at the link provided in Table 3. In 2019, the White House “issued an Executive Order launching the ‘American AI Initiative’ in February 2019, soon followed by the launch of a website uniting all other AI initiatives, including AI for American Innovation, AI for American Industry, AI for the American Worker and AI for American Values. The American AI Initiative has five key areas: investing in R&D, unleashing AI resources (i.e., data and computing power), setting governance standards, building the AI workforce, and international engagement. The Department of Defense has
also published in 2018 its own AI strategy with a focus on the military capabilities of AI.” Links to supplementary information may be found in Table 3. As of May 2019, several regions and nations had launched AI-oriented initiatives, including the European Union, the United Kingdom, France, Denmark, Finland, Germany, the Nordic-Baltic region, Sweden, Mexico, China, India, Japan, Singapore, Taiwan, South Korea, and the United Arab Emirates; these are only the spearhead of a global movement. The European Commission has proposed rules and actions aimed at turning Europe into the global hub for trustworthy artificial intelligence (AI). The combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States is intended to guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment, and innovation across the EU. Links to supplementary information related to this initiative may be found in Table 3. The Organization for Economic Co-operation and Development (OECD) launched the OECD.AI Policy Observatory, which compiles and combines resources from across the OECD and its partners from all stakeholder groups. Its aim is to facilitate dialogue and provide multidisciplinary, evidence-based policy analysis and data on AI's areas of impact. Links to supplementary information related to this initiative may also be found in Table 3. The OECD has also proposed a series of principles (see Fig. 17) to be considered by its member states to promote use of AI that is innovative and trustworthy and that respects human rights and democratic values. The G20, an international forum that brings together the world's major economies and whose members account for more than 80% of world GDP, 75% of global trade, and 60% of the population of the planet, integrated a G20 Digital Economy Task Force with the aim of creating globally coordinated efforts in AI research and deployment: “Artificial Intelligence (AI) systems have the potential to generate economic, social, and health benefits and innovation, drive inclusive economic growth, and reduce inequalities as well as accelerate progress toward the achievement of the Sustainable Development Goals (SDGs). They could also have potential impacts on the future of work, the functioning of critical systems, digital inclusiveness, security, trust, ethical issues, and human rights. (. . .) We reaffirm our commitment to promoting a human-centered approach to AI and support the G20 AI Principles, which are drawn from the OECD AI Principles,” which were previously described in Fig. 17. Djeffal [11] provides an interesting series of perspectives on the impact of AI on our world: “Our understanding of what AI technologies can mean for our social coexistence is in its infancy. Therefore, it is appropriate to look at these developments from different perspectives and with different assumptions. The possible outcomes and consequences of this technology can only be conceived when AI is simultaneously understood as an opportunity and a danger, when it is simultaneously developed from a technical and social point of view, and when it is viewed from the perspective of the humanities, social sciences and natural sciences. Then we will be able to construct a picture of a socially desirable and good AI. It might then be possible to create a more human and humane society through automation.”
Fig. 17 OECD AI Principles. (From OECD [36])
It is relevant for NDE 4.0 initiatives to be aware of this series of global, regional, and national AI initiatives, but also to invest in technology management systems, particularly in technology monitoring processes, to remain aware of the state of the art in a field that, despite any perceived “AI ice ages,” is at present flourishing and exponentially dynamic.
AI Implementation
Beyond any technology adoption model or related theoretical curve you may consider, what is relevant for AI implementation is to: (a) assess the organization's maturity level related to AI implementation, (b) analyze the company's operations from a technology management perspective, (c) formulate an AI implementation strategy, (d) implement the strategy devised in the prior three stages, and (e) analyze the results obtained to formulate improvement actions. Figure 18 proposes a four-stage maturity assessment model. This model positions maturity from AI Novice up to AI Advanced based on the evaluation of five support elements, or pillars, listed in Fig. 19. This connects with the actions that individuals within an organization perform on AI deployment initiatives: ignoring, defining, adopting, managing, or integrating. The model takes into account that processes and systems within an organization do not have a single maturity level but show different maturity levels across them. That
Fig. 18 A four-stage maturity assessment model for AI deployment. (From Ellefsen, Oleśków-Szłapka, Pawłowski, and Toboła [13])
is why the development of dashboards showing the different maturity levels for each of the five pillars is suggested. The description of the pillars in Fig. 19 should serve as a guide for developing an initial assessment of the AI maturity level of a specific initiative or organization. Due to the direct bond between AI and other technologies, the adoption and maturing of technology management processes are highly advisable, in a closed loop that comprises technology identification or surveying, selection, acquisition, exploitation, and protection. The technology management processes in Fig. 20 are directly connected with the five pillars that constitute the maturity assessment model. This integrated approach should provide solid scaffolding for any AI implementation initiative, making it robust, proactive, and fruitful.
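To make the dashboard idea concrete, the short sketch below maps per-pillar scores onto four maturity stages and prints one level per pillar. It is only a minimal illustration: the pillar names, the 0-100 scoring scale, and the level thresholds are assumptions for this example, not the exact constituents of the Ellefsen et al. [13] or Pringle and Zoller [42] models.

```python
# Minimal sketch of a per-pillar AI maturity dashboard.
# Pillar names, score scale, and level boundaries are illustrative
# assumptions, not the exact elements of the published models.

LEVELS = ["AI Novice", "AI Ready", "AI Proficient", "AI Advanced"]

def level_for(score: float) -> str:
    """Map a 0-100 pillar score onto one of four maturity stages."""
    boundaries = [25, 50, 75]  # hypothetical thresholds
    for i, bound in enumerate(boundaries):
        if score < bound:
            return LEVELS[i]
    return LEVELS[-1]

def dashboard(pillar_scores: dict) -> None:
    """Print one maturity level per pillar, since maturity is rarely uniform."""
    for pillar, score in pillar_scores.items():
        print(f"{pillar:<22} {score:5.1f}  ->  {level_for(score)}")

if __name__ == "__main__":
    dashboard({
        "Strategy":        40.0,
        "Data":            68.0,
        "Technology":      55.0,
        "People & skills": 22.0,
        "Governance":      31.0,
    })
```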
AI Social Impact
Without our noticing, AI has permeated everyday life, including its more evident elements such as facial recognition in public spaces; the personal assistants in our mobile phones, computers, living rooms, or kitchens; and even the algorithms that control our feeds on social networks or streaming platforms. But there is also a plethora of invisible applications in everyday objects, such as the “deep fusion” or “computational photography” algorithms that enhance the digital images we capture with our devices. AI has also permeated the collective subconscious through literature, television, and cinema. Since HAL in Kubrick's 2001: A Space Odyssey, there has been a plethora of cinematic representations of AI, and, connecting fiction with reality, AI algorithms are permeating and revolutionizing media production. Taking cinema as an example,
Fig. 19 Five-pillar model to guide AI maturity assessment, based on Pringle and Zoller [42] (maturity levels shown in the dashboard are purely illustrative)
AI is involved in everything from draft screenplay generation, to Big Data analysis of audience preferences to shape casting and technical decisions based on viewers' habits and preferences, to film score composition, film editing, and even image or performance enhancement of actors through the use of deep learning algorithms. Old black-and-white movies are now being turned into color, and old static photographs are being brought to life with a smile, a wink, or a turn of the head. There are profound debates not only on the practical everyday implications of AI technologies but also on philosophical and religious questions. Initiatives such as the servizio per l'apostolato digitale (service for digital apostolate) developed by the Archdiocese of Turin in Italy are only one example of how AI is permeating all levels of the
Fig. 20 Gregory’s Technology Management process framework. (From Nokkola [35])
human experience, and from there a series of preeminent voices have been raised in favor of or against it. Clear examples are the positions sustained in recent years by Stephen Hawking, Elon Musk, Bill Gates, and Steve Wozniak about AI and its profound foreseeable impacts on humankind, positions which are often distorted as they are dispersed through media outlets. In a post-truth era, situations such as the Cambridge Analytica scandal may hinder the general public's perception of the use of any algorithms, including AI algorithms; but, as we have expressed before, AI has already permeated our everyday experiences, often without our being aware of it. Although these themes may seem absent from NDE 4.0, the truth is that personal perceptions for or against the adoption of AI, present in general discussion in society, permeate the discussions within NDE-related committees and workgroups, and those personal or collective perceptions may become boosters or barriers to the proper assimilation of AI technologies in specific industries, geographies, companies, or initiatives.
AI Initiatives Development Structure Pertinent to NDE
NDE is no different from other industries: its AI initiatives use the same AI technologies other industries are using, since AI is defined not by where you use it but by the data you put into it. Here is a general overview of the technologies that form a core component of the NDE 4.0 technologies portfolio:
The Landscape of AI
One of artificial intelligence's qualities is that it can enable robots to learn from the data they collect from the activities they carry out in a factory and thus improve their skills with every interaction. This can be considered one of the fundamental bases of Industry 4.0, and it will help make factories more autonomous and more productive.
AI-Related Technologies
Figure 21 aims to serve as a starting point for an AI-related technologies portfolio. This portfolio comprises technologies that not only are having a relevant impact on general applications of AI but are also being explored specifically in NDE-related projects.
Natural Language Processing
Natural Language Processing (NLP) is a branch of artificial intelligence that deals with the interaction between computers and humans using natural language. AI in NLP basically means a certain level of understanding of human language, and in recent years it has infiltrated most aspects of daily life through personal assistance systems like Siri, Alexa, and Google Home. The signal processing down-selects the input to string meaning and context together and infer a proposed thought or question to be answered. NLP is also present in text applications such as email filtering and predictive text. Further discussion can be found in ▶ Chap. 16, “NDE 4.0: Image and Sound Recognition.”
Speech Recognition and Generation
Pattern recognition is a fundamental area where computational capabilities excel, and with the advancements in Natural Language Processing (NLP) it encroaches on many aspects of modern personal life. Evidence is seen in personal automated assistants like Siri by Apple, Alexa by Amazon, and Google Home. Siri's Deep Neural Network (DNN) sequences off the two connected words of the “Hey Siri” trigger command, with the hidden layers fully connected. The top layer performs temporal integration. Apple chooses the number of units in each hidden layer of the DNN to fit the computational resources available. The networks used typically have five hidden layers, all the same size: 32, 128, or 192 units, depending on the memory and power constraints. The training process adjusts the weights using standard back-propagation and stochastic gradient descent, leveraging a variety of neural network training software toolkits, including Theano, TensorFlow, and Kaldi [50]. As computational power advances and more algorithms and larger datasets become available through usage, these tools will continue to be refined at an accelerating pace. As speech is an essential part of communication and understanding, cultivating this aspect of development can permeate many aspects of AI's spread.
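As a concrete illustration of the kind of network described above, the following sketch builds a small fully connected DNN with five equal-sized hidden layers and trains it with stochastic gradient descent. The input dimension, the two output classes, and the synthetic training data are assumptions made for this example; it is not the actual on-device trigger implementation.

```python
# Minimal sketch of a small fully connected DNN for keyword spotting,
# following the description above: five equal-sized hidden layers
# trained with back-propagation and SGD. Input size, class count,
# and the random data are illustrative assumptions.
import numpy as np
import tensorflow as tf

N_FEATURES = 40 * 20   # e.g., 20 stacked frames of 40 acoustic features (assumed)
N_CLASSES = 2          # trigger phrase vs. background (assumed)
HIDDEN_UNITS = 128     # one of the layer sizes quoted in the text

layers = [tf.keras.layers.Dense(HIDDEN_UNITS, activation="sigmoid",
                                input_shape=(N_FEATURES,))]
layers += [tf.keras.layers.Dense(HIDDEN_UNITS, activation="sigmoid")
           for _ in range(4)]                       # five hidden layers in total
layers += [tf.keras.layers.Dense(N_CLASSES, activation="softmax")]
model = tf.keras.Sequential(layers)

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.05),  # stochastic gradient descent
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Synthetic stand-in data; real systems train on labeled audio features.
x = np.random.randn(1024, N_FEATURES).astype("float32")
y = np.random.randint(0, N_CLASSES, size=1024)
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
print(model.predict(x[:1]))  # per-class probabilities for one example
```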
Fig. 21 An AI technologies portfolio map, adapted by the Author. (From vincejeffs.com [61])
Image Processing (Recognition and Generation)
An additional critical building block is the function of seeing the world around us, a human faculty that appears effortless. The training of the mental network begins at a very early age and compounds as we age, filtering the known and unknown and then seeking to define the gaps. Geirhos, Janssen, Schütt, Rauber, Bethge, and Wichmann [16] reflect on the evolution of these technologies: “Until very recently, animate visual systems were the only ones capable of this remarkable computational feat. This has changed with the rise of a class of computer vision algorithms called deep neural networks (DNNs) that achieve human-level classification performance on object recognition tasks.” The escalation and acceleration of image processing development, sponsored by the automotive industry's quest for autonomous vehicles, presents benefits to inspection. The increased use of drones in NDE fosters activity around the need to filter video data. Covering more area and gaining better access make the alternative platform appealing, but with the 1:2 review requirement (one viewing of the acquisition at deployment and a second review for evaluation) the benefit is costly. Leveraging the advancements in image processing would exponentially propel deployment in inspection.
Robotics
Robotics and artificial intelligence are not the same thing at all, but when we add AI to robotics we have artificially intelligent robots, controlled by AI programs that usually allow them to perform more complex tasks; artificially intelligent robotics is the bridge between robotics and AI. The most common uses of AI robotics are assembly and packaging lines.
Deterministic Rules, Processes, and Decisions
Deterministic AI environments are those in which the outcome can be determined based on a specific state. In other words, deterministic environments ignore uncertainty. They usually rely on a deterministic algorithm, one that will always produce the same output from a given starting condition. An example is a program that plays tic-tac-toe, using an algorithm that denies the other player three in a row while trying to achieve three in a row itself.
Event Processing
An event processing architecture is based on interactions between three components: an event source, an event processor, and an event consumer. Known as “fast data,” it automates decisions and initiates actions in real time, based on statistical insights from Big Data platforms. Event processing is used for optimized pricing, fraud detection, cross-selling, rerouting transportation, customer service, proactive maintenance, and inventory restocking.
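A minimal sketch of the three-component event processing architecture just described may help: an event source feeds a queue, an event processor applies a rule, and an event consumer reacts. The event fields and the proactive-maintenance threshold are hypothetical choices made for illustration.

```python
# Minimal event-processing sketch: an event source emits readings,
# an event processor applies a rule, and an event consumer reacts.
# Event fields and the threshold rule are illustrative assumptions.
from dataclasses import dataclass
from queue import Queue

@dataclass
class Event:
    sensor_id: str
    temperature_c: float

def event_source(out: Queue) -> None:
    """Emit a batch of sensor events (stand-in for a real data feed)."""
    for reading in [("pump-1", 61.0), ("pump-2", 96.5), ("pump-1", 63.2)]:
        out.put(Event(*reading))

def event_processor(inp: Queue, out: Queue, limit: float = 90.0) -> None:
    """Forward only events that trip the (hypothetical) maintenance rule."""
    while not inp.empty():
        ev = inp.get()
        if ev.temperature_c > limit:
            out.put(ev)

def event_consumer(inp: Queue) -> None:
    """React in real time, e.g., by triggering proactive maintenance."""
    while not inp.empty():
        ev = inp.get()
        print(f"ALERT: {ev.sensor_id} at {ev.temperature_c} degC -> schedule maintenance")

raw, alerts = Queue(), Queue()
event_source(raw)
event_processor(raw, alerts)
event_consumer(alerts)
```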
Predictive Knowledge Management
Predictive analytics is an emerging field in knowledge management and business strategy that deals with the use of quantitative data to arrive at a model for predicting human behavior. It tries to predict events that have not happened yet, which seems almost mundane when one considers the vast amount of information available on the internet in this information age. An example can be deep Q&A systems that try to find bugs by testing and emulating human behavior.
Programming Languages for AI – Automatic Programming
Just as in the development of most software applications, a developer has a variety of languages to choose from when writing AI. However, there is no perfect programming language to point to as the best for artificial intelligence. The development process depends on the desired functionality of the AI application being developed. Here is a list of some of the most used programming languages for AI:
• Python: Supports algorithm testing without having to implement them.
• C++: Good for finding solutions to complex AI problems.
• Java: Very portable; it is easy to implement on different platforms because of virtual machine technology.
• LISP: Fast and efficient in coding, as it is supported by compilers instead of interpreters.
• Prolog: Has built-in list handling, essential for representing tree-based data structures.
Planning and Decision Support
There are four basic steps when you build an AI or machine learning model (a minimal sketch of step 3 follows the list):
1. Putting together a dataset
The dataset must reflect the real-life predictions the model will make. Training data can be sorted into classes; the model will pick out different features and patterns between the classes to learn how to tell them apart. For example, if you train a model to sort zebras and giraffes, the dataset will have images of the two animals, appropriately labelled. You want to prepare an all-inclusive dataset so the model's predictions won't be inaccurate or biased. The dataset should be randomized, deduplicated, comprehensive, and split into training and testing sets. You use the training set to train the model and the test set to determine the accuracy of the model and identify potential caveats and improvements. The training and test sets shouldn't have overlapping data.
2. Choosing an appropriate algorithm
Depending on the task at hand, the amount of training data, and whether the data is labeled or unlabeled, you use a specific algorithm.
Common algorithms for labeled data include:
• Regression algorithms
• Decision trees
• Instance-based algorithms
Common algorithms for unlabeled data include:
• Neural networks
3. Training the algorithm on the dataset
The model trains on the dataset over and over, adjusting weights and biases based on incorrect outputs. For example, if we have the equation for a straight line, y = mx + b, the adjustable values for training are m and b, our weight and bias. We cannot change the inputs (x) or the target outputs (y), so we have to play with m (the slope) and b (the y-intercept). During the training process, random values are first assigned to m and b, and they are then adjusted until the position of the line yields the most correct predictions. As training continues to iterate, the model's accuracy keeps increasing.
4. Testing and improving the model
You can check the model's accuracy by testing, or evaluating, it on new data that has never been used for training. This will help you understand how your ML model performs in the real world. After evaluation, you can fine-tune hyperparameters that you may have originally assumed in the training process; adjusting these hyperparameters can become somewhat of an experimental process that varies depending on the specifics of your dataset, model, and training process.
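The straight-line example from step 3 can be written out as a short training loop: starting from random values of m and b and adjusting them with gradient descent until the line fits. The synthetic data, learning rate, and number of iterations are arbitrary illustrative choices.

```python
# Minimal sketch of step 3 above: training y = m*x + b by iteratively
# adjusting the weight (m) and bias (b) from random starting values.
# Data, learning rate, and iteration count are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=100)  # noisy line, true m=2, b=1

m, b = rng.normal(), rng.normal()  # random initial values
lr = 0.01                          # learning rate

for _ in range(2000):
    y_pred = m * x + b
    error = y_pred - y
    # Gradients of the mean squared error with respect to m and b
    grad_m = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    m -= lr * grad_m
    b -= lr * grad_b

print(f"learned m={m:.3f}, b={b:.3f}")  # should approach m=2, b=1
```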
Specialized Hardware for AI
The field of artificial intelligence (AI) has witnessed tremendous growth in recent years with the advent of Deep Neural Networks (DNNs) that surpass humans in a variety of cognitive tasks. The algorithmic superiority of DNNs comes at extremely high computation and memory costs that pose significant challenges to the hardware platforms executing them. Currently, GPUs and specialized digital CMOS accelerators are the state of the art in DNN hardware. However, the ever-increasing complexity of DNNs and the data they process has led to a quest for the next quantum improvement in processing efficiency. AI chips, as the term suggests, are a new generation of microprocessors specifically designed to process artificial intelligence tasks faster while using less power. AI chips could play a critical role in economic growth going forward because they will inevitably feature in cars, which are becoming increasingly autonomous; in smart homes, where electronic devices are becoming more intelligent; in robotics, obviously; and in many other technologies. Graphics processing units are particularly good at AI-like tasks, which is why they form the basis for many of the AI chips being developed and offered today.
An Overview of Leading Machine Learning Forms
With the increased computational capabilities available through technological advancements comes an expanded scope to exploit the leading machine learning techniques. The range of techniques can be overwhelming, but in this section we present a brief overview of the most widely used: linear regression, logistic regression, k-nearest neighbors, k-means, Naïve Bayes, decision trees, random forests, dimensionality reduction, and artificial neural networks (a minimal sketch of the k-means loop follows the list).
• Linear Regression: (See Brownlee [5]) Linear regression is a linear model, i.e., a model that assumes a linear relationship between the input variables (x) and the single output variable (y). More specifically, y can be calculated from a linear combination of the input variables (x). When there is a single input variable (x), the method is referred to as simple linear regression. When there are multiple input variables, the statistics literature often refers to the method as multiple linear regression.
• Logistic Regression: (See ml-cheatsheet.readthedocs.io [31]) Logistic regression is a classification algorithm used to assign observations to a discrete set of classes. Unlike linear regression, which outputs continuous number values, logistic regression transforms its output using the logistic sigmoid function to return a probability value which can then be mapped to two or more discrete classes.
• K-nearest neighbors (kNN): (See Zhang [67]) The kNN classifier classifies unlabeled observations by assigning them to the class of the most similar labeled examples. Characteristics of observations are collected for both the training and test datasets.
• K-means: (See Piech [40]) K-means is one of the most popular “clustering” algorithms. K-means stores centroids that are used to define clusters. A point is considered to be in a particular cluster if it is closer to that cluster's centroid than to any other centroid. K-means finds the best centroids by alternating between (1) assigning data points to clusters based on the current centroids and (2) choosing centroids (points which are the center of a cluster) based on the current assignment of data points to clusters.
• Naïve Bayes: (See Monkey Learn Blog [32]) Naive Bayes is a family of probabilistic algorithms that take advantage of probability theory and Bayes' Theorem to predict the tag of a text (like a piece of news or a customer review). They are probabilistic, which means that they calculate the probability of each tag for a given text and then output the tag with the highest one. The way they get these probabilities is by using Bayes' Theorem, which describes the probability of a feature based on prior knowledge of conditions that might be related to that feature.
• Decision Trees: (See Yadav [65]) A decision tree is a flowchart-like structure in which each internal node represents a test on a feature (e.g., whether a coin flip comes up heads or tails), each leaf node represents a class label (the decision taken
after computing all the features), and branches represent conjunctions of features that lead to those class labels. The paths from root to leaf represent classification rules.
• Random Forest: (See Sharma [47]) A random forest combines the output of individual decision trees to generate the final output. In simple words: the Random Forest algorithm combines the output of multiple (randomly created) decision trees to generate the final output.
• Dimensionality Reduction: (See Pramoditha [41]) Dimensionality reduction simply refers to the process of reducing the number of attributes in a dataset while keeping as much of the variation in the original dataset as possible. It is a data preprocessing step, meaning that we perform dimensionality reduction before training the model.
• Artificial Neural Networks: (See Hassoun [19]) Artificial neural networks are systems motivated by the distributed, massively parallel computation in the brain that enables it to be so successful at complex control and recognition/classification tasks. The biological neural network that accomplishes this can be mathematically modeled by a weighted, directed graph of highly interconnected nodes (neurons). The artificial nodes are almost always simple transcendental functions whose arguments are the weighted summation of the inputs to the node; early work on neural networks, and some current work, uses node functions taking on only binary values.
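The K-means sketch promised above shows the two alternating steps from the list in a few lines of NumPy; the two-blob dataset and the choice of k are illustrative assumptions.

```python
# Minimal NumPy sketch of the K-means loop described in the list above:
# alternate between (1) assigning points to the nearest centroid and
# (2) recomputing each centroid as the mean of its assigned points.
# The synthetic data and the value of k are illustrative assumptions.
import numpy as np

def kmeans(points: np.ndarray, k: int, iters: int = 20, seed: int = 0):
    rng = np.random.default_rng(seed)
    # Start from k randomly chosen points as initial centroids
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # (1) assign each point to its closest centroid
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # (2) move each centroid to the mean of its cluster
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# Two illustrative blobs of 2-D points
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
centers, labels = kmeans(data, k=2)
print(centers)  # should land near (0, 0) and (5, 5)
```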
Real-World Application Categories
The infiltration of AI has tentacles in almost every aspect of our lives. It can be seen in medical diagnostics, especially in cancer detection in radiographs of the lung, breast, and prostate, an application that closely relates to the field of nondestructive testing. Consumer-led autonomous vehicles have been progressing for decades, with their potential now in sight. These developments support the boost in robotic utilization in the energy sector for confined-space entry inspections and facility surveying with drones, not only in the deployed platform and its mission planning and mapping capabilities but also in the back-end analysis. Home/personal assistants like Google Home, Siri by Apple, and Alexa by Amazon can do everything from turning off lights to setting reminders and ordering groceries. Video streaming uses AI to prioritize and maximize suggestions and bandwidth management. Personal-level deployment not only adds value to our lives but also helps break down barriers to adoption as the advancements and capabilities continue to develop.
Permeating NDE Skills to Digital Assistants
The advancements experienced through the digital transformation require a global elevation of, and convergence on, definitions, deployment, confidence metrics, and validation protocols. As these activities begin to standardize, industry must converge technology and workforce to leverage the potential of augmented assistance.
A potential roadmap for industry to follow can be drawn from the early adoption in the medical arena, where proactive, early engagement with the collaborative abilities leveraged through the technology is being adapted into a reformed medical education effort. Paranjape [39] reflects on the impact and evolution of technology adoption and training needs: “As the practice of medicine enters the age of artificial intelligence (AI), the use of data to improve clinical decision making will grow, pushing the need for skillful medicine-machine interaction. As the rate of medical knowledge grows, technologies such as AI are needed to enable health care professionals to effectively use this knowledge to practice medicine. Medical professionals need to be adequately trained in this new technology, its advantages to improve cost, quality, and access to health care, and its shortfalls such as transparency and liability. AI needs to be seamlessly integrated across different aspects of the curriculum.” A comparable model would be prudent for general industry, especially in the critical field of nondestructive testing. Organizations such as ASME are devoted to establishing good engineering practices for the construction of critical components. Covered within are the linkages to codes referenced by the construction codes, such as Section V of the Boiler and Pressure Vessel Code, which define “how to” perform NDE. Augmenting these processes to accommodate the rapid developments in technology can be one aspect of support for the successful, reliable, and validated deployment of these systems. Globally, investigation into recommended practices is underway, structured so as not to retard advancement while keeping a keen eye on the critical nature of the inspection. A recent publication in the examination arena is the European Network for Inspection and Qualification (ENIQ) “Qualification of an Artificial Intelligence/Machine Learning Non-Destructive Testing System, ENIQ Publication no. 64” [12]. Increased global collaboration must be convergent, consistent, and diligent to produce standardized adoption pathways. Although at present there is neither an ISO workgroup/committee nor specific ISO standards devoted to AI/ML/DL specifically for NDE, some of the standards created by the ISO/IEC JTC 1/SC 42 committee on AI [22] may be useful for NDE initiatives.
Knowledge, Skills, and Competencies in the NDE 4.0 Era
Bertovic and Virkkunen [2] describe a series of emerging diversified roles for NDE inspectors, such as materials expert, structural integrity expert, automation expert, system developer, UX designer, caretaker, problem solver, and even client, which will interact with NDE systems through different interfaces using smart automation technologies. Those human-machine interfaces, whether pure software elements or a merger of specialized hardware and software, shall include several of the AI technologies described throughout this chapter in order to truly capitalize on the richness of knowledge, skills, and competencies required for those new roles.
Fig. 22 Evolving the skills and competencies of NDE inspectors certified under SNT-TC-1A requirements to adapt to Industry 4.0 work environments. (From Fernandez [14])
As shown in Fig. 22, and explored in more detail in section “Permeating NDE Skills to Digital Assistants” of ▶ Chap. 44, “Training and Workforce Re-orientation,” of this handbook, these multi-dimensional role diversification paths may be enriched by innovative endorsement-based professional specialization programs built on two axes: (1) a focused horizontal reinforcement of hard, digital, soft, and management competencies, and (2) a vertical integration of multi-directional development paths through a series of focused endorsements oriented to diversify the roles that an NDE professional chooses and performs.
NDE 4.0 Body of Knowledge
This handbook as a whole is part of the increasingly wide Body of Knowledge that constitutes the foundation of NDE 4.0. National and international NDE organizations such as the German Society for Non-Destructive Testing (DGZfP), the American Society for Nondestructive Testing (ASNT), and the International Committee for Non-Destructive Testing (ICNDT) have created subcommittees and special interest groups actively involved in establishing NDE 4.0 collaboration, communication, and diffusion platforms and strategies. More information related to one concrete repository of the NDE 4.0 Body of Knowledge, in the form of an NDE 4.0 Wiki, may be found in sections “Turing Test” and “Collective Learning and Communities' Development” in ▶ Chap. 44, “Training and Workforce Re-orientation,” of this handbook.
Soft, Hard, and Digital Skills
Reviewing the academic literature, one finds innumerable debates about how skills and competencies, as a theoretical framework, may be structured and organized. The proposed organizational structure that follows is the product of hands-on experience:
Hard Skills: These provide the foundation for the deployment of solid NDE 4.0 projects and initiatives. They may be categorized as follows: (1) NDE-specific skills, (2) science skills, particularly physics and chemistry, (3) technology skills, (4) engineering skills, and (5) mathematics skills.
Soft Skills: These allow improved performance, both on an individual basis and as a workgroup, contributing to the success of a project or initiative. They may include:
Internal mindsets: These may include, but are not limited to, self-awareness, character traits, and attitudes.
External relations: These may include, but are not limited to, social awareness, team effectiveness, interpersonal people skills, social skills, communication skills, career attributes, and emotional intelligence skills.
Management Skills: These constitute an expanded set of soft skills intended to facilitate improved performance in a workgroup environment. They include, but are not limited to, skills related to the following managerial roles: planning, communicating, decision-making, delegating, problem-solving, motivating, and negotiating. Specific non-managerial roles may also benefit from the development of specific management skills according to the purpose and scope of those roles.
Digital Skills: As a foundation of the ample digitalization associated with NDE 4.0 initiatives and projects, an ample promotion and expansion of digital skills is required to support the deployment of a digital transition strategy. Those digital skills may be categorized as:
Basic: Skills related to everyday interaction with commonly available digital systems. These may include, but are not limited to: using devices and handling information, programming, creating and editing digital content, digital communication, digital transacting, online security, and responsible online behavior.
Exponential: The use of technologies related to the five fields of technology exploration over the coming decades presented in the Exponential Convergence diagram of Fig. 23.
After contrasting several dozen books and academic articles on the subject of skills and competencies for Industry 4.0 while preparing this analysis and categorization of skill sets, some insights arose:
1. There is no consensus on differentiating skills and competencies, and compilations tend to mix the two.
2. There seems to be no consensus about the specific constituents that comprise a given skill set; therefore, the frontiers that delimit each skill set, such as soft skills or digital skills, are not well defined.
3. Likewise, there is no consensus about how the constituents within a specific skill set should be categorized.
4. Skill sets are shown as separate sets with no interconnections between them.
What practical experience in training hundreds of NDE professionals has shown is that there are profound interconnections between the four general skill sets proposed in the present chapter. These interconnections generate a network-like
Fig. 23 Exponential Convergence. (Adapted from Ray, Forgey, and Mathias [43])
structure that synergizes its constituents, creating a diversified set of rich competencies. For any individual, those four skill sets should constitute a personalized portfolio tailor-made for their needs, motivations, aspirations, personal history, professional trajectory, and intended future path. The map of skills for Industry 4.0 / NDE 4.0 offered in Fig. 24 is intended to serve two purposes: (1) as a visual element that conveys to the reader the richness in the diversity of skills and the sense of interconnectedness between them, and (2) as a basic checklist and analysis guide for individuals and organizations structuring the specific skill-set portfolios required for specific industries, geographies, initiatives, or projects. We encourage readers to create their own mental maps to visually describe those skill sets and their interconnectedness. The experience will undoubtedly provide precious insights that will contribute to understanding how current competencies may be potentiated and how new competencies may be defined.
Fig. 24 An interconnected map of skills for Industry 4.0/NDE 4.0, based on Fernandez [14] and Maisiri, Darwish, and van Dyk [28]
New Competencies
The acquisition of new competencies will become a challenge for both humans and AI assistants, but it is a task that collaborative work between them may potentiate and accelerate.
Many new tasks linked to human-machine interaction will require not only doing new and different things (such as training an AI assistant to recognize images of PT indications on a casting surface) but also doing things differently (using that AI assistant to provide supporting information on image interpretation to an inspector at a remote location). At the time of publication of this chapter, these new competencies for NDE 4.0 remain a field in constant development and innovation as new roles emerge for NDE professionals.
Stable, Redundant/Evolved, and New Roles
Looking toward the future, we may sort professional roles into three fundamental categories: (1) stable roles, on which technological, economic, or social trends have very limited effect; (2) redundant roles, at the other extreme of the spectrum, which technological, economic, or social impacts will render obsolete; and (3) new roles, which are constantly created and shaped by those technological, economic, or social trends. In the diffusion of NDT knowledge and competencies, we have been witnessing the reinforcement of two complementary trends:
Increasing specialization versus wider democratization: Paradoxically, although in many of our countries there is an ever-increasing unsatisfied demand for highly specialized NDE technicians, in parallel we are also witnessing the irruption and democratization of NDE technologies, such as infrared imaging or leak detection devices, directed at users with limited or no previous NDT knowledge. This democratization trend will strengthen in the future, capitalizing on the constantly increasing number of sensors incorporated in personal mobile devices.
Permeable diffusion of competencies between NDT inspectors and non-inspection collaborators: As Industry 4.0 initiatives take form, two complementary trends reinforce each other: (1) an increased demand for developing fundamental NDT competencies, such as radiographic image interpretation, in non-inspection roles such as 3D modelers or customer service support staff, and (2) the design of digital and soft competency roadmaps for NDT inspectors trained and certified under SNT-TC-1A, together with diagnostic instruments to detect training opportunities.
Those two tendencies will promote the evolution of, and may even disrupt, training and certification processes on the way to NDE 4.0. There are supplementary research areas in NDE 4.0 training and certification that still need to be expanded or even initially explored:
• We visualize evolved forms of interaction between instructors and students.
• Adjustments in the role of the NDT Level III as trainer and certification support for companies.
• A multidisciplinary approach to the development of NDE 4.0 training programs that balance hard, soft, and digital skills.
• An expanded use of telepresence technology.
• Use of AI digital assistants to support instructors in evaluating specific elements, such as the prerequisite educational level in mathematics or physics.
• Online diffusion and training platforms, for both NDT and non-NDT competencies, as a supplementary resource for in-person training.
• The use of multiple teachers and platforms within a single training program.
We also visualize evolved personnel certification processes that may include:
• An increased use of digital competency certification as a supplementary process for NDE 4.0 certifications.
• The development of technology-specific or industry-specific certification standards.
• The rise of certification processes for NDE 4.0 instructors.
• The development of supplementary certification standards for AI digital assistants to validate their effectiveness in aiding inspectors to detect and evaluate NDE indications.
• A symbiotic interrelationship between the advancement of global NDE 4.0 standardization and the structure and scope of NDE 4.0 personnel certification standards.
Conclusions – Competition Versus Complementation, a Balanced Pathway Toward the Formation of NDE Professionals in the Future
As we have seen, AI assistants are not intended to replace human intervention except in redundant and highly computational roles. Stable and evolved roles will be profoundly enriched by the support that AI assistance processes will be able to provide, and those processes themselves will create exciting new roles and development opportunities for humans. In parallel to the permeation of NDE competencies into non-NDE areas of human activity, those NDE competencies will also permeate diverse technological developments, including AI, contributing to the purpose of NDE 4.0. In ▶ Chap. 9, “Reliability Evaluation of Testing Systems and Their Connection to NDE 4.0,” we expand and connect the notion of competencies with training and workforce reorientation processes, but we would like to emphasize here the notion of life-long, life-wide, life-deep learning as the path ahead for the maturing of competencies of NDE professionals in the future. AI will not replace the inspector. But an “inspector with AI” will replace the “inspector without AI.”
Summary
Both artificial intelligence and competency improvement are fields with profound implications for the destiny of humankind in general and for the evolution of NDE 4.0 in particular.
This chapter aims simultaneously to spark the interest of the newcomer and to enrich the resources of those already working in artificial intelligence, starting from basic notions and some historical context while also providing useful resources and references for deeper study, because in a field so dynamic (its occasional AI winters notwithstanding) and of such exponential potential, we are all apprentices. All categories of organizations will witness the greatest performance gains and value creation when humans and AI-enabled assistants collaborate. Humans are required to train AI, explain its outputs, and ensure its responsible use. AI, in turn, can enhance human cognitive skills and creativity, freeing humans from low-level tasks and extending their physical capabilities. This synergy should motivate organizations to reimagine their operations, business models, and strategies while assimilating AI's technological impact and contributing to the landscape in which NDE roles evolve and new competencies thrive. That same collaboration and synergy should also motivate NDE professionals to involve themselves actively in the creation of innovative solutions that will shape not only the face, but the heart and soul, of NDE.
Relevant Websites Referenced in the Chapter
Table 3 compiles, in alphabetical order, a series of websites relevant to the content of this chapter that are referenced throughout the text.
Cross-References
▶ Ethics in NDE 4.0: Perspectives and Possibilities
▶ NDE 4.0: New Paradigm for the NDE Inspection Personnel
▶ Training and Workforce Re-orientation
Acknowledgments A sincere and thankful acknowledgment to Dr. Nathan Ida, Dr. Ripi Singh, and Dr. Johannes Vrana for their invaluable suggestions and contributions toward improving the content of this chapter.
References
1. Anyoha R. The history of artificial intelligence. Science in the News 28, 2017.
2. Bertovic M, Virkkunen I. NDT 4.0: new paradigm for the NDT inspection personnel. NDE 4.0 handbook. United States: Springer; 2021.
3. Bird E, Fox-Skelly J, Jenner N, Larbey R, Weitkamp E, Winfield A. The ethics of artificial intelligence: issues and initiatives. Belgium: European Parliamentary Research Service; 2020.
4. Bolisani E, Bratianu C. The elusive definition of knowledge. In: Bolisani E, Bratianu C, editors. Emergent knowledge strategies: strategic thinking in knowledge management. Cham: Springer International Publishing; 2018. p. 1–22. https://doi.org/10.1007/978-3-319-60656_1.
5. Brownlee J. Linear regression for machine learning. Machine Learning Algorithms. 2016. Obtained from: https://machinelearningmastery.com/linear-regression-for-machine-learning/
6. Chouhan VS, Srivastava S. Understanding competencies and competency modeling – a literature survey. IOSR J Bus Manag. 2014;16(1):14–22.
7. Corea F. AI knowledge map: how to classify AI technologies. In: An introduction to data. Cham: Springer; 2019. p. 25–9.
8. Defense Advanced Research Projects Agency. DARPA history and timeline. Obtained from: https://www.darpa.mil/about-us/darpa-history-and-timeline
9. Delua J. SME, IBM Analytics, data science/machine learning, “Supervised vs unsupervised learning: what's the difference?”. 2015. Obtained from: https://www.ibm.com/cloud/blog/supervised-vs-unsupervised-learning
10. Dicke U, Roth G. Neuronal factors determining high intelligence. Philos Trans R Soc B Biol Sci. 2016;371(1685):20150180.
11. Djeffal C. Artificial intelligence and public governance: normative guidelines for artificial intelligence in government and public administration. In: Regulating artificial intelligence. Cham: Springer; 2020. p. 277–93.
12. ENIQ. Qualification of an artificial intelligence/machine learning non-destructive testing system. ENIQ publication no. 64. Obtained from: https://snetp.eu/wp-content/uploads/2020/07/ENIQ_Position_Paper_qualification_AI_ML_NDT_system_v13.pdf
13. Ellefsen APT, Oleśków-Szłapka J, Pawłowski G, Toboła A. Striving for excellence in AI implementation: AI maturity model framework and preliminary research results. LogForum. 2019;15(3).
14. Fernandez R. Business and personal toolboxes in action – practical examples of redefining business models and individual skills for NDE 4.0. ASNT 2021 Research Forum presentation.
15. Fidler D, Williams S. Future skills. Update and literature review. Hamilton: Institute for the Future; 2016.
16. Geirhos R, Janssen DH, Schütt HH, Rauber J, Bethge M, Wichmann FA. Comparing deep neural networks against humans: object recognition when the signal gets weaker. arXiv preprint arXiv:1706.06969; 2017.
17. Grewal DS. A critical conceptual analysis of definitions of artificial intelligence as applicable to computer engineering. IOSR J Comp Eng. 2014;16(2):9–13.
18. Hamilton A. How can we create predictive knowledge? Obtained from: https://customerthink.com/how_can_we_create_predictive_knowledge/
19. Hassoun MH. Fundamentals of artificial neural networks. United States: MIT Press; 1995.
20. Hofstadter DR. Gödel, Escher, Bach: an eternal golden braid. Hassocks: Harvester Press; 1979.
21. IBM Research Team. AI hardware. Obtained from: https://www.research.ibm.com/artificial-intelligence/hardware/
22. ISO. ISO/IEC JTC 1/SC 42 Artificial intelligence subcommittee website at www.iso.org. Obtained from: https://www.iso.org/committee/6794475.html
23. Kanzler D. Reliability 4.0 – POD evaluation: the key performance indicator for NDT 4.0: the essential role of reliability evaluations in modern testing environments. Presentation at First International Virtual Conference on NDE 4.0, Germany; 2021.
24. Karvonen H, Heikkilä E, Wahlström M. Artificial intelligence awareness in work environments. In: IFIP working conference on human work interaction design. Cham: Springer; 2018. p. 175–85.
25. Lee D. Birth of intelligence: from RNA to artificial intelligence. New York: Oxford University Press; 2020.
26. Legg S, Hutter M. A collection of definitions of intelligence. Front Artificial Intell Appl. 2007;157:17.
27. Lewis T. A brief history of artificial intelligence. LiveScience; 2014.
28. Maisiri W, Darwish H, van Dyk L. An investigation of Industry 4.0 skills requirements. South Afr J Indust Eng. 2019;30(3):90–105.
29. McCarthy J, Minsky ML, Rochester N, Shannon CE. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine. 2006;27(4):12.
30. Mishkoff HC. Understanding artificial intelligence. Dallas, TX: Texas Instruments Inc.; 1986.
31. ml-cheatsheet.readthedocs.io. Logistic regression. Obtained from: https://ml-cheatsheet.readthedocs.io/en/latest/logistic_regression.html
32. Monkey Learn Blog. A practical explanation of a Naive Bayes classifier. Obtained from: https://monkeylearn.com/blog/practical-explanation-naive-bayes-classifier/
33. Montaquim A. Top 25 AI chip companies: a macro step change inferred from the micro scale. Obtained from: https://roboticsandautomationnews.com/2019/05/24/top-25-ai-chip-companies-a-macro-step-change-on-the-micro-scale/22704/
34. National Research Council. Funding a revolution: government support for computing research. Washington, DC: National Academies Press; 1999.
35. Nokkala V. Developing technology management in Digia. Obtained from: https://www.theseus.fi/handle/10024/343013
36. OECD. OECD AI Principles overview. Obtained from: https://oecd.ai/ai-principles
37. O'Leary DE. Predictive knowledge management using mirror worlds. Intelligent Decision Technol. 2010;4(1):39–50.
38. Owen-Hill A. What's the difference between robotics and artificial intelligence? Obtained from: https://blog.robotiq.com/whats-the-difference-between-robotics-and-artificial-intelligence
39. Paranjape K, Schinkel M, Panday RN, Car J, Nanayakkara P. Introducing artificial intelligence training in medical education. JMIR Med Educ. 2019;5(2):e16048.
40. Piech C. K-Means. Based on a handout by Andrew Ng. Stanford. Obtained from: https://stanford.edu/~cpiech/cs221/handouts/kmeans.html
41. Pramoditha R. 11 dimensionality reduction techniques you should know in 2021. Towards Data Science. Obtained from: https://towardsdatascience.com/11-dimensionality-reduction-techniques-you-should-know-in-2021-dcb9500d388b
42. Pringle T, Zoller E. How to achieve AI maturity and why it matters. Ovum; 2018. Available at: https://www.amdocs.com (accessed 22/02/2019).
43. Ray BD, Forgey JF, Mathias BN. Harnessing artificial intelligence and autonomous systems across the seven joint functions. US Army, Washington, United States; 2020.
44. Rodriguez J. 6 types of artificial intelligence environments. Obtained from: https://jrodthoughts.medium.com/6-types-of-artificial-intelligence-environments-825e3c47d998
45. Roziere B. Deep learning to translate between programming languages. Obtained from: https://ai.facebook.com/blog/deep-learning-to-translate-between-programming-languages/
46. Sarin S, Pipatsrisawat K, Pham K, Batra A, Valente L. Crowdsource by Google: a platform for collecting inclusive and representative machine learning data. 2019.
47. Sharma A. Decision tree vs. random forest – which algorithm should you use? 2020. Obtained from: https://www.analyticsvidhya.com/blog/2020/05/decision-tree-vs-random-forest-algorithm
48. Science Museum. Lovelace, Turing and the invention of computers. Obtained from: https://www.sciencemuseum.org.uk/objects-and-stories/lovelace-turing-and-invention-computers
49. Siau K, Wang W. Artificial intelligence (AI) ethics: ethics of AI and ethical AI. J Database Manag (JDM). 2020;31(2):74–87.
50. Siri Team. Hey Siri: an on-device DNN-powered voice trigger for Apple's personal assistant. Obtained from: https://machinelearning.apple.com/research/hey-siri
51. Sommerville I. Event processing systems. Obtained from: https://iansommerville.com/software-engineering-book/static/web/apparch/event-processing-systems/
52. Source Now. Predictive intelligence for knowledge management. Obtained from: https://docs.servicenow.com/bundle/quebec-servicenow-platform/page/product/knowledge-management/concept/predictive-intelligence-for-km.html
53. Spataro J. 2 years of digital transformation in 2 months. Obtained from: https://www.microsoft.com/en-us/microsoft-365/blog/2020/04/30/2-years-digital-transformation-2-months/
54. Srivastav V. Entrepreneur, how is artificial intelligence revolutionizing small businesses? Available at: https://www.entrepreneur.com/article/341976
55. Team S. Hey Siri: an on-device DNN-powered voice trigger for Apple's personal assistant. Apple Mach Learn J. 2017;1(6). Obtained from: https://machinelearning.apple.com/research/hey-siri
20
Artificial Intelligence and NDE Competencies
551
56. Trampus P, Krstelj V, Nardoni G. NDE integrity engineering – a new discipline. Proc Struct Integrity. 2019;17:262–7. 57. Valeske B, Lugin S, Schwender T. The SmartInspect system: NDE 4.0 modules for humanmachine-interaction and for assistance in manual inspection, first international virtual conference on NDE 4.0. 2021. 58. Vrana J, Singh R. NDE 4.0 – a design thinking perspective. J Nondestruct Eval. 2021;40:8. https://doi.org/10.1007/s10921-020-00735-9. 59. Vrana J, Singh R. Cyber-physical loops as drivers of value creation in NDE 4.0. J Nondestruct Eval. 2021;40 https://doi.org/10.1007/s10921-021-00793-7. 60. Vrana J. The core of the fourth revolutions: industrial internet of things, digital twin, and cyberphysical loops. J Nondestruct Eval. 2021;40:46. https://doi.org/10.1007/s10921-021-00777-7. 61. vincejeffs.com. An artificial intelligence (automated Intelligence) Mindmap. Obtained from: https://i2.wp.com/vincejeffs.com/wp-content/uploads/2017/03/AI_Automated_ Intelligence.png 62. Wahner K. How to apply machine learning to event processing. Obtained from: https://www. rtinsights.com/big-data-machine-learning-software-event-processing-analytics/ 63. Web.cortland.edu. The 9 intelligences of MI theory. 2020. Obtained from: https://web.cortland. edu/andersmd/learning/MI%20Table.htm 64. Wilson HJ, Daugherty PR. Collaborative intelligence: humans and AI are joining forces. Harv Bus Rev. 2018;96(4):114–23. 65. Yadav P Decision tree in machine learning. Available online: https://towardsdatascience.com/ decision-tree-in-machine-learning-e380942a4c96 66. Zarya D. Programming language for AI: your ultimate guide. Obtained from: https://www.zfort. com/blog/best-programming-language-for-ai 67. Zhang Z. Introduction to machine learning: k-nearest neighbors. Ann Translat Med. 2016;4(11): 1–7.
Smart Monitoring and SHM
Bianca Weihnacht and Kilian Tschöke
21
Contents

Introduction
Components of a SHM System
  Method Selection and System Design
  Sensor Selection
  Electronics and Energy Supply
Signal Processing and Data Evaluation
  Necessary Investigations Prior to the Measurements
  Data Preprocessing and Acquisition
  Data Management
  Data Analysis
  Prediction of Residual Life
Regulations
System Integration and Reliability of the Components
Application Example: Sensor Ring for Offshore Welded Seam Testing
Summary
References
Abstract
Structural Health Monitoring (SHM) is becoming an increasingly important means of verifying the integrity of structures during operation. AI and machine learning in particular open up a vast variety of new possibilities for dealing with large amounts of noisy data from operating structures. Smart Monitoring can be assigned to the area of Industry 4.0 or the Internet of Things (IoT). Its aim is to avoid failures through the early detection of damage or operational disturbances using various sensor data and an automated chain from measurement and processing to defect detection or even predictive maintenance.
B. Weihnacht (*) · K. Tschöke Fraunhofer Institute for Ceramic Technology Systems (IKTS), Dresden, Germany e-mail: [email protected]; [email protected] © Springer Nature Switzerland AG 2022 N. Meyendorf et al. (eds.), Handbook of Nondestructive Evaluation 4.0, https://doi.org/10.1007/978-3-030-73206-6_10
The authors would like to give a short insight into the latest developments and the new possibilities opened up by the significant increase in computing power and the ongoing miniaturization of electronic components. All components of a smart SHM system are listed and evaluated with regard to their advantages and challenges. This covers method selection and system design, the demands on electronics and energy supply, signal processing and data evaluation, regulations, and system integration and reliability issues. An example from an offshore wind application is given at the end of the chapter to illustrate the complexity behind the development of such a monitoring system.

Keywords
Monitoring · Ultrasonics · Data processing · Predictive maintenance · Smart monitoring
Introduction

Smart Monitoring can be assigned to the area of Industry 4.0 or the Internet of Things (IoT). The aim of monitoring is to avoid failures or operational disturbances by detecting them early using various sensor data. To this end, a broad variety of sensor data are combined to identify deviations from normal or optimal operation. Furthermore, data fusion has become a growing issue: sensor data are combined and evaluated together, and failure patterns can be identified with novel techniques such as machine learning (ML), revealing faults that cannot be detected by conventional techniques or expert knowledge alone. If failures are detected quickly with these new technologies, shutdown time can be reduced and a first step towards predictive maintenance is achieved (see Fig. 1); unwanted downtime can be avoided. If this is integrated into the entire operational process through further automation, maintenance can be organized in such a way that ordering processes and the checking of the availability of spare parts become far more time-efficient, reducing the maintenance effort significantly. Furthermore, advanced NDE techniques and Smart Monitoring based on big data together with prognostics have the potential to enable asset life management in a completely different way. The residual lifetime in particular is a target issue for most applications, enabling maintenance when needed rather than at predetermined intervals. This makes it possible to use assets longer and to lower operating costs. In contrast to Condition Monitoring or Structural Health Monitoring (SHM), Smart Monitoring in the classic meaning mainly uses sensor data provided during
Fig. 1 From real-time monitoring to Smart Maintenance
the processes in industrial applications anyway (e.g., temperature or power supply data). The idea of Smart Predictance may be transferred to SHM applications but demands additional instrumentation effort.

In contrast to procedures from the field of machine monitoring (Condition Monitoring, CMS), which typically deal with the monitoring of rotating machine components, SHM is used primarily to monitor objects with load-bearing properties (structures). This chapter therefore addresses owners and operators of structures; designers, certifiers, and inspectors of structures; service providers for metrological solutions; as well as manufacturers of SHM technology and authorities (keyword: certification).

An SHM system is defined as a system consisting of a monitoring object with measuring sensors, signal adaptation units, and data memories, as well as the data processing system and the automated diagnostic system. The scope considered here covers methods and systems for the continuous or periodic and automated determination and monitoring of the condition of a monitoring object within Structural Health Monitoring. This is done by measurements with permanently installed or integrated transducers and by the primarily automated analysis of the measured data. The determination of the condition of the monitored object can be carried out to different degrees: from the current recording of the stress (e.g., as a result of an acting load or environmental influences) and the detection of damage, through the determination of the type of damage, up to the assessment of its effects (integrity of the monitored object, stability, and load-bearing capacity).

In contrast to Condition Monitoring, the objects to be monitored are primarily structures with load-bearing properties and/or frequently statically supported structures such as rotor blades of wind energy plants; load-bearing or hull surfaces of ships and aircraft; pipelines; containers; vehicles (automobiles, rail vehicles) and rails; high-voltage lines; foundation structures of offshore structures; bridges; and buildings. It is irrelevant whether the structural monitoring is to be carried out locally (hot spot) or globally (area monitoring).

Generally, there are a number of applications in which permanently installed, spatially distributed sensors have advantages over manual, periodically recurring testing. Reasons for this are, for example, the high effort of manual inspection due to poor accessibility of the inspection area or reduced coupling possibilities for the transducers, and the lower significance of periodic tests at intervals of several months or years compared to a permanent recording of the structural condition.

Even though the advantages of permanent structural monitoring are obvious, there is a lack of available system solutions. The slow transfer of innovative permanent monitoring methods from the academic environment to industrial solutions and the lack of normative regulations pose great challenges for potential system providers and end users alike and hinder the introduction of suitable systems on a broad scale. This is definitely a growing field that demands long-term reliable solutions for SHM applications. In contrast, areas of application where guidelines exist are quickly opened up by system providers. Examples are the gear monitoring of wind turbines, the monitoring of dams, the monitoring of buildings in seismically active regions, or
monitoring in geotechnics in general. Despite the existing application obstacles, studies also expect the market for nondestructive testing methods to develop towards continuous monitoring, cloud-based data processing and analysis, and intelligent expert systems for predictive maintenance. This development is also being driven by a lack of qualified personnel and a growing number of test objects as a result of global infrastructure development.
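The detection of deviations from normal operation through fused sensor data, as described above, can be illustrated in a few lines of code. The following is a minimal sketch, not taken from the chapter: it uses an unsupervised anomaly detector from scikit-learn, and the library choice, feature selection, and all parameter values are assumptions for illustration only.

# Minimal sketch: fuse several sensor channels and flag deviations from
# normal operation with an unsupervised anomaly detector.
# Library choice (scikit-learn) and all parameters are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Fused feature matrix: one row per time window, one column per feature
# (e.g., mean temperature, RMS vibration, supply current).
normal = rng.normal(loc=[60.0, 0.5, 10.0], scale=[2.0, 0.05, 0.3],
                    size=(500, 3))          # training data: healthy operation
faulty = rng.normal(loc=[75.0, 0.9, 12.5], scale=[2.0, 0.05, 0.3],
                    size=(5, 3))            # simulated abnormal windows

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# -1 marks windows that deviate from the learned normal operation.
labels = model.predict(np.vstack([normal[:3], faulty]))
print(labels)   # e.g., [ 1  1  1 -1 -1 -1 -1 -1 ]

In a deployed system, the flagged windows would be passed on to the maintenance organization rather than printed, closing the loop towards predictive maintenance.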
Components of a SHM System

In principle, the methods and application of monitoring systems for condition monitoring are similar to the monitoring procedures used in seismic applications. Likewise, system design and signal processing methods are strongly based on the experience gained in geophysics and geotechnics. Widely used monitoring methods are based on elastic waves, such as vibration analysis or guided waves (acoustic emission and acousto-ultrasonics); the methods of condition monitoring can accordingly be either active or passive. In the following, the components of SHM systems based on guided waves are briefly described (see Weihnacht et al. [11]).

The core element of the monitoring system is the transducer, which converts quantities of the structure (vibration, displacements, etc.) into electrical signals (passive measurements). It is also possible to introduce waves, light, etc. into the structure and observe any changes of the introduced signal (active measurements). An example from the aircraft industry, showing all necessary parts of an SHM system, is given in Fig. 2. Furthermore, the system has to take into account the specifications regarding operational safety, electromagnetic compatibility, and explosion protection. The tailor-made solution for a plant or monitoring task is often the greatest challenge in the development of an SHM system, both with active and with passive methods.
Fig. 2 Essential components of the monitoring system – an example from aircraft applications (from Giurgiutiu [6])
Method Selection and System Design

A comprehensive overview of all existing procedures for structural monitoring cannot be presented here; the diversity is too large to be covered in this chapter. In principle, the objective of the monitoring project should be clearly defined when selecting the method. Common objectives of monitoring projects are, e.g.:

• The monitoring of stability
• The monitoring of the operational capability
• Life cycle analysis and prognosis
• The support of condition-based maintenance
• The complement and replacement of inspections
• The supply of decision bases for risk-based asset management
• Design verification
Structural monitoring procedures can be roughly divided into global and local procedures, although the classification is not always clear due to the broad variety of communities that deal with SHM.

Global procedures typically use a few sensors (not necessarily close to the potential damage locations) to monitor the entire structure. They focus on nonspecific monitoring without knowledge of possible damage mechanisms or locations; the primary goal is only the detection of damage. Identification and localization of damage are rather difficult or even impossible due to the minimal number of sensors.

Local procedures, on the other hand, take a different approach. These techniques use transducers located close to the potential site of damage and are therefore better adapted to the task of monitoring hot spots. This is essential when knowledge about the damage mechanism is to be gained and information about the location of the damage is necessary. This approach allows for the detection, identification, and localization of damage but requires, in general, a larger number of transducers and is therefore likely to be less cost-efficient than global methods.

The realization of a system solution requires exact knowledge of the object to be monitored, the environmental requirements to which the test object is exposed, the interaction of the proposed method with the defects, as well as the expected or required accuracy of the defect detection. The target values for defect resolution are specified by the end users in industry, usually on the basis of the results obtained by manual inspection. Most standards and guidelines in the field of nondestructive testing are based on the consideration that the maximum allowable defect size to be detected by periodic, manual inspection must not lead to catastrophic failure of the component or system due to damage growth during the inspection interval. With a monitoring system, however, the component or the structural condition can be checked within the inspection interval as often as required without any further effort. Failures can therefore be detected more reliably by continuously operating systems due to the far higher number of measurement values.
Therefore, it must be ensured that critical defect sizes can be reliably detected under all environmental conditions with appropriate safety reserves. Based on this consideration, model defects are usually agreed upon for the design of the system, which must be reliably detected by the measuring methods. The verification is then carried out by experimental feasibility studies or by simulating the interaction of different wave modes with the model defect in order to derive system parameters such as energy input or frequency. From these findings, the required sensors, their distances from one another, and the associated requirements for data acquisition and signal generation are derived.

As an example for ultrasonic measurements, Fig. 3 shows the comparison of simulated data with lab data from 3D laser vibrometry. Laser vibrometry is a noncontact measurement of surface vibrations. Laser beams are sent from a probe onto the surface to be examined. Using the Doppler effect, the vibration of the surface can be determined from the scattered beams. If three probes are used simultaneously, it is even possible to calculate the vibrations in all three spatial dimensions. Laser vibrometry is thus a powerful tool for the investigation of ultrasonic propagation in various materials.

For modeling, both universal methods such as Finite Element Methods based on commercial software packages and specific methods such as the Elastodynamic Finite Integration Technique (EFIT) in specially developed software are used. EFIT is a Finite Volume Method that was first developed for problems in electromagnetics and later transferred to elastodynamics, that is, to ultrasonics [3, 4, 12]. Later, it was specially adapted for modeling problems in the SHM area and is characterized by very low computation times with equally good accuracy compared to Finite Element Methods [8, 10].

Figure 3 shows the result of a modeling run on the left side. The excitation of an ultrasonic signal took place at the coordinate origin (lower corner). Clearly visible in the wave field are the different guided wave modes: the S0 mode, the A0 mode, and the shear wave mode SH0. An analogous representation is provided by the evaluation of laboratory data on the right side. In the background, the piezoelectric transducer used for the excitation of the guided waves can be seen. In the
Fig. 3 An example for SHM system design: Comparison of modeling (left) and lab data from laser vibrometry (right)
measuring field of the laser vibrometer, the propagation of the waves is visualized. Here, too, the three different wave modes S0, A0, and SH0 can be seen. If modeling and exemplary laboratory tests match, measurement systems can also be investigated and designed based on models, which can save enormous resources. Finally, based on the knowledge gained from simulation experiments and laboratory tests, a customized measuring system can be developed. The development of suitable transducers, the elaboration and implementation of a concept for energy supply and energy management, and the development of the data acquisition system and system integration are described in detail below.
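To give a flavor of the wave-field modeling described above, the following is a greatly simplified sketch: a 2-D scalar finite-difference time-domain (FDTD) simulation of a propagating wave front. It is not EFIT and does not resolve the S0/A0/SH0 guided-wave modes discussed in the text; grid size, wave speed, and source parameters are arbitrary illustrative assumptions.

# Greatly simplified 2-D scalar wave FDTD sketch (illustrative only;
# real SHM design work would use elastodynamic codes such as EFIT or FEM).
import numpy as np

nx, ny = 200, 200          # grid points (assumed)
dx = 1e-3                  # grid spacing: 1 mm (assumed)
c = 3000.0                 # wave speed in m/s (assumed, shear-wave order)
dt = 0.4 * dx / c          # time step satisfying the CFL stability condition

u_prev = np.zeros((nx, ny))   # field at time step n-1
u = np.zeros((nx, ny))        # field at time step n

f0 = 200e3                    # source center frequency: 200 kHz (assumed)
t = 0.0
for step in range(400):
    # Five-point Laplacian of the current field (interior points only)
    lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
           - 4.0 * u[1:-1, 1:-1]) / dx**2
    # Leapfrog update of the scalar wave equation
    u_next = np.zeros_like(u)
    u_next[1:-1, 1:-1] = 2*u[1:-1, 1:-1] - u_prev[1:-1, 1:-1] \
                         + (c*dt)**2 * lap
    # Point source near the grid origin corner (cf. excitation in Fig. 3):
    # a Gaussian-modulated tone burst
    u_next[2, 2] += np.exp(-((t - 2e-5)**2) / (2 * (5e-6)**2)) \
                    * np.sin(2*np.pi*f0*t)
    u_prev, u = u, u_next
    t += dt

print("peak amplitude after 400 steps:", np.abs(u).max())

Once such a model has been validated against laboratory data, as in Fig. 3, parameter studies (sensor spacing, excitation frequency) can be run numerically instead of experimentally.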
Sensor Selection

A broad variety of sensors is available on the market. They may be classified by their physical principle (e.g., piezoelectric, electrodynamic, or optical) or by the measurement regime (e.g., periodic, stochastic, continuous), depending on the monitoring task. An overview is given in Table 1.
Electronics and Energy Supply

The circuit layout and the realization of the system are carried out considering the selected transducer elements and the requirements of the measurement methods to be used, the design of the energy management, and the circuit development. When designing the energy management system, the later application scenario of the monitored object must be considered, in that the measuring parameters, for example frequency and amplitude, are taken into account when selecting the energy source. Figure 4 shows the main possibilities for the energy supply of transducer components and evaluates them with regard to their application and the amount of energy provided.

In principle, electrical cabling is suitable for all applications where there are no special requirements for mobility or electromagnetic compatibility. In the past, special solutions have been implemented for applications where electrical cabling cannot be used, for example for the instrumentation of a wind turbine rotor blade and the associated problem of lightning protection. In this case, fiber-optic cables can be used, and the power supply via optical fibers can be realized by using lasers or light-emitting diodes as the source and photodiodes as the receiver [5]. If the system is to be mobile, rechargeable batteries can be used as the energy source, which can be recharged after a certain period of time. If sufficient energy is available in the environment, for example through thermal gradients (thermogenerator), oscillation energy (vibration generator), kinetic energy (dynamo), or in the form of radiation (solar cells), completely self-sufficient, radio-based sensor systems can be realized in which the batteries are recharged by energy harvesting using the above methods. In addition, special solutions such as generators or fuel cells can be developed to operate systems in remote areas.
Table 1 Sensors for NDE and SHM as given by Su et al. [9]

Sensor | Applications and features | Modality of attachment
Ultrasonic transducers | Detecting general structure | Surface attaching or air/fluid coupled
Acoustic emission (AE) sensor | Detecting general structural damage or measuring distance and structural thickness; exact and efficient | Surface attaching or embedding
Magnetic sensor | Detecting cracks or measuring large deformations with magnetic leakage; magnetic field required | Surface attaching
Eddy-current transducer | Detecting damage in metal and measuring electromagnetic impedance, but not applicable for polymer composites; complicated operation and expensive equipment, high energy consumption | Surface attaching
Accelerometer | Detecting acceleration and measuring structural dynamic responses; good for high-frequency response | Surface attaching
Strain gauge | Detecting relatively large deformations; good for low-frequency responses, relatively large damage, low cost | Surface attaching
Shape memory alloy | Detecting deformation and active control; active sensor, good for low-frequency responses, relatively large driving force | Surface attaching or embedding
Laser interferometer | Measuring derivation, displacement, and dynamic responses; contactless measurement with high precision, expensive equipment | Contactless
Fiber-optic sensor | Detecting deformation, damage location, and temperature change; high precision but expensive equipment | Surface attaching or embedding
Electromagnetic acoustic transducer | Detecting general structural damage; normally for metallic materials | Surface attaching
Piezoelectric lead zirconate titanate (PZT) element | Detecting general damage; active sensor, good for high-frequency responses, low driving force/cost/energy consumption | Surface attaching or embedding
PZT paint/polyvinylidene fluoride (PVDF) piezoelectric films | Detecting general structural damage and measuring vibration or temperature change; suitable for non-flat shapes and low cost | Surface attaching or embedding
In general, measuring signals of a few millivolts need to be amplified by the analog circuit components of the measuring system to match the conditioned voltage input of the connected analog-to-digital converter (ADC). Two types of amplifier circuitry are commonly used for signal conditioning of sensor data: (1) amplification of the measurement signal via a voltage amplifier (electrometer amplifier) and (2) amplification via a charge amplifier. Depending on the sensor type, adapted electrical circuits are used. Charge amplifiers or differential electrometer amplifiers with upstream measuring bridges are usually used for current and voltage measurements.
Fig. 4 Possible ways of supplying power to the monitoring system
The conditioned measuring signal is then digitized by means of the ADC in order to describe and evaluate the structural condition using data processing algorithms on the basis of the measuring signals. Depending on the area of application and the data volume, the data processing is carried out in a microcontroller or an FPGA (field-programmable gate array). Microcontrollers are suitable for low data volumes and for mobile or wireless applications where energy efficiency is important. Data acquisition by means of an integrated ADC is already implemented in many current microcontrollers (up to 5 MS/s, max. 16 bit). FPGAs are commonly used in high-end data acquisition at high sampling rates (>5 MS/s) and high resolution (>14 bit).
Signal Processing and Data Evaluation

In order to obtain good data evaluation results, it is essential to find a strategy for dealing with the mass data obtained by the SHM system efficiently and reliably. A workflow is given in Fig. 5. The physical signal that reaches the sensor is pre-processed and then evaluated to recognize patterns in the signals. A general approach to pattern recognition is summarized in Fig. 6 and the following text. An interruption of this data flow carries the risk that the goals of the SHM system will not be achieved. For a smart monitoring system, all of these steps are carried out automatically. As pointed out before, the choice of the right classifier is a key issue and has to be carefully evaluated.

As an example, Fig. 6 shows the detailed steps necessary to reach damage identification for an SHM system using Lamb waves. The signals are recorded and pre-processed. Noise is removed by the digital wavelet transform (DWT) and features are extracted by the continuous wavelet transform (CWT). The signals are compressed afterward and, if applicable, the information is displayed by mapping. As in Fig. 5, patterns are recognized and the damage is finally identified. Selected aspects of this workflow are discussed below, naming the work steps to which particular attention must be paid.
Fig. 5 Workflow for an SHM system [6]
Fig. 6 Data processing example for a SHM system [9]
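As a rough illustration of the denoising and feature-extraction steps in Fig. 6, the following sketch uses the PyWavelets package. The wavelet names, the threshold rule, and the synthetic signal parameters are illustrative assumptions, not prescriptions from the chapter.

# Sketch of the Fig. 6 pre-processing chain: DWT denoising followed by
# CWT feature extraction. Wavelets and parameters are illustrative.
import numpy as np
import pywt

fs = 1e6                                   # sampling rate: 1 MHz (assumed)
t = np.arange(2048) / fs
burst = np.sin(2*np.pi*100e3*t) * np.exp(-((t - 1e-3)**2) / (2*(5e-5)**2))
signal = burst + 0.2*np.random.default_rng(1).normal(size=t.size)

# 1) Denoising with the discrete wavelet transform (soft thresholding)
coeffs = pywt.wavedec(signal, 'db4', level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
thr = sigma * np.sqrt(2*np.log(signal.size))          # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft')
                        for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, 'db4')[:signal.size]

# 2) Feature extraction with the continuous wavelet transform
scales = np.arange(2, 64)
cwt_matrix, freqs = pywt.cwt(denoised, scales, 'morl', sampling_period=1/fs)

# Simple features: arrival time and dominant frequency of the wave packet
envelope = np.abs(cwt_matrix).max(axis=0)
print("arrival time  [s]:", t[envelope.argmax()])
print("dominant freq [Hz]:", freqs[np.abs(cwt_matrix).max(axis=1).argmax()])

Features of this kind (arrival times, dominant frequencies, packet energies) are what the subsequent compression, mapping, and pattern recognition stages in Fig. 6 would operate on.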
Necessary Investigations Prior to the Measurements

Even before starting measurements, one should be clear about the signal processing algorithms to be used later. If well-trained processing algorithms for mass data are required, for example AI, to ensure a high informative value of the result, the system has to be trained in advance with real data sets and must have well-organized data elimination algorithms for classifying irrelevant data. This is the key issue for techniques like AI, neural networks, and machine learning, but also for rather "traditional" techniques like acoustic emission.
Data Preprocessing and Acquisition

Data acquisition units normally collect preprocessed data. This preprocessing is necessary to eliminate irrelevant data, either because of the noise level or because the source is not related to the damage (e.g., rain drops in acoustic emission testing, welded seam reflections in acousto-ultrasonics, changes in the eigenfrequencies due to icing of rotor blades). This usually requires additional operational parameters of the monitored structure, which have to be integrated into the preprocessing scheme. For SHM systems using automated algorithms this is a key issue.
After the preprocessing, the data needs to be recorded by an acquisition system. This can be done either by wired or by wireless solutions. Especially for wireless data transfer (e.g., 3G/4G (IoT), WirelessHART, Wi-Fi), the data needs to be preprocessed for effective transfer. An alternative is a cable-bound solution (e.g., RS485, Ethernet, optical fiber). Manual collection of data (portable data storage) is no longer state of the art and should be avoided due to the high effort and the related costs.
Data Management

Local (e.g., at the operator's side) or cloud-based data storage and management are widely used. Especially for the growing market of cloud-based solutions, controlled (role-based) access to the data is important and needs a sound concept approved by the operator. The question of data ownership also needs to be clarified, and information on data security and data protection (including common standards and international/national laws) needs to be taken into account. Cloud solutions also allow the collection of data sets from several SHM systems worldwide for data fusion, which is certainly a vast advantage, especially for international operators.
Data Analysis

Even after irrelevant data has been eliminated in the pre-processing step, certain corrections still have to be applied to the data before the damage analysis can begin. The operational and environmental conditions can cause changes in the system response that make damage detection more difficult (e.g., fluctuating traffic loads on bridges, the speed of rotating systems, changes in the speed of sound due to temperature changes). The operating data are therefore essential and need to be integrated into the analysis flow. As a next step, feature extraction (filtering, data fusion, data preparation, visualization), the search for indicators, mapping of the data, comparison of the data to references, environmental and operational conditions (EOC) cleanup, and quality assessment play an important role. Furthermore, calculations of the Probability of Detection (POD) or Receiver Operating Characteristics (ROC), details given in separate method standards (possibly statistical), and models of system behavior are often also integrated. As a result, damage is identified and detected within the range of the method (local vs. global measurements).

Often the applied methods are baseline methods based on a comparison of the currently measured data with previous data or simulation data. A distinction is made between purely statistical evaluation methods and methods based on physical parameters. Examples of the former are principal component analysis and the use of autoregressive models, wavelet coefficients, or correlation coefficients. Evaluation algorithms with physical parameters are based on the interaction of the wave with the damage, whereby the original signal is changed, for example, in transit time, amplitude, or
frequency content. Correlation algorithms against baseline data are also applied here. Furthermore, a time-frequency analysis is possible, which includes the dispersion properties of the material and mode conversions at interfaces in the evaluation. Suitable for this purpose are, for example, short-time Fourier transforms and wavelet transforms. To become independent of the baseline, a comparison between measurement and simulation results is also possible (model-based methods). Thus, damage can be modeled (e.g., with FEM or analytical models) and the progress of the damage can be observed. The disadvantage here is that the model must be able to reproduce the structure as accurately as possible, which is often not the case.
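A minimal sketch of such a baseline comparison is given below: a correlation-coefficient damage index between a stored baseline signal and a current measurement. The index definition (1 minus the Pearson correlation coefficient) and the threshold are common illustrative choices, not values from the chapter.

# Sketch of a baseline-comparison evaluation: correlation-based damage index.
# Index definition and threshold are illustrative choices.
import numpy as np

def damage_index(baseline: np.ndarray, current: np.ndarray) -> float:
    """1 - Pearson correlation: 0 for identical signals, ~1 for uncorrelated."""
    rho = np.corrcoef(baseline, current)[0, 1]
    return 1.0 - rho

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1e-3, 1000)
baseline = np.sin(2*np.pi*50e3*t) * np.exp(-5e3*t)

# A damaged structure might shift the arrival time and add a scattered echo.
current = 0.9*np.roll(baseline, 12) + 0.15*np.roll(baseline, 140) \
          + 0.02*rng.normal(size=t.size)

di = damage_index(baseline, current)
print(f"damage index = {di:.3f}")       # 0 = healthy reference
if di > 0.1:                            # threshold from calibration (assumed)
    print("indication: deviation from baseline exceeds threshold")

In practice, the threshold would be derived from a POD or ROC analysis under the relevant environmental and operational conditions rather than set by hand.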
Prediction of Residual Life

The operator is usually interested not only in the detection of existing damage but also in information about the residual lifetime for condition-based maintenance. For this approach, SHM is a key enabler. Continuous monitoring, the permanent storage of data, and the automated analysis of large data sets enable the change from manual inspections to holistic management. The processing of large data sets can support actions to mitigate the susceptibility to material degradation, as well as effective inspection, monitoring, and timely repair of material degradation. Current concepts on prognostics are discussed in Bond and Meyendorf [1]. The cited publication also describes research on the extent to which diagnostics and prognostics based on SHM can reduce the operating costs of a technical facility; it shows the clear advantage of the new technologies that we call NDE 4.0 and Smart Monitoring. However, the data needs to be prepared for condition assessment and forecasting (essential information, compatible formats). These approaches usually rely on data evaluation, supported by simulation, using techniques like statistical pattern recognition and machine learning, and enable predictive maintenance. Examples of such predictions are the assessment of the residual load capacity, life cycle analysis, and trend analysis.
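One elementary prognosis approach consistent with the trend analysis mentioned above is to fit the evolution of a damage indicator over time and extrapolate to a failure threshold. The sketch below does this with a linear fit; the indicator history, the threshold, and the assumption of linear degradation are purely illustrative.

# Elementary trend-based residual-life estimate: fit the damage-index
# history and extrapolate to a failure threshold. All values illustrative.
import numpy as np

days = np.arange(0, 200, 10, dtype=float)       # inspection times [days]
# Synthetic, slowly growing damage index with measurement scatter
di = 0.02 + 0.0012*days + 0.01*np.random.default_rng(3).normal(size=days.size)

slope, intercept = np.polyfit(days, di, deg=1)   # linear degradation model
threshold = 0.30                                 # failure criterion (assumed)

if slope > 0:
    t_fail = (threshold - intercept) / slope
    rul = t_fail - days[-1]
    print(f"estimated remaining useful life: {rul:.0f} days")
else:
    print("no degradation trend detected")

Real prognostic systems replace the linear fit with physics-based degradation models or machine-learning regressors and quantify the uncertainty of the estimate.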
Regulations

Regulations cover general technical requirements for SHM systems in terms of operational capability in the intended application and required lifetime. This also includes the fulfillment of any local or national legal regulations or standards (e.g., Ex-proof devices). Nevertheless, these regulations often refer to specific requirements of the users in their environment. Generally applicable standards for SHM hardly exist. While the physical principles of monitoring systems are well described, the lack of options for describing the reliability of the damage detection of these systems prevents their widespread use. This is widely known in the literature and was also the conclusion of a recent survey of over 700 industry representatives [2, 7].
System Integration and Reliability of the Components

Another key issue is the integration of the system. There is a broad variety of possible technologies, which depend on the application field. The following questions need to be answered to choose the right technology:

1. What are the expected climate conditions (temperature, humidity, etc.)?
2. What is the surrounding medium (water, chemicals, air, etc.)?
3. Is the application to be placed in an ex-proof area?
4. What are the demands on the electronics (dust, vibrations, etc.)?
The packaging of the system depends on the answers to these questions and needs to be chosen accordingly. Furthermore, reliability plays an important role when it comes to long-term monitoring. If soldering joints are not designed to withstand tension and bending over a long period of time, if potting is not resistant to the surrounding materials, if transducers are loosened by vibration, or if mini-PCs do not meet the necessary standards for a dusty environment, the entire system will fail during the planned period of operation. Components, structural assemblies, electronics, and microsystems as well as technical installations should function reliably at all times. Therefore, depending on the application, the materials may need to be tested prior to field installation. Numerous material characterization methods and much process knowledge are available to meet these demands, such as optical methods, structural mechanics simulations, and destructive test methods for the improvement of the component design, quality testing, and reliability evaluation.
Application Example: Sensor Ring for Offshore Welded Seam Testing

SHM is of great importance for objects that are difficult to access, where classic NDT methods can be used only to a limited extent. Offshore structures such as wind turbines are an example. The foundation structures are exposed to heavy loads due to the harsh environmental conditions, and access is very limited, especially in the winter months. SHM systems can make an important contribution here to ensuring the integrity of the structure.

The rise of renewable energy technologies also means that the number of offshore wind turbines is growing worldwide. Offshore locations usually exhibit much higher wind speeds than inland locations. On the one hand, this makes for much higher yields. On the other hand, offshore wind farms have to withstand much higher loads than their counterparts on land. At the same time, maintenance operations are made much more difficult by the rough weather conditions on the high seas. This significantly increases operating and maintenance costs. The more difficult conditions on the high seas also severely limit the options for damage detection with conventional means of testing. The
Fig. 7 Application example: Sensor ring with piezoceramic transducers to monitor offshore welded seams under water; Left: sensor-near electronics and sensors, Right: demonstrator with two rings in the wind park
enormous forces caused by the wind turbine's own weight and by water currents and waves, in connection with the dynamic loads from the operation of the converter, act on the foundation structure and may lead to damage, such as cracks in weld seams.

This application is a good example of the parts of a smart monitoring system explained above. Since access is limited, the system has to work autonomously over a longer period of time. Furthermore, the data readout and the power supply are realized wirelessly by Remotely Operated Vehicles (ROVs); in the long term, these should be replaced by Autonomous Underwater Vehicles (AUVs). Since the sensor rings remain at the foundation for several years and will therefore have fouling attached to them, specific laminable shear wave transducers for guided waves were developed to meet the challenges of this task. Last but not least, all regulations have to be met in order to be able to substitute current techniques like Visual Testing (VT) and Alternating Current Field Measurement (ACFM).

The sensors are distributed as a ring around the loaded spots and adapted to the specific requirements. A number of barrier layers protect the sensors and other electronic parts by permanently preventing seawater penetration. Fresnel volume migration provides the data analysis in visual form. Environmental data, such as temperature or air humidity, are additionally taken into account for the analysis. This correction is necessary because external factors also affect the measuring signals. The sensors and pre-amplifiers are shown in Fig. 7 on the left side; a test with a demonstrator in the offshore wind park Baltic 1 is shown on the right side of the same figure. As an example, the result for a 4.5 cm weld seam edge defect with a depth of 7 mm (overall wall thickness 20 mm) is shown in Fig. 8. The colors can indicate the severity of the defect and could be connected to a traffic-light system that automatically contacts the operator if the damage might endanger safe operation.
Fig. 8 Example result of data processing from a sensor ring measurement
Summary

This chapter gives an overview of the current state of SHM for smart applications. There are still challenges to be met, especially concerning the components necessary to make SHM systems self-operating. The goal, in any case, is for the operation, the data evaluation, and the predictive maintenance to be carried out without any manpower after installation, up to the point where the operator of the monitored object is informed about the results. For some applications this already works well; for others there are still deficits to be worked on. It can be stated, however, that this is definitely a growing market, given the increasing IT possibilities for data acquisition and processing.
References

1. Bond LJ, Meyendorf NG. NDE and SHM in the age of Industry 4.0. In: 12th international workshop on structural health monitoring, 2019.
2. Cawley P. Structural health monitoring: closing the gap between research and industrial deployment. Struct Health Monit. 2018;17:1225–44.
3. Clemens M, Weiland T. Discrete electromagnetism with the finite integration technique. Prog Electromagn Res. 2001;32:65–87.
4. Fellinger P, Marklein R, Langenberg K-J, Klaholz S. Numerical modeling of elastic wave propagation and scattering with EFIT – elastodynamic finite integration technique. Wave Motion. 1995;21:47–66.
5. Frankenstein B, Fischer D, Weihnacht B, Rieske R. Lightning safe rotor blade monitoring using an optical power supply for ultrasonic techniques. In: 6th European workshop on structural health monitoring, 2012.
6. Giurgiutiu V. Structural health monitoring with piezoelectric wafer active sensors. Academic; 2008.
7. Mueller I, Moll J, Tschöke K, Prager J, Kexel C, Schubert L, Lugovtsova Y, Bach M, Vogt T. SHM using guided waves – recent activities and advances in Germany. In: 12th international workshop on structural health monitoring, 2019.
8. Schubert F. Numerical time-domain modeling of linear and nonlinear ultrasonic wave propagation using finite integration technique – theory and applications. Ultrasonics. 2004;42:221–9.
9. Su Z, Ye L, Pfeiffer F, Wriggers P, editors. Identification of damage using lamb waves – from fundamentals to applications. Berlin/Heidelberg: Springer; 2009. p. 48.
10. Tschöke K, Gravenkamp H. On the numerical convergence and performance of different spatial discretization techniques for transient elastodynamic wave propagation problems. Wave Motion. 2018;82:62–85.
11. Weihnacht B, Lieske U, Gaul T, Tschöke K, Ida N, Meyendorf N, editors. Handbook of advanced non-destructive evaluation structural health monitoring. Springer Nature Switzerland AG; 2018. p. 1–19.
12. Weiland T. Time domain electromagnetic field computation with finite difference methods. Int J Numer Modell Electron Networks Devices Fields. 1996;9:295–319.
Sensors, Sensor Network, and SHM
22
M. Faisal Haider, Amrita Kumar, Irene Li, and Fu-Kuo Chang
Contents

Introduction
  SHM as in Situ NDE
  Scheduled and Automated SHM
Introduction to Sensors for SHM
Introduction to Sensor Networks
Signal Processing and Diagnostics
  Damage Index Method
  Diagnostic Imaging Method
  Estimating Impact Location
  Estimating Impact Force
Application Examples
  Rotorcraft
  Damage Verification Testing
  Flight Testing
  Pipeline
  Paper-Manufacturing Equipment
  High Speed Train
IIoT Solutions
Summary
References
Abstract
Structural health monitoring (SHM) is an emerging technology that provides high-resolution, real-time damage-state sensing awareness and self-diagnostic capabilities enabled by a distributed sensor network. This technology is being used in the Industrial Internet of Things (IIoT) environment (a) to extend the duration of the service life of structural platforms; (b) to increase their reliability; and (c) to reduce their maintenance and operational cost. This chapter discusses SHM systems, their design, sensor types, and usage through distributed sensing for reliable monitoring in IIoT applications. A distributed network of structurally integrated sensors for SHM diagnostics enables efficient real-time monitoring of entire structures. A carrier layer (SMART Layer) is designed in such a way as to eliminate the need for each sensor to be installed individually in a structure. The layer consists of a network of distributed piezoelectric sensors that are individually bonded to it and is manufactured utilizing a standard flex-circuit construction technique in order to connect a large number of sensors. Advanced signal processing, diagnosis methods, and system hardware are implemented for damage monitoring in different complex structures such as rotorcraft, pipelines, paper-manufacturing equipment, and high speed trains.

M. Faisal Haider · F.-K. Chang (*) Aeronautics and Astronautics Department, Stanford University, Stanford, CA, USA e-mail: [email protected]; [email protected] A. Kumar · I. Li Acellent Technologies Inc., Sunnyvale, CA, USA e-mail: [email protected]; [email protected] © Springer Nature Switzerland AG 2022 N. Meyendorf et al. (eds.), Handbook of Nondestructive Evaluation 4.0, https://doi.org/10.1007/978-3-030-73206-6_58

Keywords
Sensors · Sensor network · SHM · IIoT · Remaining useful life · Reliability · SMART layer · PZT
Introduction

SHM as in Situ NDE

Structural health monitoring (SHM) is a new technology in the Industrial IoT environment and is increasingly being used by industry as a method to improve the safety and reliability of structures and thereby reduce their operational cost. Significant benefits are expected in all fields of application, i.e., during laboratory testing and in usage, to reduce maintenance actions in service and to improve the efficiency of the design. The core of the technology is the use of nondestructive evaluation (NDE) principles in the development of self-sufficient systems for the continuous monitoring, inspection, and damage detection of structures with minimal labor involvement. The aim of the technology is not only to detect structural failure, but also to provide an early indication of physical damage. The early warning provided by an SHM system can then be used to define remedial strategies before the structural damage leads to failure.

SHM systems utilize distributed, permanently installed sensors at certain structural regions and apply diagnostic algorithms to extract meaningful health information from the sensing data. A comparison of SHM with traditional NDE techniques reveals differences in data input and interpretation between the two methods (Fig. 1). NDE relies more on the equipment and the resolution of direct measurements from the structure, while SHM depends more on the sensitivity of the sensors and the diagnostic software. In addition, SHM has the ability to monitor structures in real time, while NDE requires scheduled maintenance. Additionally, in traditional NDE procedures, factors due to operators pose the dominant
Fig. 1 NDE versus SHM
uncertainty, whereas SHM-based technology is mainly challenged by in situ effects, such as changing environments (temperature, loads, humidity, and wind), operating conditions (ambient loading conditions, operational speed, and mass loading), variation in coupling, aging, measurement noise, etc., as well as the sensing network layout itself. At a minimum, an SHM system will be able to detect the occurrence of an event exceeding a prescribed threshold. The outputs depend upon the sensor types, the number of sensors, and their positions on the structure, as well as the software/algorithms used in the SHM system.
Scheduled and Automated SHM

The objectives of SHM methods are the following: (a) to extend the duration of the service life; (b) to increase the reliability; and (c) to reduce the maintenance cost. Damage detection is one of the primary concerns for a structure in service; early damage detection prevents catastrophic failure and provides structural safety. The traditional method for detecting damage in a structure is expensive with respect to both time and money. The SHM system can be used in two possible ways to reduce this cost:

1. Scheduled SHM
2. Automated SHM
Scheduled SHM means that each inspection is independent in time and is associated with an off-board SHM system. In an off-board SHM system, sensors are designed to be integrated onboard with one or more critical structures (such as an aircraft wing) where inspection intervals are long or access is limited. An off-board data acquisition system is used for data acquisition. Data can be collected periodically by trained personnel and analyzed to inform maintenance personnel whether damage is present, as well as to provide information on its size and location.

Automated SHM means that the system operates in a continuous time domain and is associated with an onboard SHM system. Sensors are designed to be integrated onboard with one or more critical aircraft structures (such as the wing) where inspection intervals are long or access is limited. An onboard data acquisition system is used for data acquisition. A timer can be set to automatically collect data at specified time intervals. Data can be transmitted to a central location and analyzed to inform maintenance personnel whether damage is present, as well as to provide information on its size and location. Data can also be passed to any onboard integrated vehicle health management (IVHM) system, as sketched below.
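The timer-driven acquisition pattern of automated SHM can be sketched in a few lines. Everything below (the function names, the 6-hour interval, and the endpoint URL) is a hypothetical illustration of the pattern, not part of any real SHM product or API.

# Sketch of the automated-SHM pattern: acquire at fixed intervals and
# forward data to a central location. All names and values are hypothetical.
import time

ACQUISITION_INTERVAL_S = 6 * 3600      # e.g., every 6 hours (assumed)

def acquire_sensor_data() -> bytes:
    """Placeholder for reading the onboard data acquisition hardware."""
    return b"raw ultrasonic records"

def transmit(data: bytes, endpoint: str) -> None:
    """Placeholder for transmission to a central diagnosis server."""
    print(f"sending {len(data)} bytes to {endpoint}")

# A real onboard system would loop indefinitely; three cycles shown here.
for _ in range(3):
    transmit(acquire_sensor_data(), "https://shm.example.org/upload")
    time.sleep(1)   # stand-in for time.sleep(ACQUISITION_INTERVAL_S)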
Introduction to Sensors for SHM

Structural health monitoring (SHM) is a monitoring method to estimate the state of the structural condition by measuring physical features [1]. Different sensors can be used in SHM applications depending on the kinematic and mechanical properties to be measured and the damage types of the host structures. With the recent advancements in sensor technology (wired and wireless sensors), SHM has been widely applied in various engineering sectors such as aerospace and civil infrastructure to continuously monitor structural condition through real-time data collection. To design an effective SHM system, the first step is to determine the most appropriate type of sensor that can effectively measure the physical features of the structures qualitatively and quantitatively. Moreno-Gomez et al. and the associated references provide a comprehensive literature survey of the different types of sensors that can be used for NDE or SHM applications [2]. The following sections present a summary of the different types of sensors that are widely used in SHM applications.

1. Accelerometer sensors: An accelerometer is an electromechanical device that measures the static or dynamic accelerations of the host structure due to induced vibrations. In general, four types of accelerometers are used in SHM: capacitive, piezoelectric, force balance, and microelectromechanical (MEMS) devices.

2. Velocity sensors: Velocity sensors measure the vibration of the structure in terms of velocity. Two different kinds of velocity sensors are widely used in SHM: Doppler-effect devices such as the laser Doppler vibrometer (LDV) and electromechanical devices. Piezoelectric velocity sensors are also widely used; they are based on an accelerometer with an onboard integration circuit that produces a dynamic output in terms of velocity instead of acceleration.

3. Displacement sensors: A displacement sensor or displacement gauge is used to measure the distance of an object with respect to a reference position. Resistive transducers, linear variable differential transformers (LVDT), or global positioning satellites (GPS) can be used to measure displacement in SHM applications.

4. Strain sensors: Conventional strain sensors are based on strain gage technology. To measure strain values effectively in SHM applications, the strain gage converts external parameters, such as force, pressure, and weight, into a change in electrical resistance, which is easy to measure. Piezoelectric transducers and vibrating wire strain gauges are also used to measure the strains of structures.

5. Temperature sensors: Temperature changes may influence the measurements of sensors in SHM applications; therefore, measuring the temperature is essential. Temperature measurement also often describes the material state. Several sensors are used, including thermocouples, RTDs (resistance temperature detectors), thermistors, semiconductor-based integrated circuits (ICs), and IR thermometers.

6. Pressure sensors: A pressure sensor is a transducer that converts an input mechanical pressure into an electrical output signal. The sensor may be resistive, capacitive, piezoelectric, optical, or MEMS-based.

7. Eddy current sensors: Eddy current testing is an electromagnetic technique that generates electric currents in a conducting material through induction due to a moving or varying magnetic field. When eddy currents interact with flaws in a conducting structure, they produce an output signal that provides information about flaws or material conditions. Eddy current testing is suitable for detecting surface cracks and subsurface cracks, for coating thickness measurements, etc.

8. Optical fiber sensors: Fiber-optic sensing works on the principle of a change in the wavelength of the backscattered light in an optical fiber when the fiber undergoes a dimensional change due to external effects such as vibration, strain, or temperature change. Optical fibers can be used as sensors to measure strain, temperature, pressure, and other quantities for SHM and NDE applications.

9. Piezoelectric sensors: Common piezoelectric sensors are made with piezoelectric ceramics such as aluminum nitride (AlN), barium titanate (BaTiO3), lithium niobate (LiNbO3), gallium phosphate (GaPO4), and lead zirconate titanate (PZT), using the piezoelectric effect to measure changes in physical quantities (pressure, acceleration, temperature, strain, force, etc.) by converting them to an electrical signal [3]. Usually, piezoelectric sensors are made of piezoelectric material with electrodes deposited on the upper and lower surfaces as grounding
and supplying electrodes to polarize the electric field through the thickness of the sensor. Piezoelectric sensors made with lead zirconate titanate (PZT) have emerged as one of the major SHM technologies for a variety of damage detection methods such as propagating ultrasonic guided waves, standing waves (E/M impedance), and phased arrays. The PZT ceramic, with its large coupling coefficient, high permittivity, and fast response, makes an excellent piezoelectric sensor for SHM and NDE applications. PZT sensors are very small, lightweight, and inexpensive and require low power, which makes them an ideal solution for in situ and ex situ inspection in SHM and NDE applications. PZT sensors can be easily bonded as a layer on a host structure or between layers of a structure. An electric field is generated due to a change in the dimensions of the PZT material, or vice versa. Through their two-way intrinsic electro-mechanical coupling, these piezoelectric elements act as both sensors and actuators. Since a relationship exists between the mechanical properties of the host structure and the electrical response of the PZT, any change in the structural state can be obtained by measuring the coupled electro-mechanical properties of the PZTs. Piezoelectric sensors are the most widely used sensors for diagnosing the health of composite and metal structures using built-in distributed sensor networks. Due to their importance, the following paragraphs highlight the basic principle of piezoelectric sensors. The electro-mechanical coupling of a piezoelectric element acting as actuator and sensor can be described by the following equations:

$S_{ij} = s^{E}_{ijkl}\,T_{kl} + d_{kij}\,E_k$   (actuation)   (1)

$D_{i} = d_{ikl}\,T_{kl} + \varepsilon^{T}_{ik}\,E_k$   (sensing)   (2)
Here, $S_{ij}$ is the strain tensor; $s^{E}_{ijkl}$ the compliance tensor; $E_k$ the applied electric field; $\varepsilon^{T}_{ik}$ the dielectric constant; $T_{kl}$ the stress tensor; $d_{ikl}$ the piezoelectric constant; and $D_i$ the dielectric displacement. The piezoelectric constant is one of the most important properties of a piezoelectric sensor. For actuation, the piezoelectric constant gives the mechanical strain (S) generated by the piezoelectric material per unit applied electric field, whereas for sensing it gives the electric field generated per unit applied mechanical stress (T). The indices of the piezoelectric constant define the polarization direction and the strain direction. For example, $d_{31}$ denotes polarization in the $X_3$ direction with strain in the $X_1$ direction (e.g., $\varepsilon_1 = d_{31} E_3$). Similarly, $d_{33}$ denotes polarization in the $X_3$ direction with strain in the $X_3$ direction (Fig. 2). Piezoelectric materials can be considered transversely isotropic, where $d_{32} = d_{31}$, $d_{24} = d_{15}$, and $\varepsilon_{22} = \varepsilon_{11}$. Therefore, the constitutive equations become
Fig. 2 Electro-mechanical coupling responses of piezoelectric materials
$$
\begin{Bmatrix} S_1 \\ S_2 \\ S_3 \\ S_4 \\ S_5 \\ S_6 \end{Bmatrix}
=
\begin{bmatrix}
s^E_{11} & s^E_{12} & s^E_{13} & 0 & 0 & 0 \\
s^E_{21} & s^E_{22} & s^E_{13} & 0 & 0 & 0 \\
s^E_{13} & s^E_{13} & s^E_{33} & 0 & 0 & 0 \\
0 & 0 & 0 & s^E_{55} & 0 & 0 \\
0 & 0 & 0 & 0 & s^E_{55} & 0 \\
0 & 0 & 0 & 0 & 0 & s^E_{66}
\end{bmatrix}
\begin{Bmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \\ T_5 \\ T_6 \end{Bmatrix}
+
\begin{bmatrix}
0 & 0 & d_{31} \\
0 & 0 & d_{32} \\
0 & 0 & d_{33} \\
0 & d_{15} & 0 \\
d_{15} & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}
\begin{Bmatrix} E_1 \\ E_2 \\ E_3 \end{Bmatrix}
\tag{3}
$$

$$
\begin{Bmatrix} D_1 \\ D_2 \\ D_3 \end{Bmatrix}
=
\begin{bmatrix}
0 & 0 & 0 & 0 & d_{15} & 0 \\
0 & 0 & 0 & d_{15} & 0 & 0 \\
d_{31} & d_{32} & d_{33} & 0 & 0 & 0
\end{bmatrix}
\begin{Bmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \\ T_5 \\ T_6 \end{Bmatrix}
+
\begin{bmatrix}
\varepsilon^T_{11} & 0 & 0 \\
0 & \varepsilon^T_{22} & 0 \\
0 & 0 & \varepsilon^T_{33}
\end{bmatrix}
\begin{Bmatrix} E_1 \\ E_2 \\ E_3 \end{Bmatrix}
\tag{4}
$$
Fig. 3 Sensor network-based SMART Layer. (Courtesy Acellent Technologies Inc.)
Introduction to Sensor Networks

SHM systems involve multidisciplinary fields including sensors, materials, signal processing, system integration, and signal interpretation. The essence of the technology is the development of autonomous systems for the continuous monitoring, inspection, and damage detection of structures with minimal labor involvement. With the help of interpretation software, the technology can provide structural data from conception through the life of the structure, reducing the dependence on manual inspection. An important part of the SHM system is the proper integration of the sensors and actuators with the structure. Many methods exist for integrating single sensors and actuators into structures. However, a novel way to integrate a large number of sensors into a structure is through the SMART Layer (Fig. 3). The SMART Layer utilizes a built-in network of miniature piezoelectric transducers (PZTs) embedded in a thin dielectric carrier film to query, monitor, and evaluate the condition of a structure. This method eliminates the need for each sensor and actuator to be installed separately and drastically decreases the number of wires required during installation and usage. Additionally, the system can detect damage in all regions surrounding the sensors/actuators, providing an "image" of the structure, in contrast to technologies that require the damage to be in the path of the sensors/actuators, or in "line of sight," for it to be detected. The SMART Layer can either be treated as an extra ply and integrated within a composite structure during fabrication or retrofitted on the surface of any existing metal or composite structure. The layer can incorporate sensors other than PZTs, such as strain, temperature, and moisture gages, into the embedded network to monitor the complete state of the structure.
Signal Processing and Diagnostics

Damage detection using PZT sensors can be categorized into two main sensing modes, as shown in Fig. 4 [4, 5]:
Fig. 4 Active and passive sensing modes. (Courtesy Acellent Technologies Inc.) Active mode (actuator plus sensors): finds the location of structural changes; can scan large areas in minutes; can identify the type/size of damage when calibrated with known damages. Passive mode (sensors only): finds the location of impacts; records the date/time of occurrence; determines impact force/energy (to predict structural damage), but requires calibration with known impacts.
(a) Active sensing mode: In active damage detection, energy is imparted into the structure using transducers to create elastic waves. These incident waves travel through the structure and are scattered when they encounter a structural flaw, damage, or boundary condition. The scatter field is sensed using PZT sensors and compared with the incident waves to calculate scatter coefficients, a damage index, or frequency-domain features. In a typical SHM or NDE system, these features are analyzed to detect and characterize the damage. However, to identify damage using scattered wave fields, one must first understand the effects of different types of damage on these scattered waves.

(b) Passive sensing mode: In passive damage detection, sensors are used to sense an acoustic event such as an impact or crack propagation in the structure. In passive sensing, energy is not imparted into the structure by a transducer; rather, the elastic waves are generated by the energy released from an impact event or crack growth.

For far-field damage detection using active sensing methods, there are two basic sensor configurations: (1) pitch-catch and (2) pulse-echo. In the pitch-catch method, two sensors are used: one is the actuation sensor and the other is the receiver sensor. The waves generated by the actuation sensor interact with the damage; the scattered waves are captured by the receiver sensor, and by analyzing the received waves the damage can be detected. In this method, a baseline signal (pristine-structure signal) is often needed to understand the change in the wave features due to damage. In the pulse-echo method, the same sensor is used for both actuation and sensing. In contrast to the pitch-catch method, the wave reflected from the damage is captured by the same sensor. Figure 5 shows the pitch-catch and pulse-echo configurations. Typically, two different active sensing methods, a damage index method and a diagnostic imaging method, are used to quantify damage in metallic or composite structures.
Fig. 5 Pitch-catch and pulse-echo sensor configurations
Damage Index Method

The damage index method is useful for indicating damage in a pitch-catch sensor configuration with a single actuator–sensor path. Changes in the signal features are quantified as a damage index, which is related to the change in local material properties. Since the scattered wave contains information on both the amplitude and the phase delay of the wave due to the presence of a flaw, an appropriate damage index can be selected to correlate the change in sensor measurements, identify the damage, and quantify the damage size to some extent. However, the limitation of the damage index is that it utilizes only the information of the scattered (or directly transmitted) wave and may be limited in the damage types, damage sizes, and actuator–sensor configurations it can handle. Nevertheless, this is the most straightforward and powerful technique to assess the damage state of a structure. Several damage index methods that can be extracted from Lamb wave signals exist in the literature [6–10]. The change in the wave features is calculated in the time domain, frequency domain, and time-frequency domain. The following paragraphs present four damage indexes used to measure signal variations. A straightforward damage index can be calculated from the amplitudes of the incident and scattered waves, which are also known as scatter coefficients. However, the amplitude of the waves often varies due to environmental parameters rather than damage features. Therefore, it is often useful to calculate the damage index using the time-of-flight (TOF) of the signal rather than its amplitude variation. Equation (5) gives the damage index of a measured signal compared to the baseline signal for a specific time window.
$$DI = 1 - \int_{t_0}^{t_1} \frac{\left(b(t) - \mu_b\right)\left(m(t) - \mu_m\right)}{\sigma_b \, \sigma_m} \, dt \tag{5}$$

where b(t) and m(t) represent the baseline signal and monitoring signal, respectively; $t_0$ and $t_1$ define the selected signal time window; $\mu_b$ and $\mu_m$ are the mean values of the signals; and $\sigma_b$ and $\sigma_m$ are the standard deviations of the signals.
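As a concrete illustration, Eq. (5) reduces in discrete time to one minus the normalized cross-correlation of the two sampled windows. The sketch below is a minimal NumPy implementation under that interpretation; the array names and any preprocessing (band-pass filtering, windowing) are assumptions, not part of the original formulation.

```python
import numpy as np

def damage_index_correlation(b, m):
    """Correlation-based damage index (discrete form of Eq. 5).

    b, m : 1-D arrays with the baseline and monitoring signals
           sampled over the selected time window [t0, t1].
    Returns ~0 for identical signals and grows as the monitoring
    signal decorrelates from the baseline.
    """
    b = np.asarray(b, dtype=float)
    m = np.asarray(m, dtype=float)
    # Normalized cross-correlation of the two windows (discrete
    # counterpart of the integral in Eq. 5).
    rho = np.mean((b - b.mean()) * (m - m.mean())) / (b.std() * m.std())
    return 1.0 - rho
```

For a pristine structure, b and m are nominally identical and DI ≈ 0; under this reading, the DI threshold of 0.1 used in the rotorcraft example later in this chapter corresponds to roughly a 10% loss of correlation.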
The next damage index is calculated from the difference of the time-domain signals [9]. To be most useful, the difference signal d(t) should be independent of the amplitude of the original signals. Thus, the measured and reference signals are scaled with a scaling factor α. Hence, the damage index is an amplitude-independent measure of the difference between the signals:

$$DI = \int_{t_0}^{t_1} d(t) \, dt \tag{6}$$

$$d(t) = D(t) - \alpha b(t), \quad \text{where} \quad D(t) = \frac{m(t)}{\sqrt{\int_{t_0}^{t_1} m^2(t) \, dt}}; \quad \alpha = \frac{\int_{t_0}^{t_1} D(t) b(t) \, dt}{\sqrt{\int_{t_0}^{t_1} b^2(t) \, dt}} \tag{7}$$
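A discrete sketch of Eqs. (6) and (7) follows; the trapezoidal quadrature and the exact placement of the square-root normalizations are implementation assumptions, since the printed formula is ambiguous on these points.

```python
import numpy as np

def damage_index_difference(b, m, dt):
    """Amplitude-independent difference index (Eqs. 6 and 7).

    b, m : baseline and monitoring signals over [t0, t1]
    dt   : sampling interval
    """
    b = np.asarray(b, dtype=float)
    m = np.asarray(m, dtype=float)
    # D(t): monitoring signal normalized to unit energy (Eq. 7).
    D = m / np.sqrt(np.trapz(m ** 2, dx=dt))
    # Scaling factor alpha removes the common baseline content.
    alpha = np.trapz(D * b, dx=dt) / np.sqrt(np.trapz(b ** 2, dx=dt))
    d = D - alpha * b              # difference signal d(t)
    return np.trapz(d, dx=dt)      # Eq. 6
```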
A third damage index is calculated from the spectrum magnitude difference. In this formulation, the damage index is affected only by the change in the energy of the signal, not by the change in TOF:

$$DI = \sqrt{\frac{\int_{\omega_0}^{\omega_1} \left( |b(\omega)| - |m(\omega)| \right)^2 d\omega}{\int_{\omega_0}^{\omega_1} |b(\omega)|^2 \, d\omega}} \tag{8}$$

where $b(\omega) = \int_{t_0}^{t_1} b(t) e^{-j\omega t} dt$ and $m(\omega) = \int_{t_0}^{t_1} m(t) e^{-j\omega t} dt$; $\omega_0$ and $\omega_1$ are the limits of the selected frequency-domain window. The last signal variation index is a frequency-domain cross-correlation, similar to the time-domain cross-correlation except that it is evaluated in the frequency domain. It should be noted that this signal variation index is affected by both the change in the energy and the change in the shape or TOF of the signal:

$$DI = 1 - \sqrt{\frac{\left\{ \int_{\omega_0}^{\omega_1} b(\omega) m(\omega) \, d\omega \right\}^2}{\int_{\omega_0}^{\omega_1} b^2(\omega) \, d\omega \int_{\omega_0}^{\omega_1} m^2(\omega) \, d\omega}} \tag{9}$$
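Both frequency-domain indices can be evaluated from FFT spectra. In the sketch below, the band [ω0, ω1] is taken as the full positive-frequency axis for simplicity, and the conjugate product is used as the discrete analog of the cross-correlation integral in Eq. (9); both choices are assumptions rather than prescriptions of the original formulas.

```python
import numpy as np

def spectral_damage_indices(b, m):
    """Frequency-domain damage indices (Eqs. 8 and 9)."""
    B = np.fft.rfft(b)  # b(omega), baseline spectrum
    M = np.fft.rfft(m)  # m(omega), monitoring spectrum
    # Eq. 8: magnitude-difference index (sensitive to energy change only).
    di_energy = np.sqrt(np.sum((np.abs(B) - np.abs(M)) ** 2)
                        / np.sum(np.abs(B) ** 2))
    # Eq. 9: cross-correlation index (sensitive to energy and TOF/shape).
    num = np.abs(np.sum(B * np.conj(M))) ** 2
    den = np.sum(np.abs(B) ** 2) * np.sum(np.abs(M) ** 2)
    di_xcorr = 1.0 - np.sqrt(num / den)
    return di_energy, di_xcorr
```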
Diagnostic Imaging Method

To quantify the damage and to build up a reliable SHM system, it is essential to characterize the SHM system for its sensitivity in terms of its detection capability. The first step toward this goal is to understand the factors and parameters that affect the damage detection sensitivity. Typically, a standard SHM system involves four functional levels referred to as technology classification levels (TCLs):
I: Detection of occurrence of an event
II: Identification of the geometric location of the event
III: Determination of the magnitude or severity of the event
IV: Estimation of the remaining service life/strength (prognosis)

Recent advances in sensor technology and SHM systems have demonstrated the feasibility of detecting and locating damage in complex structures based on sensor measurements. However, quantifying the damage based on sensor configurations remains a challenging task. Quantifying damage is very important for understanding the severity and structural condition and for estimating the remaining life. The difference in the scattered wave from a damage site carries information about the damage, and by comparing different signals it is possible to estimate the size of the damage; however, this method alone is not accurate enough. A widely used method to identify the damage size or location is the diagnostic imaging method using multiple sensors or a sensor network [4]. Figure 6 illustrates the approach. For example, the scattered response from actuator 1 and sensor 2 contains only a scattered wave packet arriving at a given time. The total time delay t of the scattered wave should correspond to the wave travel time from actuator 1 to the damage (D) and then from the damage to sensor 2, such that:

$$t = t_{1-d} + t_{d-2} \tag{10}$$

$$t_{1-d} = \frac{l_{1-d}}{c}, \quad t_{d-2} = \frac{l_{d-2}}{c} \tag{11}$$

Fig. 6 Possible damage location based on the time delay of a single scattered wave packet
Fig. 7 Demonstration of active sensing using the damage index method and the diagnostic imaging method
where $l_{1-d}$ is the linear distance between actuator 1 and the damage, $l_{d-2}$ is the linear distance between the damage and sensor 2, and c is the speed of the wave. For unknown damage, there are many such combinations for a set of N sensors. The generic equations for the time delay can be written as follows:

$$t = t_{i-d} + t_{d-j} \tag{12}$$

$$t_{i-d} = \frac{l_{i-d}}{c}, \quad t_{d-j} = \frac{l_{d-j}}{c} \tag{13}$$
As shown in Fig. 6, assuming the velocity c is constant, the locus of the possible damage locations is an ellipse with actuator 1 and sensor 2 as its foci. To find the exact location of the scatter source or damage, the ellipses generated from other actuator/sensor pairs need to be used. Let I be an illumination value; I becomes maximum at the damage location when the contributions from all actuator–sensor pairs are summed (Eq. 14):

$$I_{SUM} = \sum_{i=1}^{N} \sum_{j=1}^{N} S_{ij}(\tau_P) \tag{14}$$
Figure 7 shows a demonstration of the damage index and diagnostic imaging methods for damage detection in SHM applications.
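Computationally, Eqs. (10)–(14) amount to a delay-and-sum imaging algorithm: for every pixel of a grid, the actuator–pixel–sensor travel time is evaluated, each scatter-signal envelope is sampled at that delay, and the contributions are summed. The sketch below assumes a constant wave speed c, envelope-detected scatter signals, and illustrative variable names; it is not the specific implementation behind Fig. 7.

```python
import numpy as np

def delay_and_sum_image(env, pairs, xy_sensors, grid_x, grid_y, c, fs):
    """Delay-and-sum damage image (Eq. 14).

    env        : dict mapping (i, j) actuator/sensor pairs to the
                 envelope of the scatter signal S_ij(t), sampled at fs
    pairs      : list of (i, j) index pairs
    xy_sensors : (n_sensors, 2) array of sensor coordinates
    grid_x/y   : 1-D arrays defining the image pixels
    c          : assumed constant wave speed
    fs         : sampling frequency
    """
    X, Y = np.meshgrid(grid_x, grid_y)
    image = np.zeros_like(X)
    for (i, j) in pairs:
        xi, yi = xy_sensors[i]
        xj, yj = xy_sensors[j]
        # Travel time actuator i -> pixel -> sensor j (Eqs. 12-13).
        tau = (np.hypot(X - xi, Y - yi) + np.hypot(X - xj, Y - yj)) / c
        idx = np.minimum((tau * fs).astype(int), len(env[(i, j)]) - 1)
        image += env[(i, j)][idx]  # accumulate S_ij(tau_P), Eq. 14
    return image  # the maximum indicates the likely damage location
```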
Estimating Impact Location

Acoustic or impact events are quite common in both metallic and composite plates. An impact can arise from many different sources, such as a sudden strike by a foreign object (tool drop, bird strike, or debris hit), fatigue crack generation in the
structures, matrix cracking, fiber breakage, and/or delamination of composites, etc. [11–13]. Qualitative and quantitative measurement of the damage from all possible events is an essential research area in SHM. Most impact detection algorithms are based on the time of arrival (TOA) of the elastic waves generated by the impact. There are several methods to calculate the TOA, such as the threshold, peak-signal, and double-peak methods [12, 14, 15]. However, TOA estimation is not always accurate enough to localize the impact and estimate the damage. For this reason, estimating the impact location by calculating the centroid of the power distribution over the structure from the sensor signals due to impact loading is a useful method. The distributed sensor network can collect a series of impact events from all over the structure. When a structure has sufficient sensors for monitoring impact events, the power distribution due to impact can provide a good estimate of the impact location [16]. Once an impact occurs, the signals due to the impact loading are recorded from the sensors distributed over the structure. First, the signal power is determined from the time average of the energy for a given time window; then the rms power is calculated to obtain the power distribution:

$$P = \frac{1}{t_f - t_0} \int_{t_0}^{t_f} |s(t)|^2 \, dt \tag{15}$$
Here, $t_0$ is the initial time and $t_f$ is the final time. The rms power can be calculated as:

$$P_{rms} = \sqrt{P} = \sqrt{\frac{1}{t_f - t_0} \int_{t_0}^{t_f} |s(t)|^2 \, dt} \tag{16}$$
Once the rms power for all sensors is calculated in a given time window, the power distribution over the structure can be found. By interpolation, a smooth power distribution can be obtained such that the interpolating surface satisfies the biharmonic equation and therefore has minimum curvature. Once the power distribution is obtained, the impact location can be found from the following equations in terms of the x and y coordinates:

$$P_{rms,i} = P_{rms,i}(x_i, y_i), \quad i = 1, 2, 3, \ldots \tag{17}$$

$$x_c = \frac{\sum_i P_{rms,i} \, x_i}{\sum_i P_{rms,i}}; \quad y_c = \frac{\sum_i P_{rms,i} \, y_i}{\sum_i P_{rms,i}} \tag{18}$$
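A minimal implementation of Eqs. (15)–(18) is shown below. The biharmonic-spline interpolation step described above is omitted for brevity; with a reasonably dense sensor network, the discrete power-weighted centroid already provides a first estimate of the impact location. The array layout is an assumption for illustration.

```python
import numpy as np

def impact_location_from_power(signals, xy):
    """Impact location as the rms-power-weighted centroid (Eqs. 15-18).

    signals : (n_sensors, n_samples) array of impact responses over
              the analysis window [t0, tf]
    xy      : (n_sensors, 2) array of sensor coordinates
    """
    signals = np.asarray(signals, dtype=float)
    P = np.mean(signals ** 2, axis=1)   # Eq. 15: time-averaged power
    P_rms = np.sqrt(P)                  # Eq. 16: rms power
    w = P_rms / np.sum(P_rms)
    xc = np.sum(w * xy[:, 0])           # Eq. 18: weighted centroid
    yc = np.sum(w * xy[:, 1])
    return xc, yc
```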
Estimating Impact Force

Estimating the impact location as well as determining the impact force is a nonlinear inverse problem. A load identification method (or load identifier) can be used to
estimate the impact location as well as the impact force [17]. This method consists of two parts: a system model and a response comparator (an identification program), as shown in Fig. 8. The system model is needed to characterize the dynamic response of the structure for a known impact force and location, and the response comparator compares the measured sensor outputs with the estimated measurements from the model to estimate the impact location and force history. For a beam-type structure, the response is governed by the following equation:

$$\rho A \frac{\partial^2 w}{\partial t^2} + EI \frac{\partial^4 w}{\partial x^4} = f(x, t) \tag{19}$$
where ρ is the density, A is the cross-sectional area, EI is the bending stiffness, w is the deflection, and f(x, t) is the force history. Equation (19) can be transformed into a set of two first-order differential equations, which can be written in state-space form as [17]:

$$\dot{z} = Az + Bf \tag{20}$$
Equation (20) states that the time derivative of the state at a certain time is the sum of the effect of the current state (Az) and the input (Bf). For a linear system, the sensor output is represented by a linear combination of the state variables:

$$y = C_{x_j} z \tag{21}$$
As the data acquisition is done in digital form, the discrete version of the system equations can be written as follows:

$$z_{n+1} = \Phi z_n + \Gamma f_n \tag{22}$$

$$y_j(n) = C_{x_j} z_n, \quad n = 0, \ldots, (N-1) \tag{23}$$
Here, N is the total number of sampling points, and $y_j(n)$ is the measured quantity (displacement, strain, etc.) at the sensor located at a distance $x_j$ from the wave
Fig. 8 Schematic drawing of the proposed impact load identification system
origin at time $t = nT_s$, where $T_s$ is the sampling period. The system matrices of the two representations are related as:

$$\Phi = \exp(A T_s) \tag{24}$$

$$\Gamma = \int_0^{T_s} \exp(A t) \, dt \; B \tag{25}$$
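Equations (24) and (25) are the standard zero-order-hold discretization of a continuous state-space model. One compact way to evaluate both matrices at once is the augmented-matrix-exponential identity expm([[A, B], [0, 0]]·Ts) = [[Φ, Γ], [0, I]]; the sketch below uses SciPy's matrix exponential and assumes A and B have already been assembled from a spatial discretization of the beam equation (19).

```python
import numpy as np
from scipy.linalg import expm

def discretize_zoh(A, B, Ts):
    """Zero-order-hold discretization (Eqs. 24-25).

    Returns Phi = exp(A*Ts) and Gamma = (integral_0^Ts exp(A*t) dt) B
    via a single matrix exponential of the augmented system.
    """
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm(M * Ts)
    Phi = E[:n, :n]
    Gamma = E[:n, n:]
    return Phi, Gamma
```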
The response comparator is designed to compare the sensor measurements with simulated sensor outputs from the system model and to determine from the comparison the location and force of the impact. Let $x_o$ be the actual distance between a designated sensor and the impact point and $x_e$ be its estimate. The objective of the comparator is to find the location parameter $x_e$ and force history f(n) so that the model prediction matches the measured response as closely as possible. Let v be the difference between the measured outputs (y) and the model predictions ($y_e$), which contains both the effects of measurement noise and the error due to incorrect estimation of the location or history of the external force, i.e.,

$$v_n = y_{e,n}(x_e, f) - y_{m,n} \tag{26}$$
To minimize the difference, the problem can be defined as follows:

$$\min_{x_e, f} \; J = \frac{1}{2}\left[z(0) - z_0\right]^T S_0 \left[z(0) - z_0\right] + \frac{1}{2}\sum_{n=0}^{N-1} f_n^T Q f_n + \frac{1}{2}\sum_{n=0}^{N-1} v_n^T R v_n + \frac{1}{2}\left[z(N) - z_N\right]^T S_f \left[z(N) - z_N\right] \tag{27}$$
Here, Q is the weighting matrix for the input force and R the weighting matrix for the output residuals; $S_0$ and $S_f$ are weighting matrices for the initial and final conditions, respectively. The minimization of the above equation is subject to the constraints of the system equation (Eq. 22). The performance index is modified to accommodate these constraints using Lagrange multipliers λ:

$$\bar{J} = J + \sum_{n=0}^{N-1} \lambda^T \left[\Phi z_n + \Gamma f_n - z_{n+1}\right] \tag{28}$$
To minimize $\bar{J}$, the algorithm developed by Idan and Bryson [18], based on a smoothing technique, was adopted. The parameter $x_e$ (the location of the force measured from a sensor) is updated using a quasi-Newton procedure to minimize $\bar{J}$:

$$x_{e,\text{new}} = x_{e,\text{old}} - \bar{J}_{xx}^{-1} \bar{J}_{x_e}$$

The inverse of the Hessian matrix $\bar{J}_{xx}$ is estimated numerically from successive values of the gradient vector $\bar{J}_{x_e}$ using a rank-two update procedure [17]. A local minimum is found for each initial guess following the procedure shown in Fig. 9.
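A loose sketch of this outer loop is given below. A plain secant approximation stands in for the rank-two Hessian update of [17], and the evaluation of the performance index for a trial location — which internally solves for the optimal force history via the smoothing algorithm of [18] — is left as a user-supplied callable; all names and tolerances are illustrative.

```python
import numpy as np

def update_location(J, xe, step=1e-3, tol=1e-6, max_iter=50):
    """Quasi-Newton search for the impact location parameter xe.

    J : callable returning the performance index J-bar for a trial
        location (force history assumed optimized inside J).
    """
    xe_prev, g_prev = None, None
    for _ in range(max_iter):
        # Central finite-difference gradient dJ/dxe.
        g = (J(xe + step) - J(xe - step)) / (2.0 * step)
        # Secant estimate of the curvature (the 1-D "Hessian").
        if xe_prev is not None and abs(xe - xe_prev) > 1e-12:
            h = (g - g_prev) / (xe - xe_prev)
        else:
            h = abs(g) / step + 1e-12   # conservative first guess
        xe_prev, g_prev = xe, g
        # Newton step; fall back to a fixed step if curvature <= 0.
        xe = xe - g / h if h > 0 else xe - step * np.sign(g)
        if abs(xe - xe_prev) < tol:
            break
    return xe
```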
Fig. 9 Impact location and force identification procedure flowchart
Application Examples

Scheduled and automated SHM systems have been tested in a number of applications. Selected examples are provided below.
Rotorcraft

A timer-based automated Tail Boom Crack Detection System (TBCDS) was designed, manufactured, and tested for the OH-58D helicopter, providing the capability to sense changes in the structural integrity of the aircraft [19]. The target monitoring area was the aftmost rivet securing the tail rotor driveshaft cover left-hand support (Fig. 10). This rivet is currently inspected visually or with eddy currents. Sensors placed in the tail boom area of interest were designed to monitor the structural health and communicate the health status with the HON-1134 Honeywell Health and Usage Monitoring System (HUMS) and the Personal Computer – Ground-Based System (PC-GBS). The PC-GBS processes the data and displays results to the user to indicate the health of the monitored portion of the tail boom. A system block diagram is shown in Fig. 11.
Fig. 10 Preliminary target monitoring area [19]
Fig. 11 TBCDS block diagram [19] (SMART Layer in the tailboom area connected via a customized shielded cable to the ScanGenie II unit in the instrument area, which links to the HUMS over Ethernet)
The TBCDS consists of three subsystems:

(i) An eight-sensor SMART Layer
(ii) Onboard data acquisition and control hardware that interfaces with the OH-58 HUMS system
(iii) Data analysis and management software integrated with the ground station for structural health assessment

Design considerations include the following:

• Detection of a 0.196 in. crack with 90% probability of detection (POD) at 95% confidence
• Low clearance for the connector
• Cabling of the sensor layer on the tailboom

Based on these design considerations, the SMART Layer was designed as shown in Fig. 12.
Fig. 12 Design of the sensors
Fig. 13 Coupon with the stiffener and the rivets
Damage Verification Testing

An eight-PZT-sensor SMART Layer was manufactured and bonded to a test coupon for detecting, localizing, and quantifying fatigue cracks initiating at the hot-spot regions in the structure, as shown in Fig. 13 [20]. Each PZT has a diameter of 5 mm and a thickness of 0.4 mm. The test coupon, with dimensions of 305 mm × 305 mm × 1.6 mm, was loaded in a uniaxial testing machine. A stiffener and three rivets were added to the main plate to represent the tailboom section of the rotorcraft, as shown in Fig. 13. Before the fatigue test, a damage simulator was used to collect data in advance for quantification.
Level I Damage Detection

The DI value with respect to the fatigue cycles is plotted in Fig. 14. The first visually detected crack size was 1 mm. The fatigue crack growth was monitored by controlling the fatigue cycles. The trend of the DI level matches the crack growth well. The threshold for damage detection for the given configuration is a DI level of 0.1.

Level II Damage Localization

After damage detection, damage localization was performed. A reflection-based algorithm (RBA) was used to estimate the damage location because the fatigue crack occurred outside of the sensor layout. Figure 15 shows the generated damage localization images, which correctly localize the different fatigue cracks.

Level III Damage Quantification

After detecting and localizing fatigue cracks, it is important to quantify the crack size to help in the selection of a damage mitigation approach and also to accurately predict the remaining useful life (RUL) of the structure for prognostics. A novel nondestructive damage simulator (metal block) was used to generate fatigue crack calibration curves by analyzing experimental and damage simulator-based sensor data. The approach consists of two steps: (1) estimating a calibration curve by learning the relation between the damage simulator and actual damage with respect to sensor data, and (2) correlating actual damage size using the calibration curve and sensor data on any other identical structure. Using this approach, cracks of varying lengths are simulated without creating any actual damage in the structure under inspection. The calibration curves are used to estimate the actual damage size through the sensor signal collected from structures similar to the one used for estimating the calibration curves.
Fig. 14 DI history at 200 kHz with respect to the fatigue cycles (damage index vs. cycles ×10³, with annotated crack sizes from 1 mm to 15 mm)
Fig. 15 Damage localization from different fatigue crack sizes at 200 kHz

Fig. 16 Damage simulators of varying sizes and cross section of damage simulator
The developed damage simulator approach helps in (i) noninvasively estimating calibration curves for fabricated structures as well as in-service structures, (ii) accurately quantifying the size of the crack, and (iii) not changing/affecting the performance of the structure under inspection. Figure 16 shows damage simulators of different lengths used to simulate fatigue cracks of different sizes. The 6 mm damage simulator was attached near the critical/high-stress region at rivet 1 using removable adhesive (Aquabond 55™), and active diagnostic data were collected. The damage simulator was easily removed by pouring hot water onto it. This data collection procedure was repeated to collect simulated data with 9 mm, 15 mm, and 18 mm damage simulators. For any given identical structure, damage simulator data can be collected with several sizes of damage simulator to estimate the relationship between the DI and the damage simulator size for the given structure. Figure 17 shows the comparison of actual and estimated crack size from this approach [20].
Flight Testing

The complete TBCDS was installed on a Bell OH-58 demonstrator aircraft as shown in Fig. 18 [21]. Once installed, a functional test procedure (FTP) was
Fig. 17 Comparison of actual and estimated crack size [20] (crack size in mm vs. number of fatigue cycles ×10⁴)
developed to ensure the system was installed correctly and working as expected. All testing was completed successfully, with the TBCDS operating as intended. As part of the program, an onboard version of the SHM system for hot-spot monitoring was developed and tested in accordance with the Airworthiness Qualification Plan (AQP) to satisfy the Flight Test and Fielding Demonstration Airworthiness Requirement (AWR) requirements. Testing was successfully conducted in accordance with MIL-STD-810G. The entire system, along with communication with the HUMS system, was successfully flight tested. The objective of the flight test demonstration was to establish the benefits of, or identify any challenges with, taking automated TBCDS data on a helicopter during flight. The stress, vibration, and acoustic environments experienced on a flying helicopter are much different from those experienced in a laboratory or test environment. For the flight test demonstration, 12 data sets were taken in 6 different flight conditions. Two data sets were taken for each condition to allow for a backup in case of any data collection issues. These conditions were chosen because a near-constant load could be maintained in the area of interest during data acquisition, so that the sensitivity of the TBCDS to flight loads could be evaluated.
Fig. 18 TBCDS diagnostic hardware and sensors installed on aircraft [19]
The flight log lists the data collection points as follows:

1. A/C on ground, electric powered, no engine running
2. A/C on ground, electric powered, no engine running
3. A/C on ground, engine running at idle
4. A/C on ground, engine running at idle
5. A/C in steady hover, in ground effect
6. A/C in steady hover, in ground effect
7. A/C in forward level flight, ~112 KIAS
8. A/C in forward level flight, ~112 KIAS
9. A/C in steady heading side slip, 1/2 right ball, 101 KIAS
10. A/C in steady heading side slip, 1/2 right ball, 101 KIAS
11. A/C in rearward flight, steady 20 knots
12. A/C in rearward flight, steady 20 knots
Data was collected from the sensors during each condition. The impedance value from each sensor was measured immediately before each data measurement and was found to be consistent across all datasets. The mean and standard deviation of the damage index (DI) values from all the sensor paths were plotted for each dataset in Fig. 19. It was observed that the maximum DI value from all the flight data was under 0.04, which is an order of magnitude less than typical structural damage DI values.
Pipeline

The natural gas pipeline industry consists of transmission and distribution companies. These pipeline systems can be simple or complex. Due to the potentially catastrophic nature of damage occurring in pipelines, it is important that utility providers be warned immediately whenever damage occurs so that the utility can take immediate remedial action to limit damage and economic loss.
Fig. 19 DI values for the flight maneuvers [19] (damage index vs. flight dataset #, labeled by flight condition)
Under a program funded by the California Energy Commission (PIR-12-013), a Real-time Active Pipeline Integrity Detection (RAPID) system for scheduled corrosion monitoring in pipelines was integrated and tested under predeployment conditions. The RAPID system is an active structural health monitoring (SHM) system designed for scheduled inspection of pipeline corrosion and damage events [22]. The system combines the SMART Layer technology, diagnostic hardware, and software. As the operation of the RAPID system depends on a distributed network of sensors, a key parameter of the system development is the determination of appropriate sensor spacings. In order to determine the best sensor arrangement for detecting corrosion damage on pipes, a series of tests was performed on different sensor spacings utilizing sections of steel pipe. For these tests, corrosion was simulated by material removal via grinding. Based on the results of these tests, it was found that the optimum system performance was achieved by installing two rows of sensors around the circumference of the pipe, with the two rows spaced 12 inches apart along the length of the pipe and individual sensors spaced 3 inches apart within each row. The result is sensors spaced every 12 inches along the length of the pipe and every 3 inches around the pipe circumference. An example of this sensor layout can be seen in Fig. 20a. In addition to the sensor-spacing design work, work was also performed to determine an appropriate coating to be placed over the sensors to protect against environmental degradation. Based on previous work Acellent had performed, it was
Fig. 20 (a) Optimum sensor spacing, (b) SMART Layers installed on a pipe with protective coating [22]
found that the system performed effectively when protected with a fiberglass epoxy film. An example of the protective coating with SMART Layer sensors installed underneath can be seen in Fig. 20b. In order to ensure efficient operation of the RAPID system, the three primary components of the system had to be effectively integrated. For the system to operate properly, the hardware needed to collect accurate data from the installed sensors and transmit that data to the remote inspection office, where the damage detection software could analyze the data and report the result to the end user. To ensure proper system operation, a significant effort was devoted to developing and testing the software and hardware linking the individual components together into a complete system. This task incorporated enabling the in situ hardware and sensor package with a wireless Internet connection. This was complemented by a pair of data collection, management, and transmission programs working over the Internet link to remotely retrieve data collected by the hardware and store it in the appropriate location on the remote inspection terminal in the pipeline inspection office. This management software was then linked directly to the graphical user interface (GUI) along with the damage detection algorithms to allow the system to autonomously collect data, analyze it, and display the result on the GUI along with an analysis of how any damage has changed over time. This integrated autonomous system requires minimal user input and is designed to be easily utilized by an individual with minimal training. To ensure the reliability of the system for in situ operation, a significant amount of component testing was performed during the system development. Two specific sets of tests were performed to ensure system reliability. First, the sensor/coating combination was tested for survival against external impact to ensure that it would survive in the field. Second, the data transfer system was tested to ensure that data could be transmitted from the hardware to the inspection office even under poor conditions. For the sensor survivability tests, a standard impact test was performed in which a 1 kg weight was dropped directly on top of the sensor from a height of 70 cm and the sensor response was measured. This test was chosen because sensor survivability against impact is a critical requirement for ultimate certification of the system for use in the oil and
Fig. 21 (a) Example of an impact test performed on the coated sensor layers; (b) summary of testing performed and results [22]
natural gas industry. Based on these tests, it was found that the sensors would survive even multiple repeated impacts at the same location. A sample demonstration of the tests can be seen in Fig. 21a. The data transfer system was also tested for reliability to ensure the effective transfer of data from the hardware to the end user. This testing was performed in partnership with Verizon at their wireless testing facilities in San Francisco. The communications environments that were tested included poor network quality, wireless disconnection and reconnection, and fading network quality. The system was found to be able to reliably transfer data from the hardware to the remote user under any condition except a long-duration disconnection of the hardware from the wireless network, in which case failure cannot be avoided. Even under these conditions, the system noted the failure and terminated the process gracefully, allowing the system to reconnect once communication was restored. As a result, it was determined that the data transfer system was adequately designed against failure and would operate reliably in the field. To validate the system for use in field applications, a test was performed on a test loop at the PG&E Advanced Technology Services Center in San Ramon, CA. For this test, a prototype system was installed on the test loop to monitor a 1-foot section of the pipeline. The system was monitored remotely while workers at PG&E applied simulated corrosion to the pipe section using a plasma gouge in a blind manner. The data were then analyzed to determine where damage had been applied and how large and deep the damage was estimated to be, without any prior knowledge of when, where, or how large this damage was. Overall, the RAPID system proved capable of detecting the damage without any foreknowledge of its existence. A summary of the testing performed and the results can be seen in Fig. 21b.
Paper-Manufacturing Equipment

Automated monitoring of the integrity of the rotating roller(s) of paper machinery to optimize the paper production processing parameters is crucial for improving the production efficiency and the quality of paper sheet and board products. The paper manufacturing industry has found that significant cost savings can be obtained by improving the nip profiles, parent roll hardness profiles, and tension profiles of the rolling components in the machines [21]. Utilizing the SMART Layer sensing technology, a real-time integrity detection system called iRoll, shown in Fig. 22, was developed for in situ integrity monitoring of rotating roller(s). The iRoll system utilizes the SMART Layer technology, which can be surface-mounted on metallic structures. The iRoll system uses a long sensor layer, up to 12 meters in length, that is permanently mounted on the metallic roll surface at an angle with composite-wrapped protection. A wireless power supply system and a remote receiver are used to conduct data collection and house the monitoring software. Figure 23a shows where the long sensor layers are installed in order to measure the pressure on the surface of a roller. Figure 23b shows the finished surface of a roller with a long sensor strip embedded in it. With the integration of the long sensor layer strip in the roller, the whole system is called iRoll [21]. When the iRoll system is in operation, the sensors mounted on the roller rotate under the wrap angle, generating a continuous load signal. A mapping of the pressure profiles can then be generated as a function of the angular position of the roll (see Fig. 24). Data is processed by a signal conditioning module and transmitted from the rotating roll by digital radio transmission. The system can evaluate and diagnose the uniformity of loading or tension on the surface of the roll. It can also be utilized on a covered roll in paper, board, pulp, or tissue production machines to expand the roll's primary function to include use as a transducer for sensing cross-machine nip linear load or sheet properties such as the parent roll hardness profile, ensuring product quality and improving operational efficiency. A 12-meter-long SMART Layer strip was installed in an iRoll system for testing the performance of the sensors. An example of a cross dimension nip load profile measured on a rotating roll in a paper machine environment using the iRoll system is seen in Fig. 25.
Fig. 22 Real-time integrity detection system – iRoll
Fig. 23 Sensor layer mounted on the metallic roll surface at an angle with composite-wrapped protection to measure the pressure
Fig. 24 Pressure map profiles visualization from iRoll
Another example of cross dimension nip load profiles measured on a rotating roll using the iRoll system is shown in Fig. 26. This color map illustration shows the changes in the nip load over a longer period of time. A cross dimension illustration of nip impulse measurements obtained using the iRoll system is shown in Fig. 27. Figure 27a shows a machine-direction nip load distribution measured on a rotating roll using the iRoll system. Figure 27b illustrates the ability to detect the nip impulse in the tangential direction throughout the roller nip and also to detect the nip length.
Fig. 25 Mechanical load on the surface of roller monitored from iRoll
Fig. 26 Pressure of roller in color map
High Speed Train

To accommodate the rapidly developing economies of various countries, the high-speed train has attracted much attention during the past two decades. A case study using automated SHM technology to monitor the structural integrity of a high-speed EMU (electric multiple unit) train was performed [23]. SMART Layer sensor networks, as shown in Fig. 28, were employed to collect sensor data using appropriate hardware and software. A line test program involving a high-speed EMU train was conducted to identify and evaluate compatibility issues for the customized SHM system under normal train operations. Figure 29 shows the overall concept of operations for the onboard SHM system for monitoring high-speed train components. Due to time constraints during the sensor installation, the SMART Layers were installed with room-temperature curing of the adhesive, requiring at least 21 days for the data to stabilize. Figure 30 shows the DI plot of the signals collected during the months of June, July, and September. It can be seen that there is a sharp rise in the DI
Fig. 27 "Nip" loads on the surface of a roller in color map (2D and 3D)

Fig. 28 Sensor layout for the line test [23]
Fig. 29 Concept of operations using the SHM system for high-speed train [23]
Fig. 30 Stable data collected during line tests [23]
of the data collected at the beginning of June, which eventually settles down toward the end of June; this is because the SMART Layers take at least 3 weeks to settle and emit stable signals. The damage index for all the data and paths collected between June and November using the simulated data is shown in Fig. 30. It can be seen in Fig. 30 that after the curing period, the DI for all the measurements falls below the damage detection threshold, which verifies that the SHM system is robust against the environmental and operational variations occurring during the operation of the train. The results also showed that the SHM system works smoothly within inter-city operations and is effective for monitoring the structural integrity of the components of a high-speed EMU train in service.
IIoT Solutions

The new generation of the Industrial Internet of Things (IIoT) will transform the industrial age into the information age by exploiting the opportunities and benefits offered by Information Age technology and techniques. Assets and infrastructure continue to drive innovation in systems and the deployment of IoT technologies that have the potential to deliver new capabilities and cost savings. A key enabler for the IIoT is sensors and the data procured from them. Sensor-based IoT devices can gather more data, facilitate more complex analysis and faster reactions, and reduce human error, delivering more precise and efficient capabilities.
Sensor network-based structural health monitoring (SHM) systems have the potential to provide real-time and historical sensor-based data on the integrity of any structural platform to enable the IIoT. SHM systems can reliably detect, localize, and quantify damage in components of existing and new structural assets and use the identified damage to decide when to remove the component or for prognosis of the remaining useful component life and system performance. Platform readiness, minimization of costs from unnecessary teardowns, and enhanced safety are some of the major goals of SHM. SHM systems have been used commercially in a number of markets including aircraft, heavy machinery, mining, pipelines, etc. The system can provide the following benefits to these markets:

• Enhanced readiness – the SHM system can increase readiness by providing any platform the ability to accurately detect, localize, and quantify damage initiation and growth and to define periodic inspection requirements based on the actual usage of the platform. The platform life can then be extended based on actual usage.

• Reduced cost – the SHM system can decrease structural inspection costs, increase platform readiness, and enable service life extension. The system can continuously monitor the health of critical structures without the need for costly inspections requiring structure disassembly and reassembly, along with reduced false alarms.

• Improved timeliness and thoroughness of test and evaluation outcomes – the SHM system can be a self-powered diagnostic unit that can quickly collect data from the region being monitored and provide damage information. This information enables decision-making on the readiness of the platform based on usage and provides advice on maintenance actions.

SHM systems can be used to periodically collect data remotely from any structure for use in data analysis and structural health monitoring. Figure 31 shows the SHM system for data collection and monitoring. Data for the SHM system is collected remotely on a periodic basis and transferred using a cloud-based data/information platform.
Summary

An efficient SHM method is essential for structures; it should (i) perform inspection from manufacturing through service while being cost-effective and labor-efficient, (ii) be automatable and performable anytime and anywhere, and (iii) not require the structure to be taken out of service for disassembly. The ideal method to meet these requirements is a built-in inspection tool based on integrated sensors used for SHM, which can be used to assess the structure. This method must be:

(a) Low cost to fabricate and easy to install
(b) Low weight, with no penalty to structural properties
(c) Accurate and reliable in real time
Fig. 31 Schematic of system
This chapter described an SHM system as one in which a network of sensors is attached to or embedded into the structure; these sensors monitor the structure for changes. An effective and robust diagnostic tool using ultrasonic wave-based methods for near-field (local) and far-field (remote) damage detection and assessment using embedded sensor networks was discussed. SHM systems can be classified into two types: passive sensing and active sensing systems. Passive sensing techniques use the built-in sensors to collect data in a passive mode, which is utilized to detect, evaluate, and determine the state of health of the structure. In contrast, active sensing techniques rely on built-in actuators to apply a predefined excitation to the structure and use the neighboring sensors to collect the propagating acoustic wave signals. Again, the sensor data is used to detect, evaluate, and determine the state of health of the structure. The sensor data is processed using advanced signal processing techniques in order to extract appropriate damage-sensitive features. SHM has the potential to solve the problem of large-area interrogation of composite and metallic structures.
References

1. Janapati V, Kopsaftopoulos F, Li F, Lee SJ, Chang FK. Damage detection sensitivity characterization of acousto-ultrasound-based structural health monitoring techniques. Struct Health Monit. 2016;15(2):143–61.
2. Moreno-Gomez A, Perez-Ramirez CA, Dominguez-Gonzalez A, Valtierra-Rodriguez M, Chavez-Alegria O, Amezquita-Sanchez JP. Sensors used in structural health monitoring. Arch Comput Meth Eng. 2018;25(4):901–18.
3. Gautschi G. Piezoelectric sensors. In: Piezoelectric Sensorics. Berlin/Heidelberg: Springer; 2002. p. 73–91.
4. Ihn JB, Chang FK. Pitch-catch active sensing methods in structural health monitoring for aircraft structures. Struct Health Monit. 2008;7(1):5–19.
5. Markmiller JF, Chang FK. Sensor network optimization for a passive sensing impact detection technique. Struct Health Monit. 2010;9(1):25–39.
6. Boller C, Chang FK, Fujino Y, editors. Encyclopedia of structural health monitoring. Wiley; 2009.
7. Su Z, Ye L. Identification of damage using Lamb waves: from fundamentals to applications. Springer Science & Business Media; 2009.
8. Qiu L, Yuan S, Chang FK, Bao Q, Mei H. On-line updating Gaussian mixture model for aircraft wing spar damage evaluation under time-varying boundary condition. Smart Mater Struct. 2014;23(12):125001.
9. Michaels JE, Michaels TE. Detection of structural damage from the local temporal coherence of diffuse ultrasonic signals. IEEE Trans Ultrason Ferroelectr Freq Control. 2005;52(10):1769–82.
10. Qing XP, Beard S, Shen SB, Banerjee S, Bradley I, Salama MM, Chang FK. Development of a real-time active pipeline integrity detection system. Smart Mater Struct. 2009;18(11):115010.
11. Wang CS, Chang FK. Built-in diagnostics for impact damage identification of composite structures. In: Proceedings of the 2nd international workshop on structural health monitoring; 1999. p. 8–10.
12. Seydel R, Chang FK. Impact identification of stiffened composite panels: I. System development. Smart Mater Struct. 2001;10(2):354.
13. Haider MF, Giurgiutiu V. Analysis of axis symmetric circular crested elastic wave generated during crack propagation in a plate: a Helmholtz potential technique. Int J Solids Struct. 2018;134:130–50.
14. Gunther MF, Wang A, Fogg BR, Starr SE, Murphy KA, Claus RO. Fiber optic impact detection and location system embedded in a composite material. In: Fiber optic smart structures and skins V, vol. 1798. International Society for Optics and Photonics; 1993. p. 262–9.
15. Kammer DC. Estimation of structural response using remote sensor locations. J Guid Control Dyn. 1997;20(3):501–8.
16. Park J, Ha S, Chang FK. Monitoring impact events using a system-identification method. AIAA J. 2009;47(9):2011–21.
17. Choi K, Chang FK. Identification of foreign object impact in structures using distributed sensors. J Intell Mater Syst Struct. 1994;5(6):864–9.
18. Idan M, Bryson AE. Parameter identification of linear systems based on smoothing. J Guid Control Dyn. 1992;15(4):901–11.
19. Girard W, Tucker B, Bordick N, Lee SJ, Kumar A, Zhang D, Li F, Chung H, Beard S. Flight demonstration of a SHM system on an OH-58 aircraft. Struct Health Monit. 2013.
20. Lee SJ, Pollock P, Kumar A, Li F, Chung H, Li I, Janapati V. Fatigue crack characterization for rotorcraft structures under varying operational conditions. In: AHS International 69th Annual Forum.
21. Li F, Li J, Chung H, Cheung C, Kettunen K, Pikanen T, Kumar A. Long sensor layers for machinery monitoring. Struct Health Monit. 2015.
22. Real-time Active Pipeline Integrity Detection system for gas pipeline safety monitoring. California Energy Commission (CEC), Final report for Contract Number: PIR-12-013, CEC report # CEC-500-2015-095.
23. Chung H, Mishra S, Singhal T, Li F, Li I, Kumar A, Ding S, Liu S, Lin P, Du M, Ma L. Introducing ATOMS – active train online monitoring system. In: 2nd international workshop on structural health monitoring for railway system (IWSHM-RS 2018); 2018.
Probabilistic Lifing
23
Kai Kadau, Michael Enright, and Christian Amann
Contents

Introduction
Deterministic and Probabilistic Lifing Methods
  Basic Concept
  Simple Disk Example
  Probabilistic Risk Contours
Probabilistic Design Criteria
  Health- and Safety-Related Acceptable Risk Criteria
  Business and Cost Performance Criteria
  Relative Risk Approach
Application Examples
  Aero Engine
  Power Generation
Summary and Outlook
Cross-References
References
Abstract
In this chapter, fundamental concepts are presented for probabilistic fatigue life prediction including the influences of nondestructive inspection. A review of the traditional deterministic lifing approach (involving safety factors) is provided and contrasted with the probabilistic approach that involves random input variables
and acceptable risk levels. The selection of acceptable risk levels for design and service decisions is discussed in the context of health, safety, and business cost/performance considerations. Application examples are provided for critical rotating equipment commonly found in the aero-engine and energy industries. The concepts presented in this chapter are general and can be applied to life prediction of components and systems in other industries.

Keywords
Probabilistic design · Lifing · Rotor · Gas turbine · Anomaly distribution · Forging flaws · Probability of detection · Fracture mechanics · Acceptable risk · NDE
Introduction

Conventional gas turbine rotor life prediction methodologies are based on nominal conditions that do not adequately account for material and manufacturing anomalies that can degrade the structural integrity of high-energy rotors. For example, premium grade titanium materials commonly found in aircraft engine fan and compressor disks may contain brittle anomalies that form during the triple vacuum arc melting process. If undetected during manufacturing or subsequent field inspection, they can ultimately lead to uncontained failure of the engine. Two uncontained aircraft engine events have occurred over the past several decades that have motivated a change in the methodology that is used for the design and life management of high-energy rotating components. The first incident occurred in 1989 near Sioux City, Iowa [1], involving the rupture of a titanium rotor due to an inherent material anomaly. This event led to the development of an enhanced life management process documented in Federal Aviation Administration (FAA) Advisory Circular (AC) 33.14-1 [2]. A second incident occurred in 1996 in Pensacola, Florida [3], involving the rupture of a fan disk due to a machining-induced anomaly in a bolt hole. The eventual outcome of this event was another FAA Advisory Circular, AC 33.70-2 [4]. This AC presents a damage tolerance approach for life management of turbine engine rotating parts containing machined hole features. Both ACs use a probabilistic damage approach to address rare material anomalies that can lead to failure of a component. This approach serves as a supplement to the existing safe-life methodology for assessment of aircraft gas turbine engines. The conventional damage tolerance approach focuses on the prediction of fatigue crack growth (FCG) life. Anomalies of a specified size are virtually placed at key locations within a part, FCG lives are computed at these locations, and the minimum FCG life is identified for the part. The computed minimum FCG life can be compared with the design target life (DTL) to determine if the component will fail or survive during its lifetime. Factors of safety are typically applied to account for the uncertainty in the computed minimum FCG life.
In contrast, the probabilistic damage tolerance approach is focused on the likelihood that the computed FCG life is less than the DTL. This is accomplished by establishing a limit state and computing the probability that the limit state will be violated. This is called the probability of failure (PoF) of the part. For example, the FCG failure limit state can be defined in terms of the stress intensity factor K associated with the geometry and loading and the fracture toughness Kc associated with the material [5]:

$$g(X, N) = K_c - K(X, N) \le 0, \tag{1}$$

where X is a vector of key input random variables, and N is the number of applied cycles. A negative or zero g(X, N) represents a failure event. The probability of failure associated with this event is given by:

$$\mathrm{PoF} = P[\,g(X, N) \le 0\,]. \tag{2}$$
The computed PoF can be compared to the design target risk (DTR) to determine if the risk of failure meets the required risk during its lifetime. A number of different random variables can be considered when computing the PoF associated with the FCG limit state. For parts containing material anomalies, the potential X random variables include material anomaly size and location, stress, and material properties (among others). For such parts, the likelihood of FCG failure is dominated by the occurrence probability of an anomaly that can be located anywhere within a part. To account for the uncertainty in the location of anomalies, a zone-based risk integration approach [5] can be used in which the part is divided into a number of zones of approximately equal risk. The risk is computed in each zone, taking into account the zone anomaly occurrence probability. The FCG risk for the part is given by:

$$\mathrm{PoF} = P[F_1 \cup F_2 \cup \dots \cup F_m], \tag{3}$$

where Fi is a failure event in zone i. If the anomaly occurrence rate is small, the above equation can be simplified as:

$$\mathrm{PoF} \approx \sum_{i=1}^{m} P[F_i], \tag{4}$$

where m is the number of zones. This can also be expressed as [5]:

$$\mathrm{PoF} \approx \sum_{i} \delta_i \, p_i, \tag{5}$$
where δi is the defect occurrence probability in zone i and pi is the (conditional) probability of failure of zone i given a single anomaly in zone i. The zoning approach provides smaller zones in regions of a part where the risk is expected to be large, and larger zones in regions where the risk is expected to be low. Another approach is to divide the component into equally sized regions called voxels (i.e., volume-pixel elements). In this case, the component is embedded in a computational Cartesian domain. This approach is numerically efficient and enables the utilization of high-performance computing algorithms and hardware, including distributed hardware and graphics-processing units (GPUs) [6–8]. The zone-based and voxel-based approaches are illustrated in Fig. 1.

Fig. 1 Illustration of the zone-based and voxel-based representations of a simple rectangular component (shown in gray): (a) the zone-based approach consists of four unequally sized zones, and (b) the voxel-based approach consists of m equally sized voxels where the component is a subset of the computational domain
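To make the zone-based risk integration concrete, the following minimal Python sketch evaluates Eqs. (3)–(5) for a handful of zones. The zone data are invented placeholders, and the union in Eq. (3) is evaluated under an added independence assumption between the zone failure events.

```python
# Minimal sketch of zone-based risk integration, Eqs. (3)-(5).
# Zone data are illustrative placeholders, not real forging data.

delta = [1e-4, 5e-5, 2e-5, 1e-6]    # delta_i: anomaly occurrence probability of zone i
p_cond = [0.30, 0.10, 0.05, 0.01]   # p_i: conditional PoF given one anomaly in zone i

# Per-zone failure probability, P[F_i] ~ delta_i * p_i (the terms of Eq. (5))
zone_pof = [d * p for d, p in zip(delta, p_cond)]

# Eqs. (4)/(5): small-occurrence-rate approximation (plain sum over zones)
pof_sum = sum(zone_pof)

# Eq. (3): union of the zone failure events, here assuming independence:
# P[F_1 u ... u F_m] = 1 - prod_i (1 - P[F_i])
complement = 1.0
for pf in zone_pof:
    complement *= 1.0 - pf
pof_union = 1.0 - complement

print(f"PoF, sum approximation (Eq. 5): {pof_sum:.3e}")
print(f"PoF, union of events (Eq. 3):   {pof_union:.3e}")
```

For occurrence rates this small, the two results agree to several digits, which is exactly why the simplification of Eq. (4) is admissible.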
Sampling-based probabilistic analysis methods can be used to compute PoF for a specified N. Monte Carlo simulation provides accurate results (the accuracy depends on the failure probability, the confidence interval, and the number of random samples) but is relatively inefficient because a fatigue crack growth life computation must be performed to evaluate the failure limit state for each random sample. Computation time can be reduced using response surface methods; see [5] for further details regarding these methods and additional efficient computational approaches for computing FCG failure probabilities. A "brute-force" Monte Carlo approach can be computationally challenging. There are several ways to improve computational efficiency and turn-around time: expressing (part of) the model by numerical representations (for instance, high-speed look-up tables) instead of analytical expressions, use of high-performance computing algorithms and hardware [6, 7, 9], fast turn-around surrogate models, and physics-based models supported by machine learning algorithms [10]. For failure response surfaces with "mild" nonlinearities, the response surface can be approximated linearly or with higher-order terms. These first- and second-order reliability methods (FORM/SORM) are efficient for "smooth" problems but can have limits [11, 12]. For instance, challenges can arise in fracture mechanics, including
transitions from embedded to surface cracks, as well as other model nonlinearities. Polynomial-chaos expansions take this idea further by expanding a complex nonlinear failure response surface in terms of a series of orthogonal polynomials such as Hermite polynomials [13]. When the Monte Carlo simulation method is used, PoF can be calculated as:

$$\mathrm{PoF}(N) = \frac{f(N)}{S}, \tag{6}$$

where f(N) is the number of samples failed after N cycles and S is the total number of Monte Carlo samples simulated. Another interesting quantity that can be derived is the hazard rate:

$$H(N) = \frac{\mathrm{PoF}(N+1) - \mathrm{PoF}(N)}{1 - \mathrm{PoF}(N)}. \tag{7}$$
H(N) is a measure of the risk of failure within the next cycle under the condition that no failure has occurred before. For small PoF values, the hazard rate is approximately the derivative of PoF. Note that a probability density function (PDF) is defined as the derivative of a cumulative distribution function (CDF). Since PoF is a CDF and H(N) approximates its derivative, H(N) can be thought of as a PDF for small values of PoF. From these statistical quantities, other interesting measures can be derived, such as the risk of failure for a given period of time, as well as local properties such as component risk contours [6]. Although this chapter is focused on fatigue life prediction, which is defined in terms of the number of cycles or starts N, the presented concepts can be applied to time-driven failure mechanisms or a combination of time- and cycle-driven mechanisms. Care must be taken when deriving statistical input properties from material test and other sensor data for probabilistic lifing methodologies, particularly regarding correlations that may exist among material property variations [14], operating conditions, or component geometry deviations [15].
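As a minimal numerical illustration of Eqs. (6) and (7), the sketch below draws hypothetical failure lives from a lognormal distribution, standing in for the per-sample fatigue crack growth limit-state evaluation, and estimates PoF(N) and the hazard rate empirically. All distribution parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Stand-in for the per-sample limit-state evaluation: each Monte Carlo sample
# gets a failure life in cycles. A lognormal life distribution is assumed
# purely for illustration; in practice each sample requires an FCG computation.
S = 1_000_000
failure_life = np.sort(rng.lognormal(mean=np.log(20_000), sigma=0.4, size=S))

cycles = np.arange(0, 50_001, 100)

# Eq. (6): PoF(N) = f(N) / S, where f(N) counts samples failed by N cycles
pof = np.searchsorted(failure_life, cycles, side="right") / S

# Eq. (7), evaluated per 100-cycle step of this grid rather than per cycle
hazard = (pof[1:] - pof[:-1]) / (1.0 - pof[:-1])

print(f"PoF at 20,000 cycles: {pof[cycles == 20_000][0]:.3e}")
print(f"hazard per step near 20,000 cycles: {hazard[cycles[1:] == 20_000][0]:.3e}")
```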
Deterministic and Probabilistic Lifing Methods

Traditionally, deterministic lifing methods have been used to quantify the reliable service life of a component or system. This assessment may involve multiple failure mechanisms such as strength, cyclic failure (e.g., fatigue crack initiation and growth), and time-driven failure (e.g., creep, oxidation), among others [16–20]. These failure mechanisms are influenced by uncertainties in the input variables such as operational load, temperature, geometry, material properties, and material anomalies (among others).
Basic Concept

Within deterministic assessments, the aforementioned uncertainties are typically accounted for by applying safety factors either to the input variables or to the resulting calculated lives (or both). For example, consider a simplified example involving the design of a bridge with respect to load-carrying capability. Suppose that the bridge has an average strength and an average assumed load profile based on data from local traffic patterns. In this case, a safety or design factor of two on strength and load means that the design passes this deterministic strength criterion if half of the average strength can carry twice the average load. The advantage of this approach is that it is relatively easy to implement and consists of only one mechanical strength assessment. The drawback is that the choice of safety factors is crucial for the reliability analysis and the associated cost of the design. In practice, these safety or design factors are oftentimes the result of experience and engineering judgement.

In contrast, for a probabilistic design the input uncertainties mentioned above are explicitly considered, and the lifing model (e.g., the strength assessment in the bridge example) must be evaluated for all possible instances and combinations of the input uncertainties in order to obtain a distribution of life. Figure 2 illustrates this process schematically for multiple input parameters. In this schematic figure, the inputs are shown as normally distributed parameter distributions, but other distributions are possible (e.g., parametric, nonparametric, and numerical). For illustration purposes, the nominal input values (i.e., average values with safety or design factors applied) are shown as vertical lines in this figure. The resulting output component life distribution is typically a nonanalytical numerical result.

Fig. 2 Illustration of probabilistic lifing. Input uncertainties are propagated through a life model and yield a life distribution. Input parameters can include material properties, flaw population, boundary conditions, and loading (among others). The vertical dashed lines represent nominal properties utilized in the conventional deterministic lifing approach, where oftentimes minimum/maximum inputs are utilized to obtain a conservative life prediction (indicated by the vertical dashed line in the output life distribution)

Fig. 3 Schematic component life distribution for a component exhibiting a narrow distribution (solid line) and a broader distribution (dashed line), respectively. The application of a deterministic safety or design factor to the average life (dotted line) can lead to different failure risks at the deterministic design life (green vertical line intersection with respective distribution) [15]

In Fig. 3, the risk of failure at the (deterministic) design life is indicated by a dashed vertical line. The disadvantage of the deterministic approach based on safety factors is that it provides only a "yes" (i.e., design is acceptable) or "no" (i.e., design is unacceptable) result for the analysis. No information is provided regarding the risk of failure or the robustness of the design with respect to variations in the input variables. Considering again the bridge example, the risk of failure would certainly be higher for a design that has a larger variation in strength and load profile (assuming that the average values remain unchanged). This aspect is not captured in a deterministic design process using safety factors and can lead to different risks for different parts designed with the same deterministic design process. In other words, it is possible that for the same design rules, some parts may be overdesigned (i.e., expensive) and other parts may have an unacceptable level of risk in service. This is shown schematically in Fig. 3, which illustrates the variation of risk at a deterministic design point. The "broadness" of the resulting life distribution depends on the "broadness" and interplay of the input uncertainties. In contrast with the deterministic approach, the probabilistic approach quantifies the risk of failure and the robustness of the design with respect to variations in the input variables. Another advantage of the probabilistic approach is that the PoF versus cycles curve typically provides a relatively smooth function that can be applied to design optimization.
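This contrast can be made concrete with the bridge example. In the sketch below, the deterministic factor-of-two check uses only the mean strength and load, while a Monte Carlo estimate of P[strength < load] exposes how two designs that both pass the same deterministic criterion can carry very different risks. All distribution parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def failure_probability(strength_mean, strength_cov, load_mean, load_cov,
                        n=1_000_000):
    """MC estimate of P[strength < load] for normally distributed inputs."""
    strength = rng.normal(strength_mean, strength_cov * strength_mean, n)
    load = rng.normal(load_mean, load_cov * load_mean, n)
    return np.mean(strength < load)

strength_mean, load_mean = 100.0, 25.0   # arbitrary units; invented values

# Deterministic check with a factor of two on both strength and load:
# half the average strength must carry twice the average load.
print("deterministic criterion passed:",
      strength_mean / 2.0 >= 2.0 * load_mean)

# Same deterministic margin, different scatter -> very different risk.
print("PoF, narrow scatter:", failure_probability(strength_mean, 0.05,
                                                  load_mean, 0.10))
print("PoF, broad scatter: ", failure_probability(strength_mean, 0.25,
                                                  load_mean, 0.40))
```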
Simple Disk Example

This example illustrates the differences between deterministic and probabilistic fracture mechanics life assessment of a rotating steel disk. The density and Poisson ratio of the steel are ρ = 7820 kg/m³ and ν = 0.3, respectively. The disk has varying thickness t, outer radius Ro = 1 m, and a varying central hole with radius Ri, and rotates with frequency f = 50 Hz. A 2D axisymmetric model of the disk is shown in Fig. 4. For this simple model, the two stress components can be expressed as follows [21]:

$$\sigma_{\tan}(r) = \underbrace{\rho (2\pi f)^2 \frac{3+\nu}{8}}_{\sigma_0} \left[ R_o^2 + R_i^2 + \frac{R_o^2 R_i^2}{r^2} - \frac{1+3\nu}{3+\nu}\, r^2 \right] \tag{8}$$

$$\sigma_{\mathrm{rad}}(r) = \underbrace{\rho (2\pi f)^2 \frac{3+\nu}{8}}_{\sigma_0} \left[ R_o^2 + R_i^2 - \frac{R_o^2 R_i^2}{r^2} - r^2 \right] \tag{9}$$
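Equations (8) and (9) are straightforward to evaluate numerically. The sketch below (the function name disk_stresses is ours) transcribes them into Python with the example's parameter values; it yields roughly 640 MPa at the bore of component 1, close to the 644 MPa listed for σmax in Table 1 below.

```python
import numpy as np

RHO, NU, FREQ, R_O = 7820.0, 0.3, 50.0, 1.0   # values from the example

def disk_stresses(r, r_i, r_o=R_O, rho=RHO, nu=NU, f=FREQ):
    """Tangential and radial stresses (Pa) in a rotating disk with a central
    hole, per Eqs. (8) and (9)."""
    sigma0 = rho * (2.0 * np.pi * f) ** 2 * (3.0 + nu) / 8.0
    hole_term = (r_o * r_i / r) ** 2
    sigma_tan = sigma0 * (r_o**2 + r_i**2 + hole_term
                          - (1.0 + 3.0 * nu) / (3.0 + nu) * r**2)
    sigma_rad = sigma0 * (r_o**2 + r_i**2 - hole_term - r**2)
    return sigma_tan, sigma_rad

# Component 1 of Table 1 (R_i = 0.15 m): tangential stress peaks at the bore
r = np.linspace(0.15, 1.0, 500)
s_tan, s_rad = disk_stresses(r, r_i=0.15)
print(f"max tangential stress: {s_tan.max() / 1e6:.0f} MPa "
      f"at r = {r[s_tan.argmax()]:.2f} m")
```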
These expressions are illustrated in Fig. 5, indicating the dominance of the tangential stress component for a rotating disk with a central hole.

Fig. 4 Illustrative example of a rotating disk containing a semielliptical surface crack (shown in red): (a) 2D axisymmetric model, and (b) fracture mechanics model (rectangular plate solution approximation). The tangential stress is in the direction indicated, and the radial stress is in the x-direction

Fig. 5 Representation of radial and tangential stress components for a rotating cylinder with a central hole, Eqs. (8) and (9)

Fig. 6 Axisymmetric spatial representation of the tangential stress as described by Eq. (8) for six different components denoted 1–6. The different thicknesses t of the disks can be seen, and the stress is maximal at the inner radius Ri. The black semicircle indicates the location of the assumed flaw for the deterministic and probabilistic single-indication analysis [15, 24]. (© Siemens Energy 2021)

The steel disk can contain material anomalies (forging flaws) that can grow under cyclic loading and eventually lead to failure of the disk (i.e., fracture). The disk is inspected via ultrasonic testing (UT) [22] prior to placement in service. The probability of detecting a flaw depends on, and increases with, flaw size (see section "Power Generation" for further discussion regarding probability of detection). For the deterministic analysis, it is assumed that a certain flaw size can be detected. To have a tangible number, we assume this size here is 1 mm, and disks with an indication larger than 2 mm are rejected for service operation. A fracture mechanics approach is used in which a crack with an initial size is numerically integrated over a number of cycles until a failure criterion is fulfilled [23]. For the deterministic approach, a 2 mm flaw is virtually placed at the critical location at the bore, as shown in Fig. 6. The fracture mechanics analysis is performed using conservative values for the material parameters. This process was repeated for the six simple disk examples indicated in Fig. 6. Deterministic lives Ndet were computed for the six disks, as indicated in Fig. 7. As anticipated, life values were related to the maximum stress at the bore. Disks with different thickness t values had the same Ndet values. This was to be expected, as this calculation only takes into account the most critical location and not the size of the critical region.
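The deterministic integration described above can be sketched with a Paris-type crack growth law. The constants C, m, the geometry factor Y, and the toughness K_c below are illustrative placeholders (not the values behind Fig. 7 or Table 1), so only the procedure, not the resulting number, should be compared with Ndet.

```python
import math

def fcg_life(a0, sigma, C=1e-12, m=3.0, K_c=80.0, Y=0.65, max_cycles=200_000):
    """Deterministic FCG life by forward integration of a Paris-type law,
    da/dN = C * (dK)^m with dK = Y * sigma * sqrt(pi * a).
    Units: a in m, sigma in MPa, K in MPa*sqrt(m). All constants are
    illustrative placeholders. Failure criterion: dK exceeds K_c."""
    a = a0
    for n in range(max_cycles):
        dK = Y * sigma * math.sqrt(math.pi * a)
        if dK >= K_c:
            return n              # failure criterion fulfilled after n cycles
        a += C * dK**m            # crack growth increment for this cycle
    return max_cycles             # survived the analyzed interval

# 2 mm flaw virtually placed at the bore; sigma_max of component 1 (644 MPa)
print("deterministic life:", fcg_life(a0=2e-3, sigma=644.0), "cycles")
```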
Fig. 7 Single-indication PoF for the six simple rotor geometries (components 1–6). The dashed lines indicate the corresponding deterministic life Ndet. The PoF at the deterministic life is very similar for all components [15, 24]. (© Siemens Energy 2021)
Table 1 Comparison of the deterministic life Ndet and PoF(Ndet) for a single indication and design analysis [15]

Component | Ri (m) | Ro (m) | t (m) | V (m³) | σmax (MPa) | Ndet | PoF(Ndet) Indication | PoF(Ndet) Design
1 | 0.15 | 1.0 | 0.15 | 0.92 | 644 | 3920 | 5.41 × 10⁻⁴ | 2.59 × 10⁻⁷
2 | 0.15 | 1.0 | 0.9  | 5.53 | 644 | 3920 | 5.41 × 10⁻⁴ | 1.34 × 10⁻⁶
3 | 0.3  | 1.0 | 0.15 | 0.86 | 664 | 3130 | 5.58 × 10⁻⁴ | 2.96 × 10⁻⁷
4 | 0.3  | 1.0 | 0.9  | 5.15 | 664 | 3130 | 5.58 × 10⁻⁴ | 1.37 × 10⁻⁶
5 | 0.5  | 1.0 | 0.15 | 0.71 | 714 | 1730 | 5.97 × 10⁻⁴ | 3.83 × 10⁻⁷
6 | 0.5  | 1.0 | 0.9  | 4.24 | 714 | 1730 | 5.97 × 10⁻⁴ | 6.20 × 10⁻⁶
Two cases were considered for the probabilistic analysis: (1) single indication (detection of a flaw of a specific size for a specific component, i.e., the inherent flaw distribution and the detected indication are considered), and (2) fleet assessment (detection of a flaw smaller than the rejection limit of 2 mm is possible, i.e., only the inherent flaw distribution is considered). The single indication case (also called an indication probabilistic fracture mechanics assessment) is only applicable to a specific disk. Values for the PoF at Ndet are shown in Table 1, and the PoF evolution is shown in Fig. 7. As in the deterministic case, the risk values do not depend on the thickness of the disk, which only affects the size of the critical region. Also, there is only a moderate dependency of the PoF on the inner radius, somewhat correlated with the Ndet values. The exact numbers, and the ratios of those numbers, certainly depend on the details of the deterministic and probabilistic modeling; further information can be found in [15].

For the fleet assessment case (also called the design case), it is assumed that the disk is inspected and that, if an indication is found, it will be smaller than the rejection limit of 2 mm. Based on this assumption, indication databases, and the associated probability of detection (PoD) curves for the UT inspection, a forging flaw-size population distribution can be established [25].

Fig. 8 Simulated PoF development of the six simple rotor geometries assuming no indication larger than a given decision limit has been found. The dashed lines indicate the corresponding deterministic life Ndet. As one can see, in the probabilistic assessment the total volume has a significant influence on the simulated PoF. In comparison to Fig. 7, the PoF for the components without observed indications is much less than the PoF of a component with an indication [15, 24]. (© Siemens Energy 2021)

For this probabilistic design example, the PoF at Ndet values was computed as indicated in Table 1, and the PoF evolution is shown in Fig. 8. Clearly noticeable is the overall reduced PoF, which indicates that the probability of having a forging flaw of significant size in the critical region is very low. This is typically the case due to a very controlled manufacturing process in conjunction with highly sophisticated UT and other nondestructive examination (NDE) inspection techniques that allow for the detection of defects and the subsequent removal of components containing critical defects. Also clearly noticeable is that the risk of a thick disk with the same inner and outer radius is significantly larger than that of a thin disk (compare components 1 and 2, 3 and 4, and 5 and 6). This results from the fact that a larger region of high stress increases the chance of containing a forging flaw when the flaws are distributed homogeneously in the volume of the component (note that flaws do not necessarily have to be homogeneously distributed). This can be described as a probabilistic size effect, i.e., components with a larger highly stressed area typically have a higher risk of failure. Note that this size effect does not have to be linear and can also display some sort of saturation for larger structures. For instance, in Table 1, the PoF ratio of components 1 and 2 is not quite the factor of 6 that the thickness ratio would suggest. This is because the risk contribution from the surfaces perpendicular to the axial direction is essentially unchanged when increasing the thickness of the disk, i.e., this contribution does not scale with the thickness. Table 1 lists the parameters for Eq. (8) and the results of the deterministic, probabilistic indication, and probabilistic no-indication assessments; Figs. 7 and 8 show the results as plots.

Another major difference between the deterministic and the probabilistic analysis is that the deterministic approach only yields one number, i.e., Ndet. This certainly is not in line with field experience of component failures, which typically show a spread in
component life. In contrast, a probabilistic approach reveals the development of the PoF as a function of cycles. This has advantages compared to the limited information of a deterministic approach. For instance, it can be compared much more meaningfully with field experience, as it is very unlikely that two or more components fail at exactly the same number of cycles. This approach also allows for the risk quantification of a delayed service inspection, and for quantified risk management in general. In a probabilistic approach, quantities can be incorporated that cannot be incorporated in a deterministic model, for instance, the flaw occurrence rate or distribution in the component. A deterministic fracture mechanics approach always has to assume the existence of a crack in a certain critical location, as this is the basis for the single lifetime calculation. In a probabilistic approach, it is possible to establish a flaw distribution, which means flaws can be distributed in all locations of the component, i.e., not only the most critical regions, and components do not necessarily have to contain a flaw. The details certainly depend on the exact circumstances, such as the manufacturing processes, the quality control process, and the NDE inspection techniques applied. This aspect is reflected when comparing Figs. 7 and 8: the probabilistic design PoF shown in Fig. 8 is significantly smaller than the single-indication PoF shown in Fig. 7, and the PoF at Ndet is no longer equal for all components. This reduction in the PoF is controlled by the distribution of initial flaws, i.e., by the fact that not all components in the simulation have a flaw at the most critical location. Flaws can essentially be present everywhere in the component. Depending on the underlying flaw-generating mechanisms, there might be a distribution with a radial and/or axial flaw dependence.
Probabilistic Risk Contours

The simplified example in section "Simple Disk Example" illustrated some aspects of a probabilistic approach and contrasted it with a traditional deterministic approach. Real engineering components and procedures are certainly more complex. In Fig. 9, local failure probability plots (or risk contours) of two different rotor disk designs are shown. A risk contour shows the locations of high risk for a component and illustrates the above-mentioned size effect, i.e., that the component with the larger critical area has a higher risk. This local information is very useful in the design phase to guide designers in optimizing designs, either manually or in an automated fashion. It can also be used to optimize and focus nondestructive inspection schemes on the most critical regions, for both initial quality inspection and in-service inspections, ensuring that the most critical regions are scanned thoroughly.

Fig. 9 Probabilistic failure rate maps (or risk contours) of a medium- (left) and low-risk (right) component [15]. This type of information can, for instance, guide and focus nondestructive inspection schemes. (© Siemens Energy 2021)
Probabilistic Design Criteria

Even the best probabilistic method and tool are only truly useful when the results, such as the PoF (Eq. (6)), the hazard rate H (Eq. (7)), and risk contours, can be utilized to understand a component or system and to derive conclusions and decisions with respect to design or service operations. For instance, in an early design stage, probabilistic design criteria can help to decide which designs to focus on for the most reliable, economic, efficient, and service-robust component design. In a later design stage, concrete probabilistic design targets, oftentimes in the form of probabilistic design criteria, need to be specified for the guidance of designers. For parts and systems already in service, on the other hand, probabilistic design criteria can support and guide reliable decisions on needed service inspections, allowable operating conditions, and component refurbishment or replacement needs.

In this section, two categories are considered for probabilistic design criteria: (1) strictly regulated health and safety topics (e.g., gas turbine rotor disk burst), and (2) business cost/performance-driven topics (e.g., spallation of a thermal barrier coating on a GT turbine blade, or contained loss of a turbine blade in a land-based turbine). It is acknowledged that in practice this distinction is not always strictly possible (e.g., health/safety-related events may also affect cost/performance), and a gradual transition from one category to the other is certainly possible. During the discussion, different types of acceptable risk criteria will emerge. A probabilistic design criterion can be established based on a probability of failure for the whole desired lifetime of a component or system, over only a certain period of time such as a year, per event such as a start of an engine or a flight of a commercial airplane, or on a relative scale compared to a previous design. At the end of this section, the concept of relative risk changes from one design to another will be briefly discussed. This approach is particularly valuable if the design of interest is based on an already existing design with sufficient service experience. An example would be a change in service conditions without a direct component design change, a component material exchange, or evolutionary modifications of component geometries. Note that a probabilistic criterion itself also needs to be established, which might take a significant amount of experience with a specific type of design. In fact, the safety factors associated with deterministic design may require decades of experience to
quantify. However, once quantified, a probabilistic criterion supports a robust design and typically yields an integral evaluation of a component or engineering system, as discussed in this chapter.
Health- and Safety-Related Acceptable Risk Criteria

The US Nuclear Regulatory Commission specifies an annual acceptable risk of rotor disk burst for a steam turbine rotor in a nuclear power plant [26]. There are multiple considerations for the acceptable risk limit; however, for a standard-design nuclear power plant, the annual risk for such an event should not be larger than 10⁻⁴ for the whole rotor. There are variations to this limit. For instance, if the power plant design is such that the nuclear reactor is located to the side of the steam turbine, the annual risk of rotor failure should be lower than 10⁻⁵. This acknowledges the potentially more severe consequences of a rotor disk burst for such a design. On the other hand, for temporary situations, these limits can be increased as well. There are national differences in these regulations, and typically an OEM has to cover all of them, for instance, by choosing a certain level and additional internal safety factors. This demonstrates that established risk-of-failure limits should be related to potential failure consequences and their likelihoods. This aspect certainly also holds for failures with consequences not related to health and safety, such as the business and cost-related failure consequences discussed in the next section [6].

Another example is the risk limits for aero-engine disks utilized in commercial jet aircraft engines. Here, the FAA specifies in Advisory Circular 33.14 [2] the risk limit for a rotor disk failure due to a manufacturing anomaly, such as hard alpha inclusions or dirty white spots, as 10⁻⁹ per disk and flight. Additional lifetime feature-based risk limits of 10⁻⁵ per feature are specified by the FAA in the later-released Advisory Circular 33.70 [4], whereby a feature can be a bolt hole or cooling hole in a rotor disk. The annual number of flights controlled by the FAA was approximately 16.5 million for FY2019 [27]. Assuming two gas turbine engines with ten relevant disks per commercial airliner, and an allowable per-flight risk of disk failure due to anomalies of 10⁻⁹ as specified above, we can quantify an upper bound for a disk burst happening on any jet of the total FAA-controlled flight volume as 0.33/year (= 16.5 million × 2 × 10 × 10⁻⁹). Considering that this is an upper-bound estimation for such an event, it seems to be in line with the rarity of these events. A recent example is the high-pressure turbine IN718 disk 2 burst of a GE CF6 engine on a Boeing 767 during takeoff at Chicago O'Hare in 2016 due to a dirty-white-spot-induced crack [28]. Even though the consequences associated with such an event can be dire for passengers and crew, they are predominantly confined to the people on board, and oftentimes no fatalities result due to the safety protocols applied in the industry. This might be different for power plant operational safety considerations, as there the general public can also potentially be affected. The annual acceptable risk limit of a rotor failure for a nuclear power plant and the above-discussed risk limit of a commercial jet engine are compared in Fig. 10.
Fig. 10 Schematic evolution of the annual risk of rotor failure for turbines with and without service inspection. The acceptable annual risk of failure for a nuclear steam turbine as specified by the NRC, 1.0 × 10⁻⁴, is shown as well. For comparison, an example acceptable annual risk of failure of 4.0 × 10⁻⁵ for a commercial aero engine as specified by the FAA is also shown. Note that the two regulated acceptance limits are challenging to compare and include assumptions and approximations; the figure is therefore for illustration purposes only. The noise visible in the curves is due to the limited number of Monte Carlo samples utilized in this example case
The FAA risk limits for commercial jet engines are specified on a per-flight and individual rotor disk basis, rather than on a whole-rotor annual basis as in the case of a nuclear power plant. We have to make several assumptions in order to compare the two: we assume 20 relevant disks per commercial airplane (e.g., two jet engines with ten relevant disks each), which is in line with the above discussion and the assumptions on FAA-controlled flight volume. We further assume an average of five flights per day for a typical commercial plane. These assumptions lead to an annual risk-of-failure limit for a rotor failure of a specific commercial airplane of 3.65 × 10⁻⁵ (= 10⁻⁹ × 5 × 365 × 20) ≈ 4.0 × 10⁻⁵. Under the assumptions made, this risk limit is somewhat less than the risk limit for a nuclear power plant rotor disk burst. However, it shows that both risk limits are similarly low. Care must be taken in this comparison because of the assumptions made and the different scope of consequences of such an event. Another aspect to consider is that, because the risk increases with time, the true risk will be significantly smaller than the sum of the risk limits. Figure 10 illustrates an example of such a risk increase with time, with and without a performed service inspection in which grown cracks can be detected and components exchanged if needed. This illustrates the significance of inspections, and of their advancement into a more digitized and automated environment, as they can significantly reduce the risk of operation.
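Both bounding calculations reduce to one-line arithmetic; the sketch below uses exactly the assumptions stated in the text and nothing more.

```python
# Upper-bound estimates from the per-disk, per-flight risk limit of 1e-9
# (assumptions exactly as stated in the text; illustrative only).

per_disk_flight_risk = 1e-9
disks_per_airplane = 2 * 10              # two engines, ten relevant disks each

# Fleet-level bound: expected disk-burst events per year over all FAA flights
faa_flights_per_year = 16.5e6            # FY2019 FAA-controlled flights [27]
fleet_bound = faa_flights_per_year * disks_per_airplane * per_disk_flight_risk
print(f"fleet-wide upper bound:    {fleet_bound:.2f} events/year")   # 0.33

# Per-airplane annual bound: five flights per day, all 20 disks
airplane_annual = per_disk_flight_risk * 5 * 365 * disks_per_airplane
print(f"per-airplane annual bound: {airplane_annual:.2e}")           # 3.65e-5
```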
There are other health- and safety-regulating bodies and documents, such as the British Health and Safety Executive [29] and the International Organization for Standardization. For instance, ISO 21789 on gas turbine operation [30] offers several considerations for the individual subsystems of a gas turbine; potential hazards are specified, as well as a broadly acceptable annual individual risk of 10⁻⁶. To relate this acceptable individual risk to an acceptable failure risk for a system or subsystem, event trees relating potential failures to individual risk need to be evaluated. These event trees need to consider aspects such as the likelihood of a critical missile being present and the time of potential exposure of individual workers, among others. For instance, a fractured power plant rotor disk does not necessarily mean that a power plant worker or the general public is harmed, as a low-pressure steam turbine rotor burst in 1987 in Irsching, Germany, shows [31]. In this event, high-energy rotor fragments left the power plant and were later retrieved as far as over 1000 m away from the location of the turbine. The root cause analysis of this event led to an improvement of NDE requirements for large turbine forgings utilized in the energy sector [32]. Other, earlier turbine rotor failures in fossil and nuclear power plants have been reported as well [33–35].
Business and Cost Performance Criteria

Economic analysis is used in the insurance industry to identify the cost of policies. This analysis is based on the likelihood of an event and the associated amount paid to the beneficiary. A similar approach can be adapted for setting a reliability design target. Here, the total life cycle cost C of a component or engineering system is based on the initial cost C0 (design + manufacturing) and the failure cost Cf (i.e., outage and repair costs, etc.): C = C0 + PoF · Cf. The probabilistic aspect is the failure probability PoF, which quantifies the likelihood of a failure with associated cost Cf (similarly to the insurance policy case). As shown in Fig. 11, both cost-contributing constituents can depend on design parameters. For example, a turbine stator vane design parameter could be a wall thickness. With increasing wall thickness, the design would typically cost more, while at the same time the probability of failure PoF, and hence the expected cost of failure, would decrease. Decreasing the wall thickness would decrease the initial cost but increase the expected cost of failure, as the likelihood of failure increases. The minimum total cost and the associated risk of failure PoF can be utilized as a probabilistic design target. The design process for a complex part such as a turbine stator vane or blade of a gas turbine is certainly challenging, and a multiplicity of design parameters must be considered, i.e., multiple wall thicknesses, shapes, radii, choices of materials, cooling flows, etc. Also, engine performance parameters such as efficiency, power, and flexibility oftentimes need to be accounted for as well. This can lead to an even more complex multidisciplinary optimization (MDO) process, or a relation between engine efficiency parameters and cost needs to be established. These business-driven scenarios also provide an opportunity, as there are no regulated limits.
Fig. 11 Initial cost of design C0 and expected cost of failure PoF · Cf add up to the total expected lifetime cost. In this schematic, the x-axis is a design parameter, for instance, a wall thickness. In practice, multiple design parameters will oftentimes be important to consider, and the cost function analysis can also consider engine performance parameters. This can lead to a multidisciplinary optimization (MDO) problem unless engine performance parameters can also be translated into expected lifetime cost
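A minimal sketch of this trade-off follows, with invented stand-in models for C0(t) and PoF(t) over a single wall-thickness parameter; it locates the thickness minimizing the expected lifetime cost C = C0 + PoF · Cf.

```python
import numpy as np

# Illustrative stand-in models; real C0(t) and PoF(t) would come from design
# cost data and a probabilistic lifing analysis, respectively.
thickness = np.linspace(1.0, 5.0, 200)   # design parameter, mm
c0 = 10_000.0 + 3_000.0 * thickness      # initial cost rises with thickness
pof = 1e-1 * np.exp(-1.5 * thickness)    # PoF falls with thickness
c_fail = 2_000_000.0                     # cost of a failure event (outage, repair)

total = c0 + pof * c_fail                # expected lifetime cost C = C0 + PoF*Cf
i_opt = np.argmin(total)
print(f"optimal thickness: {thickness[i_opt]:.2f} mm, "
      f"PoF at optimum: {pof[i_opt]:.2e}, expected cost: {total[i_opt]:.0f}")
```

With these invented numbers, the optimum falls near 3 mm, where the marginal initial cost equals the marginal reduction in expected failure cost.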
Other methods for specifying a probabilistic design target include guidance based on service experience with comparable components or engineering systems. For instance, if the average observed probability of failure for a turbine blade over its lifetime is PoF, then this might be a good basis for an acceptable risk target. In case of positive field experience, one can increase this risk target; in case of a more negative one, the target can be decreased. Other aspects, such as new market or specific customer requirements, also need to be accounted for.
Relative Risk Approach

Oftentimes in a probabilistic analysis, not all relevant uncertainties and underlying phenomena can be accounted for. The complexity of the physical phenomena involved and limited knowledge of the variations and distributions of important input parameters are some of the reasons. This, of course, is no different from a more traditional deterministic design process and its underlying assumptions. In these cases, depending on the circumstances, one can oftentimes utilize conservative assumptions when simplifying physical phenomena or neglecting uncertainties, in order to quantify an upper bound on the "true" risk. Another approach is the utilization of a relative risk approach, i.e., for a fielded design with sufficient field experience, one can quantify the relative risk change for a new design. The design under consideration may be a design upgrade for improved service performance or other improved economic parameters. The relative risk approach is more robust against the simplifications and assumptions. This robustness assumes that the two designs being compared have sufficient similarities, so that the assumptions and simplifications made have a similar effect on the quantified failure probabilities, i.e., "cancel out" on a relative scale. An example of a relative risk approach would be the evaluation of a turbine component that has service experience with specific operating conditions. Here, the quantification of the relative risk change for a "new" service condition can guide design targets. Changes can include operating temperature profile changes, a material exchange, or geometric changes such as wall thickness or radii changes. Figure 12 schematically illustrates such a relative risk increase for two designs. In this case, the reference Design 1 has a lower PoF, and Design 2, which is to be evaluated, has a somewhat larger PoF; the ratio increases slightly with time. In practice, such a comparison might be less monotonic and can include changes in the relation, for instance, due to different initial failure probabilities (i.e., "infant mortality") or due to the improvement or inclusion of in-service inspections. See the next section for more information on service inspections.

Fig. 12 Relative risk comparison of two designs. A probabilistically quantified relative risk increase between two designs is oftentimes more robust than the absolute risk quantification itself when simplifications and assumptions must be made in the calculations. Design 2 could be a more efficient or more economical design building on the positive field experience of Design 1
Application Examples

The application examples in this chapter focus on probabilistic life prediction of gas turbine rotors. However, the concepts presented are general and can be applied to many other components and industries. Examples include early studies of probabilistic-based life approaches [36], probabilistic lifing of ceramic components in Diesel engines [37], probabilistic lifing of gas turbine blades and vanes [38–40], service considerations of mature gas turbine rotors including blade attachments [41], a modal solution emulator for the probabilistic study of geometrically mistuned bladed rotors [42], as well as aircraft system lifing and maintenance concepts including uncertainties [43].
Aero Engine

The process for certification assessment of aircraft gas turbine engines is provided in FAA Advisory Circulars 33.14-1 [2] and 33.70-2 [4] (among others). This process has been implemented in DARWIN®, a probabilistic damage tolerance analysis software developed by Southwest Research Institute® under the guidance of the FAA and two industry committees [44]. For further information regarding DARWIN, see references [5] and [44].
FAA Calibration Test Case

Use of DARWIN for certification assessment of hole features is illustrated for the calibration test case described in AC 33.70-2. Complete details regarding the inputs for this problem, such as geometry, applied loadings, and material properties, are provided in Appendix 1 of AC 33.70-2. A diagram of the problem is shown in Fig. 13a, which consists of a generic disk sector with a hole feature that is assessed at the inner and outer diameter of the hole relative to the disk axis. A 3D FE model of the geometry and associated stress for this problem is shown in Fig. 13b. Initial cracks were placed at the four locations shown in Fig. 13a. DARWIN was used to slice the 3D model at each location to obtain 2D cross sections for definition of the fracture mechanics model geometries. An example cross section is illustrated in Fig. 13c for a surface crack at location 4, along with the associated rectangular plate fracture model and stress gradient (indicated by the line with the blue arrow near the center of the plate).

Fig. 13 Illustration of DARWIN for the AC 33.70-2 Calibration Test Case: (a) diagram provided in the AC indicating the location of anomalies; (b) 3D finite element model of the component viewed in DARWIN; and (c) 2D cross section and associated fracture model geometry at location 4 [44]

The AC 33.70-2 default anomaly distribution and associated frequency reduction factor were used for the analysis. A single deterministic inspection was performed using the default PoD curves provided in the AC. Deterministic crack growth life results are shown in Fig. 14. The minimum life location was zone 2, corresponding to location 2 in Fig. 13a. Probability of fracture results without and with inspection are shown in Fig. 15a and b, respectively. For this example, the risk values associated with the risk-critical zone (zone 2) were within the required bounds for the calibration test case specified in AC 33.70-2. For certification assessment of an actual engine component, the risk values would be compared to the DTR specified in the AC.
Fig. 14 DARWIN crack growth life results (crack area in in.² versus cycles in thousands, for zones 1–4) associated with each feature for a user-defined initial crack size [44]

Fig. 15 DARWIN probability of fracture results (probability of fracture versus cycles for zones 1–4, together with the AC 33.70-2 bounds) for the AC 33.70-2 Calibration Test Case: (a) without inspection, and (b) with inspection [44]

Aero-Engine Example

Use of DARWIN for assessment of titanium materials is illustrated for an aero-engine disk. A 3D FE model of the geometry of a titanium compressor disk and the associated stresses for this problem are shown in Fig. 16. An anomaly distribution provided in AC 33.14-1 was used to model the size and frequency of titanium material anomalies contained in the example component [2, 5]. The anomaly distribution is shown in Fig. 17. In-service inspections were simulated in DARWIN at regular inspection intervals (5000, 7500, 10,000, and 15,000 cycles) using POD curves provided in AC 33.14-1 [5].
The exterior surfaces of the disk were inspected using the eddy current method. The POD curve selected for this inspection is shown in Fig. 18a. For detection of cracks in the interior regions of the disk, ultrasonic inspection was used, as shown in Fig. 18b.
Fig. 16 Illustration of DARWIN certification assessment of titanium materials for an example 3D FE model [45]
Fig. 17 An anomaly distribution provided in AC 33.14-1 was used to model the size and frequency of titanium material anomalies contained in the example component [2, 5]
The DARWIN prezoning algorithm was used to identify the prezones for the analysis. Initially, 100 prezones were identified using a simple grid based on values of temperature, stress, and proximity to the free surfaces of the part. The prezones were refined via the algorithm to address high-stress regions near the bore of the model. The final prezone mesh, containing roughly 800 prezones, is shown in Fig. 19. The prezones were then reduced to 33 zones using the DARWIN optimal auto-zoning algorithm [45].
Fig. 18 Nondestructive inspections were simulated using POD curves provided in AC 33.14-1 [2, 5]: (a) eddy current inspection (probability of detection versus flaw depth in mils) for the exterior surfaces of the disk, and (b) ultrasonic inspection (probability of detection versus flaw area in square mils) for the interior regions of the disk
Fig. 19 Final prezone mesh containing roughly 800 prezones [45]
PoF versus cycles results are shown in Fig. 20 with and without the influence of NDE. For this example, it can be observed that NDE had a substantial influence on risk reduction for the 3D FE model. Conditional PoF values can also be computed under the assumption that anomalies are definitely present everywhere in the disk. This information can be useful to identify the regions of the part that are sensitive to the presence of material anomalies and the potential risk reduction associated with NDE. Conditional risk contours without and with the influence of NDE are shown in Fig. 21a, b, respectively. The largest risk values were focused on the interior rim of the disk (Fig. 21a). This result is consistent with the higher stress and lower constraint values in this region. Figure 21b illustrates the regions of the disk that experience risk reduction when the influence of NDE is considered.
Power Generation

This section discusses probabilistic lifing applications of heavy-duty gas turbines utilized in the energy sector [46]. These turbines produce a power output of over 100 MW and weigh several hundred metric tons. In Fig. 22, a Siemens Energy 4000F gas turbine for the 50 Hz market with a capacity of over 300 MW is shown [47]. This gas turbine has 15 compressor stages and 4 turbine stages, with three torque disks connecting the compressor and turbine sections across the midsection where the combustion chamber is located. The 22 disks, together with a front and rear shaft for bearing support, are stacked and tightened with a central tie-bolt, i.e., a center tie-bolt design.
Fig. 20 PoF versus cycles results with and without NDE for the example 3D FE model
Rotor Disks

A typical weight of one rotor disk is about 5 tons, with diameters and thicknesses up to 2 m and 0.4 m, respectively. Such a gas turbine disk is shown in Fig. 23 at different manufacturing stages: after forging and heat treatment, the ultrasonic contour, and the rough machined contour. The initial quality ultrasonic inspection procedures are performed on the polished hollow cylinder shape shown in the center part of Fig. 23. This shape allows for a precise automated inspection with a high probability of flaw detection [25, 32]; the positive influence of a high probability of detection on component reliability, as quantified by probabilistic lifing, and examples of probability of detection curves are discussed later in this section. After successfully passing all quality checks, the rotor components are further machined, i.e., potential cooling holes are drilled, blade attachments are typically broached, and the so-called Hirth serration that transmits the torque from disk to disk is machined by a milling operation. Such a final machined disk, ready to be assembled, is shown in Fig. 24.
Fig. 21 Conditional risk contours for example 3D FE model: (a) without NDE, and (b) including NDE
Fig. 22 Siemens 4000F 50 Hz gas turbine with a capacity of over 300 MW. The overall length of the engine is about 11 m with a total weight of over 300 metric tons. The gas turbine consists of 15 axial compressor stages and 4 turbine stages. (© Siemens Energy 2021)
Fig. 23 Heavy-duty GT rotor disk forgings at different manufacturing stages: after forging and heat treatment (left), ultrasonic contour (middle), and rough machined contour (right). A typical weight is about 5 tons with diameters and thicknesses up to 2 m and 0.4 m, respectively. (Picture courtesy of Alexander Zimmer (Saarschmiede GmbH, Germany) and Johannes Vrana (Vrana GmbH – NDE Consulting and Solutions)) [48]
Even though there are many similarities with the aero-engine rotors discussed in the previous section, there are some differences in design, size, and service operation. For instance, due to the large size of the individual rotor disks, there can be relevant transient stress and temperature
conditions driven by thermal imbalance. Figure 25 shows a transient simulation of the internal stress evolution in a rotor disk during a fast cold start. The transition from blue to red shows the increased stress levels. Relevant transient stress states are visible and can be larger, and located in different sections of the disk, than at steady state, where in this example the stress is highest at the central bore of the disk. These transient conditions are most pronounced for fast cold starts, as thermal stresses, which arise from the time it takes to heat the interior sections of the disk, have to be accounted for in addition to the centrifugal loads.
Probabilistic Analysis

The needed inputs for a probabilistic fracture mechanics lifing of such a rotor disk are material property distributions and potential correlations, the relevant stress and temperature conditions in the disk, as well as the initial flaw population [15, 48].

Fig. 24 Machined rotor disk of a heavy-duty Siemens Energy gas turbine. (© Siemens Energy 2021)

Fig. 25 Transient simulation of the internal stress evolution (a–d) in a rotor disk during a fast cold start of a heavy-duty gas turbine commonly used in the energy sector. The transition from blue to red shows the increased stress levels. Relevant transient stress states are visible in (c). The simulation shows that at steady state the stress is highest at the central bore of the disk (d) [7]. (© Siemens Energy 2021)

The probability of failure for a specific rotor disk is shown in Fig. 26 for a variety of relevant service conditions. As can be seen, the risk for a cold start is highest, due to the thermal imbalance and the associated stresses and temperatures discussed above. For a so-called inlet-guide-vane load-following cycle, the risk is lowest. In such a cycle, the power output can be reduced or increased by decreasing or increasing the mass flow through adjustment of the angles of the front compressor inlet guide vanes. Such a cycle involves only smaller stress changes; however, the stress cycle strongly depends on the rate of load change, i.e., the faster the load changes, the larger the associated stress cycle. The risk of failure for an ISO cycle (i.e., a cold start at 20 °C) lies between the two aforementioned cycles. These cycles, together with fast start-up capability, are crucial for the energy transformation, including intermittent renewables such as photovoltaics and wind turbines. Actual service operations are typically a mixture of the "pure" cycles mentioned, with varying contributions, and the evolving risk for these service conditions lies between the "pure" cases discussed. The exact risk, of course, depends on the contribution of individual cycles and will vary from customer to customer. This is where probabilistic analysis in conjunction with service operation data is very valuable to understand the remaining useful service life and to perform condition-based service interventions, i.e., perform service when, where, and exactly as needed. Besides the overall risk of a component or system, the regions that pose the largest risk are important to understand in order to improve either the design or maintenance measures. In Fig. 27, risk contours for a gas turbine rotor disk for different service conditions are shown. The highlighted areas indicate higher risk and therefore support the improvement and optimization of service inspections.
Fig. 26 Multiple duty-cycle probability of failure curves compared to single-mission PoF curves for cold start (C-S), ISO start (ISO-S), and inlet-guide-vane load-following cycle (IGV-LFC) [6]. (© Siemens Energy 2021)
Fig. 27 Risk contours for the inlet-guide-vane load-following cycle (IGV-LFC) only (left), cold start (C-S) only (middle), and a duty cycle consisting of 0.5% C-S and 99.5% IGV-LFC (right). The shown MC simulation consists of 1 billion samples, and the spatial resolution is 0.1 million two-dimensional (2D) voxels. Note that the contours are shown on a logarithmic scale over a range of four decades; for clarity, each contour has a different range [6]. (© Siemens Energy 2021)
For instance, the area near the central bore is a region on which NDE during service should focus. Depending on the design, there will be other regions of interest of a rotor disk as well, including cooling holes and blade attachments. For large gas turbines in the energy sector, the different grid frequencies of 50 Hz and 60 Hz typically lead to a 50 Hz design and a geometrically smaller, scaled version for the 60 Hz market. For a deterministic approach, the life calculations oftentimes might not change much, as only the most critical locations are considered and centrifugal loads can be kept constant when scaling rotor geometries. However, differences in thermal loading and transient effects complicate the transferability. For probabilistic lifing procedures, the complexity increases further, as they integrate over the whole component geometry. In order to overcome these design challenges, i.e., having to perform design calculations for both 50 Hz and 60 Hz, scalability considerations for probabilistic rotor lifing have been investigated [24]. Another interesting aspect of probabilistic fracture mechanics is the recent development of probabilistic models describing the nucleation of a forging flaw into a crack, which allows for an improved quantification of risk [49].
Initial Quality and Service Inspections

As mentioned earlier, the probability of detection is an important quantification of an NDE inspection process. The PoD quantifies the ability of an inspection procedure to find a flaw of a specific size in a specific component, or a region thereof [25]. In Fig. 28, different qualities of probability of detection as a function of the true flaw size (TFS) are illustrated. In this example, the different PoD curves depend on a parameter characterizing the inspection, a threshold value KSR_Th ranging from 0.5 mm to 3 mm [25]. The assumption here is that disk-shaped reflector (KSR) size values above the detection threshold are detectable. The transformation from KSR into TFS space can be done via a distribution connecting KSR and TFS [25]. In many situations, a combination of complex NDE simulation techniques [50] and tests might be needed in order to establish a PoD (see ▶ Chaps. 1, "Introduction to NDE 4.0," ▶ 2, "Basic Concepts of NDE," ▶ 8, "From Nondestructive Testing to Prognostics: Revisited," and ▶ 9, "Reliability Evaluation of Testing Systems and Their Connection to NDE 4.0"). A reliable initial (quality) NDE inspection after the part is manufactured supports quality control for fielded parts. For instance, if a UT indication above a certain size is detected in a forging, the part is not accepted for service. The better the PoD of the initial inspection, the higher the reliability of finding manufacturing deviations; this supports the successful rejection of components with manufacturing deviations and leads to a lower initial flaw distribution and service risk in the fleet [25]. This is illustrated in Fig. 29, where the solid black line shows the PoF for a baseline initial inspection and the dashed line shows the PoF for an improved initial (quality) inspection. Depending on the differences in the inspection technique and the associated PoDs, the difference can be significant, underlining the importance of high-quality NDE techniques for fracture-critical rotating equipment. The positive influence of a well-performed service inspection is illustrated in Fig. 29 as well. After a service inspection, the PoF increases significantly more slowly (i.e., lower risk of failure) than without the performed service inspection. In addition, an even further improved service inspection can further reduce the risk.
Fig. 28 Different qualities of probability of detection as a function of the true flaw size (TFS). In this illustration, the different PoD curves depend on a parameter characterizing the inspection, a threshold value KSR_Th ranging from 0.5 mm to 3 mm [25]. Disk-shaped reflector (KSR) sizes above the detection threshold are assumed to be detectable. The transformation from KSR into TFS space can be done via a distribution connecting KSR and TFS [25]
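PoD curves of this qualitative shape are often parameterized as a cumulative lognormal in flaw size. The sketch below uses such a hypothetical parameterization and simply equates the 50% detectability size with the KSR_Th threshold; this is an illustrative assumption, not the KSR-to-TFS transformation of [25].

```python
from math import erf, log, sqrt

def pod_lognormal(tfs_mm, a50_mm, sigma_log=0.5):
    """Cumulative-lognormal PoD model: probability of detecting a flaw of true
    flaw size tfs_mm, with 50% detectability at size a50_mm. The
    parameterization is hypothetical and for illustration only."""
    return 0.5 * (1.0 + erf(log(tfs_mm / a50_mm) / (sigma_log * sqrt(2.0))))

# Assumption for illustration: equate the 50% detectability size with KSR_Th.
# A higher detection threshold shifts the PoD curve to larger flaw sizes.
for threshold_mm in (0.5, 1.0, 2.0, 3.0):
    print(f"KSR_Th = {threshold_mm} mm -> PoD(2 mm flaw) = "
          f"{pod_lognormal(2.0, a50_mm=threshold_mm):.2f}")
```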
Service inspections enable the detection of cracks or other degradation, such as corrosion pits, that initiated during service and were therefore not detectable during the initial quality inspection. Service inspections can also reveal cracks that have grown in service to a detectable size but were not found in the initial quality inspection or a previous service inspection. Service inspections are typically associated with service downtime of the engineering asset, such as a gas turbine. In Fig. 30, an assembled Siemens Energy H-class rotor during an outage is shown. The rotor has already been removed from the casing and partially de-bladed. A comprehensive NDE inspection of the rotor requires further disassembly and hence is a critical aspect of service concepts. Figure 30 also shows that an inspection of the fully machined rotor disk, including blade attachments and other features, is more challenging than an initial quality inspection performed in the cylindrical shape shown in Fig. 23 (middle). This clearly indicates that once an outage for inspection is performed, a high-quality inspection procedure with a large PoD should be applied to optimize the return (lower service risk) on investment (cost of outage). NDE 4.0 strongly supports improved probabilistic lifing, including the development of probabilistic digital twins, by making NDE data directly available for lifing analysis. To understand this better, let us consider the discussed case of gas turbine rotor forgings. As mentioned earlier, at the forging vendor, the forgings undergo their initial quality inspections and have to pass quality criteria before the parts are shipped to the gas turbine original equipment manufacturer (OEM). The inspection data from the vendor need to be digitalized and added to a database with defined interfaces for the OEM to be able to utilize them.
Fig. 29 Schematic of the influence of inspections on the probability of failure of a component, such as a forged rotor disk (see Fig. 23). The positive impact of a service inspection at a certain time is clearly visible, as is the further risk reduction with an improved service inspection. The positive impact of an improved initial inspection before the part is fielded is also illustrated
Fig. 30 Assembled Siemens Energy H-class rotor during outage. The rotor is already removed from the casing and partially de-bladed. A comprehensive NDE inspection requires further disassembly and hence is a critical aspect of service concepts [41]. (© Siemens Energy 2021)
In the case of a forging, the data consist not only of the aforementioned UT indications (e.g., size, location, and scan direction, among others) and part-specific material test data, but also of details of the inspection procedure such as transducer specifications, the exact scans and grid, and the name and credentials of the inspector. This information can be utilized by UT simulation software such as CIVA [50] to quantify a PoD for the UT procedure. This PoD and the UT data can be used to establish an initial flaw population in the fleet, as well as for specific parts. Together with engine operation data from the power plant operators, a probabilistic lifing digital twin for a specific customer can be created – or design probabilistic lifing calculation procedures can be established that are relevant for an entire service fleet. As another example, consider components constructed using additive manufacturing (AM) processes. Current AM processes such as direct metal laser sintering (DMLS) can occasionally produce components containing material anomalies that may occur anywhere in a part. Radiographic NDE inspection simulation such as XRSim [51] can be used to guide equipment settings and view orientations, leading to an optimal inspection protocol that significantly reduces the number of views and the associated cost of inspection. Deterministic damage tolerance assessment can be used to identify anomaly sizes and orientations that must be found by NDE. NDE simulation can then create location-specific PoD curves for every location in a part, and probabilistic damage tolerance can be applied to assess the influence of location-specific NDE on PoF. An illustration of this approach is presented in [52]. Figure 31 shows an idealized automated data flow and analysis enabled by NDE 4.0. Different data are generated within this process that need to be made available for further analyses and may even have interdependencies. As of today, many of these steps still require (semi-)manual procedures, including data preparation and analysis, as the data are often not directly machine-readable. It can also be seen that many parties with different ownership and access rights are involved. The infrastructure therefore needs to support data sovereignty so that the data can be accessed and utilized by authorized parties. These two aspects (seamless streaming, readability, and automated analysis of data; regulation and management of data sovereignty and access rights) are key considerations of NDE 4.0. For more details, see ▶ Chaps. 1, “Introduction to NDE 4.0,” ▶ 2, “Basic Concepts of NDE,” ▶ 8, “From Nondestructive Testing to Prognostics: Revisited,” and ▶ 9, “Reliability Evaluation of Testing Systems and Their Connection to NDE 4.0.” The described example can be transferred to other parts and systems of a gas turbine or any other complex engineering system, for example hot gas path components such as blades and vanes.
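As a hedged illustration of how such data could feed a probabilistic lifing calculation, the following sketch chains the PoD model above into a simple Monte Carlo probability-of-failure estimate with an initial quality inspection and one service inspection. All distributions, Paris-law constants, and thresholds are arbitrary placeholders, and the crack-growth model is far simpler than the methods discussed in this chapter; the point is only the structure of the calculation:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def pod(a_mm, ksr_th=2.0, beta=0.8, sigma=0.4):
    # Illustrative PoD model (same lognormal form as the earlier sketch).
    return norm.sf(np.log(ksr_th), loc=np.log(beta * a_mm), scale=sigma)

def grow(a_mm, n_cycles, C=1e-11, m=3.0, Y=1.0, stress=300.0, step=200):
    # Very simplified Paris-law growth da/dN = C*(Y*stress*sqrt(pi*a))^m,
    # integrated in coarse steps; all constants are placeholders.
    a_m = a_mm * 1e-3                                  # work in meters
    for _ in range(n_cycles // step):
        dk = Y * stress * np.sqrt(np.pi * a_m)         # MPa*sqrt(m)
        a_m = np.minimum(a_m + C * dk**m * step, 0.05) # cap at 50 mm
    return a_m * 1e3                                   # back to mm

N, a_crit = 200_000, 25.0              # simulated parts, critical size (mm)
a0 = rng.lognormal(np.log(0.4), 0.7, N)                # initial flaws (mm)

# Initial quality inspection: parts with detected flaws are rejected.
a = a0[rng.random(N) >= pod(a0)]

a_mid = grow(a, 20_000)                # growth over first service interval
print("PoF, no service inspection:", np.mean(grow(a_mid, 20_000) >= a_crit))

# Service inspection at mid-life removes parts with detected cracks.
kept = rng.random(a_mid.size) >= pod(a_mid)
pof = np.mean(grow(a_mid[kept], 20_000) >= a_crit) * kept.mean()
print("PoF, with service inspection:", pof)
```

Comparing the two printed PoF values reproduces, qualitatively, the behavior sketched in Fig. 29: the service inspection removes grown cracks from the fleet and lowers the subsequent risk.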
Summary and Outlook

In this chapter, the probabilistic lifing approach has been considered from both conceptual and industrial application perspectives. Theoretical and computational aspects of the methodologies, as well as practical industrial applications, were outlined and discussed. Advantages as well as challenges associated with the deterministic approach were also part of the discussion. The application examples have
Fig. 31 Schematic of the automated data flow and analysis enabled by NDE 4.0 for the large gas turbine rotor forging example. The arrows indicate data flow; the ellipses indicate data sources and analyses
focused on gas turbine rotor applications in the aero and energy industry, as this area has been the most successful and advanced in applying these techniques to guarantee reliable and safe operation of critical rotating equipment. There are certainly other areas of application for probabilistic lifing, and even more that could potentially benefit from such an approach. So what is hindering the community from applying these approaches? The question becomes even more pressing considering the continuously increasing computational capacity, which certainly supports these computationally expensive methodologies. Another supporting aspect is the immense amount of data currently available, which will continue to grow exponentially in the foreseeable future. Part of the answer lies in the form in which the data is available, i.e., whether it is digestible for computational methods such as probabilistic lifing. Here, standardization of data formats, as well as access and ownership regulations, plays an important role. The ongoing industrial revolution – Industry 4.0, of which NDE 4.0 is a significant part – will further support the development and applicability of digital twins and probabilistic methods, leading to better utilization of resources. As an example, high-performance computing, including GPU hybrid systems [9], and the ability to perform high-fidelity simulation of NDE procedures [50] already support the vision of a probabilistic digital twin on a component and system level. This can be further supported by data-driven machine learning and other artificial intelligence methodologies, enabling multiscale and hybrid physics-based methods supported by artificial intelligence. On this journey, we still need to work on the cultural change that is needed to develop, apply, and finally accept these methods.
Cross-References ▶ Basic Concepts of NDE ▶ From Nondestructive Testing to Prognostics: Revisited ▶ Introduction to NDE 4.0 ▶ Reliability Evaluation of Testing Systems and Their Connection to NDE 4.0
References

1. Aircraft accident report – United Airlines Flight 232 McDonnell Douglas DC-10-10 Sioux Gateway Airport, Sioux City, Iowa, July 19, 1989. National Transportation Safety Board, NTSB/AAR-90/06, 1990.
2. Advisory Circular 33.14-1 damage tolerance for high energy turbine engine rotors. Washington, DC: Federal Aviation Administration; 2001.
3. Aircraft accident report – uncontained engine failure, Delta Air Lines Flight 1288, McDonnell Douglas MD-88, N927DA. National Transportation Safety Board, NTSB/AAR-98/01, 1996.
4. Advisory Circular 33.70-2 damage tolerance of hole features in high-energy turbine. Washington, DC: Federal Aviation Administration; 2009.
5. Wu Y, Enright M, Millwater H. Probabilistic methods for design assessment of reliability with inspection. AIAA J. 2002;40(5):937–46.
6. Kadau K, Gravett PW, Amann C. Probabilistic fracture mechanics for heavy-duty gas turbine rotor forgings. J Eng Gas Turbines Power. 2018;140:1.
7. Gajjar M, Amann C, Kadau K. High-performance computing probabilistic fracture mechanics implementation for gas turbine rotor disks on distributed architectures including graphics processing units (GPUs). In: Proceedings of the ASME Turbo Expo 2021 turbomachinery technical conference and exposition, 2021.
8. Direct simulation probabilistic fracture mechanics (DSPFM) of components. Patent PCT/EP2013/069096, 2013.
9. Krull F. Simulation tools in the thrill of speed [Online]. https://www.siemens-energy.com/global/en/news/magazine/2020/super-fast-material-simulations.html
10. Yucesan YA, Viana FAC. Hybrid physics-informed neural networks for main bearing fatigue prognosis with visual grease inspection. Comput Ind. 2021;125:103386.
11. Rahman S, Rao B. Probabilistic fracture mechanics by Galerkin meshless methods – part II: reliability analysis. Comput Mech. 2002;28:365–74.
12. Madia M, Riesch-Oppermann H, Zerbst U, Beretta S. A new full-probabilistic framework for the structural integrity assessment of structures containing cracks. In: 18th European conference on fracture: fracture of materials and structures from micro to macro scale, 2010.
13. Augustin F, Gilg A, Paffrath M, Rentrop P, Wever U. Polynomial chaos for the approximation of uncertainties: chances and limits. Eur J Appl Math. 2008;19(2):149–90.
14. Annis C. Probabilistic life prediction isn’t as easy as it looks. J ASTM Int. 2004;1:1–12.
15. Amann C. Probabilistic fracture mechanics of forged rotor disks. Karlsruher Institut für Technologie (KIT); 2017.
16. Anderson TL. Fracture mechanics fundamentals and applications. CRC Press; 2005.
17. Berger C. Fracture mechanics proof of strength for engineering components. Frankfurt/Main: VDMA-Verl; 2005.
18. Janssen M, Zuidema J, Wanhill RJH. Fracture mechanics. VSSD; 2002.
19. Correia JA, De Jesus AM, Fernandes A, Calçada R. Mechanical fatigue of metals. Springer; 2019.
20. Rösler J, Harders H, Bäker M. Mechanisches Verhalten der Werkstoffe. Wiesbaden: Springer Fachmedien; 2016.
21. Hearn EJ. Mechanics of materials 2, vol. 3. Butterworth-Heinemann; 1997.
22. Krautkrämer J, Krautkrämer H. Werkstoffprüfung mit Ultraschall. 5th ed. Berlin/Heidelberg: Springer; 1986.
23. Amann C, Kadau K. Numerically efficient modified Runge-Kutta solver for fatigue crack growth analysis. Eng Fract Mech. 2016;161:55–62.
24. Amann C, Kadau K, Gumbsch P. On the transferability of probabilistic fracture mechanics results for scaled 50 Hz and 60 Hz heavy duty gas turbine rotor forgings. In: ASME paper no. GT2018-75561, 2018.
25. Vrana J, Kadau K, Amann C. Smart data analysis of the results of ultrasonic inspections for probabilistic fracture mechanics. VGB PowerTech. 2018;7:38–42.
26. Standard review plan for the review of safety analysis reports for nuclear power plants, 2007.
27. FAA by the numbers. Federal Aviation Administration; 2021. [Online]. https://www.faa.gov/air_traffic/by_the_numbers/
28. Uncontained engine failure and subsequent fire, American Airlines Flight 383 Boeing 767-323, N345AN. Chicago: National Transportation Safety Board; 2016.
29. Callaghan B, Walker T, et al. Reducing risks, protecting people. Health and Safety Executive, HSE Books; 2001.
30. Norm ISO 21789 Gas turbine applications – safety. ISO – International Organization for Standardization; 2009.
31. Abinger R, Hammer F, Leopold J. Der Maschinenschaden: Grosschaden an einem 330-MW-Dampfturbosatz 1988. [Online]. http://www.gwp.eu/fileadmin/seiten/download/AZT_Veroeffentlichung_Irschingwelle_T.pdf
32. Zimmer A, Vrana J, Meiser J, Maximini W, Blaes N. Evolution of the ultrasonic inspection requirements of heavy rotor forgings. Rev Quant Nondestr Eval. 2010;29:1631–8.
33. Nakao M. Brittle fracture of turbine rotor in Nagasaki. 1970.
34. Nakao M. Fracture of turbine shaft in Wakayama. 1972.
35. Nitta A, Kobayashi H. Burst of steam turbine rotor in fossil power plant. 1974.
36. Annis CJ, Hunter D, Watkins TJ. Evaluation of damage tolerance requirements using a probabilistic-based life approach. In: ASME paper 86-GT-266, Duesseldorf; 1986.
37. Andrews M, Wereszczak KTP, Breder K. Strength and fatigue of NT551 silicon nitride and NT551 diesel exhaust valves. Oak Ridge National Laboratory ORNL/TM-1999/33, 2000.
38. Mäde L, Gottschalk H, Schmitz S, Beck T, Rollmann G. Probabilistic LCF risk evaluation of a turbine vane by combined size effect and notch support modeling. In: ASME paper GT2017-64408, 2017.
39. Mäde L, Schmitz S, Gottschalk H, Beck T. Combined notch and size effect modeling in the local probabilistic model for LCF. Comput Mater Sci. 2018;142:377–88.
40. Beck T, Gottschalk H, Krause R, Rollmann G, Schmitz S, Mäde L. From probabilistic prediction of fatigue life to a new design approach for gas turbines. In: Bock H, Küfer K, Maass P, Milde A, Schulz V, editors. German success stories in industrial mathematics. Springer; 2021. p. 8–13.
41. Engels P, Amann C, Schmitz S, Kadau K. Probabilistic fracture mechanics for mature service frame rotors. J Eng Gas Turbines Power. 2021;140:071004-1.
42. Bae H, Boyd IM, Carper E, Brown J. Accelerated multi-fidelity emulator modeling for probabilistic rotor response study. J Eng Gas Turbines Power. 141(12):121019.
43. Wang Y, Gogu C, Binaud N, Bes C, Haftka R, Kim N. Predictive airframe maintenance strategies using model-based prognostics. J Risk Reliab. 2018;232(6):690–709.
44. Enright M, McClung R, Liang W, Lee Y-D, Moody J, Fitch S. A tool for probabilistic damage tolerance of hole features in turbine engine rotors. In: Proceedings of the 57th ASME international gas turbine & aeroengine technical congress, Copenhagen, 2012.
45. Enright M, Moody J, Sobotka J. Optimal automated fracture risk assessment of 3D gas turbine engine components. In: ASME paper GT2016-58091, 2016.
46. Reliable gas turbines. Siemens Energy, [Online]. https://www.siemens-energy.com/global/en/offerings/power-generation/gas-turbines.html
47. SGT5-4000F heavy-duty gas turbine (50 Hz). Siemens Energy, [Online]. https://www.siemens-energy.com/global/en/offerings/power-generation/gas-turbines/sgt5-4000f.html
48. Kadau K, Gravett PW, Amann C. Probabilistic fracture mechanics for heavy-duty gas turbine rotor forgings. In: ASME paper no. GT2017-64811, 2017.
49. Radaelli F, Amann C, Gumbsch P, Kadau K. Probabilistic fracture mechanics framework including crack nucleation of rotor forging flaws. In: ASME paper no. GT2019-90418, 2019.
50. EC Team. EXTENDE CIVA NDE simulation software. [Online]. https://www.extende.com/
51. Gray J. Three dimensional modeling of projection X-ray radiography. In: Review of progress in quantitative nondestructive evaluation, vol. 7A. Plenum Publishing; 1988. p. 343–8.
52. Enright MMR, Sobotka J, Moody J, McFarland J, Lee Y-D, Gray I, Gray J. Influences of non-destructive inspection simulation on fracture risk assessment of additively manufactured turbine engine components. In: ASME paper GT2018-77058, 2018.
Robotic NDE for Industrial Field Inspections
24
Robert Dahlstrom
Contents
Introduction 642
Robotic Inspection Technologies 644
Ground-Based Remote Inspection Robots 645
Airborne Remote Inspection Robots 647
Underwater Remote Inspection Robots 649
Use of Robotic Inspection Systems in Industry 651
Benefits and Drawbacks of Robotic Inspections 651
Benefits of Industrial Robotic Inspections 652
Drawbacks of Industrial Robotic Inspections 656
Toward a More Automated Future 656
Industrial Inspection Standards 657
Growth in Inspection Robotics 658
Conclusion 659
Cross-References 659
References 660
Abstract
Today’s industrial field robotic inspection systems at sites such as energy facilities, maritime facilities and ports, and infrastructure asset locations save lives by moving workers from harm’s way and preventing catastrophic accidents. Utilizing data from robotic inspection systems, NDE 4.0 increases the lifesaving capability of these inspections by enabling more accurate insights into the structures (aka assets) at these facilities and locations. NDE 4.0 enables the extrapolation of knowledge concerning an asset’s operational and functional viability, future operational requirements and needs, as well as estimated future performance and efficacy. From underwater remote inspections of structures used for offshore oil exploration and drilling, to
ground-based robots crawling up the walls of aboveground storage tanks and boilers, to drones flying around flare stacks, there is no shortage of either different types of assets or modalities of robotic inspection systems in service, all of which can benefit from the knowledge NDE 4.0 helps create. NDE 4.0 is a force multiplier for inspecting, testing, and evaluating industrial assets for their safety, operational effectiveness, and usefulness. The knowledge, insights, and understanding NDE 4.0 generates can turn data gathered by industrial inspection robots into actionable information that enhances and extends knowledge-based, information-driven decision making. As inspection robots are “data gathering machines,” they can gather more data and “see” more than inspections completed by people. This is exactly where NDE 4.0 brings value. While industrial inspections are necessary and critical, they can be very dangerous and very expensive. Inspection robotics and automation help organizations improve safety and reduce costs. The value creation NDE 4.0 enables is so compelling, and for it to be as effective as possible requires so much data, that it is a foregone conclusion that the data-gathering industrial inspection robotics market will grow exponentially to feed data to NDE 4.0.
Inspection robot · Remote inspection · ROV · Drone · NDE inspection · Visual inspection · Contact-based inspection · Robot · Industrial robot inspection
Introduction

Inspecting, testing, and evaluating industrial assets in the field are critical to the safe use and operation of infrastructure, manufacturing, energy production and transmission, transportation, and other industries. NDE 4.0 is a force multiplier for the safety, operational effectiveness, and efficacy of these built-world industrial assets. Field inspections of industrial facilities and the structures they contain, as part of an organized or formal examination and evaluation, range from visually inspecting assets, to destructive testing such as chipping away a small area of concrete to access and evaluate rebar, to nondestructive contact-based measurements. Good inspection regimens encompass the entire lifespan, from newly constructed operational assets and facilities to aging assets scheduled for decommissioning or demolition. Industrial field inspections involve the measurements, tests, and gauges used to gather information and data on various characteristics, mostly in regard to an object or activity. The knowledge, insights, and understanding generated by NDE 4.0 turn data gathered from industrial inspections into actionable information, and the resulting data from industrial robotic inspection systems can enhance and extend this knowledge-based, information-driven decision making. Information from robotic inspection systems is often gathered to satisfy prespecified requirements and standards. Standards-based inspection procedures, almost always nondestructive, enable structural and other engineers to scientifically
evaluate the health and risk of operational and nonoperational assets, and it can be a dirty, dangerous, and time-consuming job. While industrial inspections are necessary and critical, they can be very expensive and very dangerous. Inspection robots and automation help organizations improve safety and reduce costs. Further, as robots and computers are “data gathering machines,” they can gather more data and “see” more than inspections completed by people, for example data gathered from high-definition visual, hyperspectral, superspectral, and other imaging, plus data and information collected from sensors and devices placed in physical contact with surfaces. Another major advantage of robotic inspection is consistency in execution, which eliminates the human factors associated with the mental, emotional, and physical condition of the inspector; no matter what, an inspector’s performance varies. As NDE 4.0 is data driven, industrial inspection robotic systems are perfect for enabling it and affording its benefits. Just as we need regular health exams to maintain our wellness and stay free from disease, industry and infrastructure need preventative maintenance and inspection. Regular doctor’s visits and checkups can detect health problems before they become too serious or critical, and the sooner a problem is detected, the better the chances for a cure or treatment. Similarly, industrial and infrastructure inspection in general, and corrosion monitoring specifically, protects and ensures the safety and integrity of bridges, dams, ships, oil and gas refineries, manufacturing plants, electrical transmission towers, wind turbines, and more. Corrosion monitoring via inspections is a vital component in maintaining the health of these components and systems. When you take care of your body, you are able to live a longer, healthier life; comparatively, utilizing a well-rounded corrosion monitoring system leads to extended service and operational lives of industrial and infrastructure field assets. Robotic inspection and measurement systems hold great potential to perform jobs more safely, better, and faster than completing the same tasks with people. Add to that the ability of NDE 4.0 to capitalize and extrapolate on the data and information gathered by industrial inspection robotics, and you have a scenario wherein 2 + 2 equals more than four. As we move through the fourth industrial revolution in industry (the computerization, digitization, and networking of industrial assets), NDE 4.0 will be crucial to its success, as it provides the data needed for machine learning, artificial intelligence implementations, and more. Inspection robotic systems are currently delivering, and are on track to deliver further, benefits derived from NDE 4.0; they are safer than putting workers at risk, are highly extensible, and can gather a breadth and depth of data heretofore unavailable. Today’s industrial robotic inspection systems actually save lives by preventing catastrophic accidents and by moving workers from harm’s way. Further, humans, unlike robots, can act in ways detrimental to their health and safety related to their mental, psychological, or emotional state, something robotic systems do not suffer from.
NDE 4.0 can increase the lifesaving capability of industrial inspection robots by enabling more accurate insights into an asset’s structural integrity and by extrapolating its current needs and future condition. The unique lifesaving benefits
provided by inspection robotic systems are particularly evident for those used in enclosed and confined spaces or working at elevation. As increased safety is a unique selling feature of industrial inspection robotics, their ability to save lives and bring about other safety improvements are key components in discussions on investing in these systems and the process changes they bring. For example, in the United States the Occupational Safety and Health Administration (OSHA) states that the preferred method of reducing risk is to engineer the risk out and away from the jobsite [1]. Removing people from dangerous situations and replacing them with robotic systems does just that. Given the enormous potential industrial inspection robotic systems enable, one can easily envision a future with robotic systems having more automation, functionality, and capabilities. This would enable more inspections, both as more inspection robots are placed in service and as functionality increases. This also increases demand, driving robotic inspection system developers to build more specialized systems, which in turn leads to yet more implementation of NDE 4.0’s digital, data-driven augmentations. Further, as robotic inspection systems tend to be faster and enable measurements and testing of more inspection locations within the same or less time, new standards and inspection requirements will become more formalized as part of corrosion monitoring and other industrial and infrastructure inspection regimens. Using industrial robotic inspection systems has a multitude of benefits but also drawbacks and limitations. And while robotic inspection systems portend a more automated future, one must ensure the benefits outweigh the drawbacks. As with almost any tool, selecting the correct tool from the toolbox is important, and care should be taken to choose the appropriate one. NDE 4.0 draws heavily on emerging technologies and covers topics such as the use of Artificial Intelligence (AI), Machine Learning (ML), Machine Vision (MV), Deep Learning (DL), big and smart data processing and visualization, cloud computing, Augmented/Virtual/Mixed Reality (AR/VR/MR), blockchains, 5G, quantum computers, and special data formats and data storage for a safer, cheaper, faster, and more reliable inspection ecosystem [2]. Robotic inspection systems excel at gathering the data needed to unlock the potential of NDE 4.0. How data is used by NDE 4.0 and how the additional data helps are discussed in detail in other chapters of this handbook. Whether underwater, on the ground, or flying, robotic systems can be employed for visual and other inspections from a distance as well as inspections wherein physical contact is required. In addition to inspections, robotic systems can perform other jobsite tasks, including welding and modifying the underlying structure, thus providing maintenance functionality. And now, with the additional affordances provided by NDE 4.0 for inspection robotics, there is an urgency and need to unlock the value created.
Robotic Inspection Technologies

“Crawl, fly, swim” is a phrase heard in the autonomous unmanned robotics world. It encapsulates the growing capabilities of remotely controlled robotic systems used for inspection in industry and their ability to work on the ground, in the air, or
underwater. While current inspection robots are sometimes autonomous, they often tend to be manually or remotely controlled. Remote control of robots and inspection systems is frequently done from a distance via signals transmitted from a radio or electronic device, either wirelessly or via a tether. Also commonly attached to inspection robotic systems are onboard cameras allowing a first-person view (FPV), enabling the operator to see what is in front of the robot for ease of navigation and positioning. Modern robotics can enable reviewers, such as corrosion engineers, ship surveyors, inspectors, or other knowledgeable professionals, to conduct inspections mostly or entirely remotely. Remote inspection robotic systems can be deployed into different types of locations to perform a wide variety of inspections and checks. These systems are sometimes referred to as bots and can operate with varying degrees of automation and autonomy. While reviewers may control them remotely, the systems may also operate with full or partial autonomy while a person reviews the collected data from a separate location, both live and post inspection. These systems can be flexible and designed to access tight spaces and perform complex tasks. As robotic inspection systems continue to add autonomy and automation, the on-site inspector remotely controlling the robotic system, often while in visual observation of the robot, is undergoing a transformation wherein the robotic operator can operate remotely from practically anywhere on the globe. Remote inspection robots may also contain a robotic arm or hand capable of manipulation. The end portion of the robotic arm, what can be thought of as the “hand,” is known as the end effector. These robotic arms and extenders with end effectors may have multiple joints for flexibility and maneuverability, enabling precise placement of the end effector for inspection. Tools can be attached to the end effector, the most common of which are cameras. In addition to visual inspection from a close or far distance, some inspections require physical placement of probe tips or other devices in contact with a surface. Thanks to inspection robots, visual inspections of many industrial equipment assets and asset categories are frequently performed remotely. Cameras capture still or video footage, allowing real-time monitoring and inspection of equipment while also storing footage for later viewing. Inspection robots generate both general visual inspections (GVI) and close visual inspections (CVI) using cameras and video systems and store the data as part of a digital data record. When needed, these systems are outfitted with lighting to ensure adequate viewing. Visual inspection is the most common type of remote examination performed today. High-quality inspection site data used to monitor assets has always been a key use case for businesses. NDE 4.0 needs digital data so that computer intelligence can make sense of, operationalize, and make actionable the data, both visual and other.
Ground-Based Remote Inspection Robots

Ground-based inspection robotic systems are a mature industrial inspection platform. These inspection robots are equipped with wheels or tracks that roll, or use legs
or other apparatus to traverse the ground, and they have been in use for decades. Since these systems stay on the ground, they do not share the constraints of airborne or underwater systems; they can be larger, without weight or waterproofing constraints. Ground-based inspection robotic systems are available in a variety of shapes and sizes. They also offer a multitude of modalities, from crawling snake-like, to rolling on wheels, to walking on legs. What began as mostly visual-only robotic inspections has slowly grown to include tasks such as grinding and welding, wherein actual repairs are completed by the robotic system. This is in addition to gathering data by physically placing NDE measurement and testing equipment in direct contact with a surface or by using camera-based NDE systems. It is only a matter of time until maintenance tasks are added to robotic inspection systems such that, when an inspection uncovers an issue, the maintenance component of the robotic system can institute a repair or remediation concurrent with the inspection. One of the more recent developments in the world of ground-based inspection robots are those that have magnetic wheels, suction cups, or use adhesion to climb walls. These systems are known as climbers, “wall crawlers,” or wall crawling robot (WCR) systems. Workers can deploy these types of systems for remote inspections of machinery such as boilers, pipelines, and other assets, including both internal and external spaces. These devices can sometimes handle difficult tasks that involve climbing vertical surfaces or traversing irregularly shaped objects, and they can be fitted with a wide range of inspection equipment (Image 1). Performing regular inspections on machinery used for manufacturing is crucial to keeping plants and facilities up and running. Automated inspection tools can accomplish, or assist in performing, many of the checks conducted in industrial facilities.
Image 1 An example of a Wall Crawling Remote Inspection robot for Ultrasonic Thickness (UT/UTT) Measurements © 2018 Gecko Robotics
WCR systems can climb the sides of manufacturing equipment to perform inspections and can complete quality and inspection checks inside machinery as well. These systems are used to inspect the insides of bores and pipes as well as analyze weld integrity. They can also examine turbine blades for cracks and conduct visual inspections of tanks, vessels, pipes, cooling towers, and other equipment components. Additionally, some companies use robots to inspect the buildings in which they operate. Robots can often access roofs, attics, crawlspaces, ducts, and other parts of a facility more efficiently and safely than human workers can [3]. Climbing robots have been applied in various fields since the late 1980s [4]. Climbing robots can be divided into categories according to how they adhere or stick to a wall or structure. Magnetic adhesion is widely applied in climbing robots, and many robots climbing structures with a ferrous metal substrate have adopted this method [5]. In the early 1990s, climbing robots based on vacuum adhesion were placed in service, including one that was used for painting in a nuclear power facility [6]. Newer climbing robots can use the Van der Waals force, the distance-dependent interaction between atoms or molecules, for adhesion, enabling robots that can move on some slippery surfaces. In recent years, wheeled robots that climb on cloth [7] and robots that climb with claws [8] have expanded the repertoire of techniques used to adhere to a wall. These ground-based and climbing robotic inspection systems can go places people cannot, for example spaces that are physically too small, that lack breathable air, that are too hot or cold, or that are simply too dangerous. Gathering inspection data from such locations extends the capabilities of human inspectors and is an example of where NDE 4.0 can be well utilized: if the robotic systems did not exist, the data they collect would not be available.
Airborne Remote Inspection Robots

The newest addition to the inspection robotics world is drones, also known as unmanned aerial vehicles (UAVs) when flown by themselves and unmanned aerial systems (UASs) when they utilize a base station or include post-flight data processing and analysis. A variety of sensors can be attached to drones to conduct inspections and examinations from the air. This capability is especially useful for checks of equipment located at substantial heights or spread out across wide areas. Drones vary in size, from full-sized aircraft to micro-weight mini drones that can fit into a shirt pocket and access small spaces. Drones tend to be of two types, fixed-wing and rotary aircraft; there is also a hybrid category of vertical take-off and landing (VTOL) aircraft that often combines both. Fixed-wing drones are like traditional airplanes and typically fly longer distances, mostly straight-line or raster-pattern missions. As fixed-wing drone systems can traverse a large area, they are frequently used in agriculture, for example flying a pattern over a field of crops, or for inspecting and videoing industrial assets such as pipelines. Rotary aircraft are more like helicopters but come in multiple
configurations with different numbers of motors and propellers, the most common of which is the four-motor-and-propeller aircraft frequently referred to as a quadcopter. There are octocopters (8 motors and propellers) as well as drones with 6, 16, or other numbers of motors. Various frame and arm styles and configurations of these systems are used; for example, an 8-motor-and-propeller aircraft can be configured with 8 arms and one motor and propeller per arm, or with 4 arms and two motors and propellers per arm, one “up” and one “down,” in what is known as an x8 configuration. As rotary drone systems can hover in place and fly slowly and in close proximity to structures, they are frequently used in industrial inspections, such as recording high-definition video of wind turbine blades looking for lightning or bird strikes and checking the blades’ leading edges for delamination and the condition of the coating/paint. Fixed-wing drones are used in industrial inspections and can create enormous value while gathering information necessary for NDE 4.0. A great example is pipeline inspection. A fixed-wing drone can be launched and flown several miles along a pipeline while gathering infrared thermography, multispectral, or hyperspectral video (or all three) that can “see” things beyond the range of human vision. This enables an engineer or analyst to pinpoint, for example, hot spots or areas with temperature abnormalities, potentially indicating problem areas where the pipeline envelope is at risk of rupture or other failure. The collection of data from the electromagnetic spectrum via hyperspectral or multispectral imaging allows the advanced analysis capabilities of NDE 4.0 to be applied. Specifically, having video footage allows NDE 4.0 to use machine learning and artificial intelligence to train software systems to identify and locate areas of concern, be it “visual” corrosion, damage, weathering, or other indicators necessitating a closer look. Staying with our example of inspecting a pipeline, a rotary drone brings a different value proposition. Typically, it would fly much closer to the pipeline, and more slowly, than a fixed-wing inspection aircraft. This allows it to capture even more data, as the time spent per distance of pipe is greater. While the rotary-based aircraft may gather exactly the same multispectral data as a fixed-wing system, it could potentially also look at the “air density” and detect gas leakage using optical gas imaging camera-based systems, something difficult to do farther away traveling at a faster speed. Further, flying closer to the pipeline, it could sample or sniff the air for gas concentrations. Since rotary drones can fly in close proximity to industrial assets, they are frequently used for the most common form of NDE and NDE 4.0, visual inspection. Visual inspection via robotic systems can epitomize the data collection component of NDE 4.0: outfitting a drone with an array of multimodal sensory devices collects a plethora of data and information, providing more and better data for NDE 4.0 use and analysis. Aerial robotic inspection systems enable this in a specific and unique way.
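As a small illustration of the kind of analysis such footage enables, the following sketch flags temperature anomalies in a single synthetic radiometric frame; in a real pipeline survey the frames would come from the drone’s thermal camera, and the thresholding logic would be considerably more robust:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic radiometric frame (degrees C per pixel); a real frame would
# come from the thermal camera's SDK.
frame = 40.0 + rng.normal(0.0, 1.5, size=(240, 320))
frame[100:110, 200:215] += 25.0          # inject an artificial hot spot

# Flag pixels that deviate strongly from the frame's typical temperature.
baseline = np.median(frame)
hot = frame > baseline + 5.0 * frame.std()
rows, cols = np.nonzero(hot)
if rows.size:
    print(f"anomaly: {rows.size} px centered near row {rows.mean():.0f}, "
          f"col {cols.mean():.0f}; peak {frame[hot].max():.1f} C "
          f"vs baseline {baseline:.1f} C")
```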
Until modern drones, aircraft did not fly close to or come in contact with structures, as doing so is extremely dangerous and requires a very skilled pilot. Automating the precision flight via onboard computers allows, for example, multiple micro-adjustments to the flight per second, something a human pilot is incapable of. Thus, using a small unmanned aerial robotic inspection system can reduce or eliminate risks.
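A minimal sketch of such a control loop is shown below: a PID controller issues many small corrections per second to hold a fixed standoff distance against wind disturbances. The gains, time step, and simplified point-mass dynamics are illustrative assumptions, not parameters of any particular flight controller:

```python
import random

class PID:
    """Minimal PID controller for one axis of a drone position-hold loop."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Hold a 2.0 m standoff from a structure at 100 Hz: a hundred small
# corrections per second, far more than a human pilot could apply.
dt = 0.01
ctrl = PID(kp=1.0, ki=0.1, kd=1.8, dt=dt)
dist, vel = 2.5, 0.0                      # start 0.5 m too far out
for _ in range(600):                      # 6 s of simulated flight
    accel = ctrl.update(2.0, dist)        # commanded acceleration (m/s^2)
    accel += random.gauss(0.0, 0.05)      # wind gust disturbance
    vel += accel * dt
    dist += vel * dt
print(f"standoff after 6 s: {dist:.2f} m")
```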
Image 2 An example of an unmanned aerial system with a robotic arm and a contact-based dry film thickness (aka paint thickness) inspection measurement device being flown near a ship. © 2017 Apellix. Of historical note, the ship in this image, the Arctic Discoverer, used a famous and cutting-edge ROV, the NEMO [10], in 1988 to locate and retrieve one of the largest shipwreck treasure discoveries in history: over 21 tons of gold from the shipwreck of the Central America [11]
In addition to visual and camera-based inspections, specialized rotary drones can now take contact-based NDE inspection measurements, wherein a probe tip from an electronic measurement device is physically placed in contact with a structure to collect measurements at elevation in potentially hazardous or hard-to-access environments. This new aerial robotics platform utilizes computer-controlled, heavy-lift, industrially hardened multi-rotor drones outfitted with various locational awareness sensors and functions that allow precisely controlled flight close to structures [9]. Needless to say, having an airborne robotic system capable of coming into contact with a structure to take physical measurements and gather data from the surface is an exciting and novel innovation, one that further extends NDE 4.0 as it affords yet another avenue for data collection (Image 2).
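One practical consequence of taking contact measurements from a hovering platform is the need to verify probe contact before accepting a reading. The following is a minimal sketch of such a plausibility check, assuming the measurement device returns several thickness readings per touch; the tolerance and repeat count are illustrative, not values from any particular system:

```python
import statistics

def stable_thickness(readings_mm, tol_mm=0.1, min_count=3):
    """Accept a contact UT measurement only if repeated probe readings
    agree within tol_mm, mimicking the repeat-and-compare practice used
    to reject readings taken with poor probe contact or couplant."""
    good = [r for r in readings_mm if r is not None]
    if len(good) < min_count:
        return None                      # not enough valid returns
    if max(good) - min(good) > tol_mm:
        return None                      # unstable contact, re-measure
    return statistics.median(good)

# Example: three touch-and-measure attempts at one grid point.
print(stable_thickness([9.42, 9.45, 9.43]))   # -> 9.43 (accepted)
print(stable_thickness([9.42, 7.10, 9.43]))   # -> None (rejected)
```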
Underwater Remote Inspection Robots

Underwater systems can be used for inspecting things such as ship hulls or in-service water tanks. These systems can keep inspectors out of danger and eliminate the need for diving inspections, both in open waters and inside enclosed spaces. Underwater remote inspection robots are commonly referred to as remotely operated vehicles (ROVs); this nomenclature is used for underwater vehicles such as the one shown in Image 3. Manned submarines extended the range of humans by enabling them to access uninhabitable underwater locations. Over time
Image 3 An example of a Remotely Operated Vehicle (ROV) being lowered into the water. This work has been released into the public domain by its author, Brennanphillips at English Wikipedia. This applies worldwide. Brennanphillips grants anyone the right to use this work for any purpose, without any conditions, unless such conditions are required by law. https://commons.wikimedia.org/wiki/File:ROV_Hercules_2005.JPG Retrieved 18 June 2020
the extension of humans’ range underwater came to include unmanned submarines. Removing the constraint of interior space for people to operate the sub resulted in smaller systems able to be controlled remotely, most commonly with a tether used to secure and potentially retrieve the craft, as well as for data transfer, communications, and sending operational commands. These small underwater systems enable exploration and inspection, including at extreme depths, unattainable with a person inside the craft. As the built environment adds more assets such as offshore oil and gas platforms, wind turbines, ships, docks, and underwater pipelines, inspection becomes more important. ROVs are well established and have been around for decades, with subsea ROVs, such as those used in the oil and gas industry, having been around over 30 years. ROVs can be conceptualized as work class, mid class, and observation class, generally in descending order by size. Work class ROVs are chosen for their jobs primarily by their pump size, thruster configuration, and the power and operation of their manipulators. These ROVs complete both inspection and maintenance tasks such as grinding. Mid class ROVs also have both cameras and tools, and often fiber optics in the tether for high-bandwidth throughput of visual images and data. Observation class ROVs tend to be “all about the camera,” and thus are a preferred option for visual inspections. Further, observation class vehicles tend to be smaller and less expensive than the mid class or work class options, and thus more accessible for engineering firms that focus on inspection (personal interview, June 18, 2020, with Robert Christ, coauthor of “The ROV Manual: A User Guide for Remotely Operated Vehicles,” 2nd edition, Robert D. Christ and Robert L. Wernli Sr.). ROVs for industrial inspection have long utilized a plethora of nondestructive testing (NDT) technologies. While visual inspection has dramatically improved as camera technology developed, high-resolution video and images, and those
generated using infrared and multispectral cameras, allow insight into more than a human eye can see. In addition, ROVs have long used NDT technologies that require contact with the surface, including ultrasonic, eddy current, electromagnetic acoustic transducer (EMAT), radiography, and more. Capabilities of ROVs for NDE 4.0 include hydrographic services that utilize 3D and other computer vision, various AI, machine vision, underwater computational geometry such as simultaneous localization and mapping (SLAM), live 3D point clouds, stereoscopic real-time video photogrammetry, and other technology innovations. These technologies feed directly into NDE 4.0 and the cyber-physical capabilities of robotic inspection systems; inspection services by ROVs allow engineers to gain a view into the condition and operational viability of maritime assets while keeping their feet dry.
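The stereoscopic photogrammetry mentioned above rests on the classic stereo relation Z = f·B/d, where Z is depth, f the focal length in pixels, B the camera baseline, and d the per-pixel disparity. A minimal sketch with illustrative camera parameters:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic stereo relation Z = f * B / d: depth equals focal length
    (pixels) times camera baseline (meters) divided by disparity (pixels)."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(d > 0, focal_px * baseline_m / d, np.inf)

# A matched feature seen 48 px apart by a stereo pair with a 0.12 m
# baseline and 800 px focal length sits 2 m from the ROV.
print(depth_from_disparity(48, focal_px=800, baseline_m=0.12))  # -> 2.0
```

Applying the same relation per pixel across a rectified image pair is what yields the live 3D point clouds used in underwater photogrammetry.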
Use of Robotic Inspection Systems in Industry

Robotic inspections that help discover defects, functional deterioration, or operational issues are crucial for safe operation, so administering regular inspections is vital. However, doing so manually is time-consuming; with automation, these inspections are much more efficient. As evidenced by the plethora of divergent types of robotic inspection systems available, automated inspection systems are able to perform a range of tests for various industries. These robotic monitoring systems can examine machinery, buildings, products, and even natural environments. Some of the industries that use inspection robots include manufacturing, energy, and transportation. Accurate inspections of infrastructure, energy, and maritime assets are crucial for their reliability and safety. The energy industry is an industrial sector that frequently uses robotic systems for inspections. Be it ROVs for inspection of underwater structures used for offshore oil exploration and drilling [12], ground-based robots for crawling up aboveground storage tanks and boilers [12], or drones to inspect flare stacks [13], there is no shortage of either different types of assets or modalities of robotic systems needed. Not only are robotic inspection systems able to “crawl, fly, and swim,” they are also uniquely positioned for use off Earth, in space. The US National Aeronautics and Space Administration (NASA) is currently experimenting with robots armed with infrared thermography sensors to inspect aircraft. This sensor technology is needed because the advanced composite materials used in the manufacture of modern aviation equipment are similar to proposed materials for spacecraft and space structures [14].
Benefits and Drawbacks of Robotic Inspections

Robotic systems can gather extremely large data sets, and for the data to be useful for the digital twin or other uses of NDE 4.0, it is critical that the information be machine-readable. One must be able to interpret the meaning of the exchanged data
unambiguously and within the correct context. This semantic interoperability is central to the digital twin; as Vrana states in a paper on NDE 4.0, “The Fourth Revolution in Non-Destructive Evaluation: Digital Twin, Semantics, Interfaces, Networking, Feedback, New Markets and Integration into the Industrial Internet of Things”: “With the semantic information stored in the digital twin it will be possible to simulate the asset, to predict its behavior, to apply algorithms etc. A digital twin can also include services to interact with the asset” [15]. In addition to the digital twin, there are numerous reasons that companies may choose to use robots to conduct industrial inspections.
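What such machine-readable, semantically annotated inspection data might look like can be sketched as follows; the field names and structure are purely illustrative, not a published schema or standard:

```python
import json

# A single inspection finding expressed with explicit semantics: every
# value carries its unit and context so downstream software can interpret
# the record unambiguously.
finding = {
    "asset_id": "storage-tank-0042",
    "method": "UT",
    "procedure": {"standard": "operator-UT-spec-rev3",
                  "transducer": "5 MHz, 10 mm dual element"},
    "measurement": {
        "thickness": {"value": 9.43, "unit": "mm"},
        "location": {"x": 2.10, "y": 14.75, "unit": "m",
                     "frame": "tank shell, course 3"},
    },
    "inspector": {"name": "J. Doe", "certification": "UT Level II"},
    "platform": "aerial contact-based robotic system",
    "timestamp": "2021-05-18T09:30:00Z",
}
print(json.dumps(finding, indent=2))
```

Because every field is self-describing, a digital twin or analytics pipeline can consume such records directly, without the manual re-keying and interpretation steps that still dominate today.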
Benefits of Industrial Robotic Inspections

One recent report detailed five areas where industrial robotic inspections create value, helping to build a compelling case to invest in and scale them rapidly. The areas are safety, time, analytics, access, and cost [16]. Robots can safely go many places people cannot or do not want to go. They can fit into small spaces, climb walls or fly to the top of high structures, and move around inside dark, hot, dirty environments. Just by their presence they can make inspections more thorough and simpler to perform. And they help improve safety by removing humans from hazardous environments. Using industrial inspection robots allows workers to avoid potentially dangerous inspection sites by enabling them to conduct inspections remotely. Rather than using rope access for workers to inspect a wind turbine blade, for instance, an inspection drone can be flown to examine the blades. In addition to keeping workers safe on the ground instead of having to access locations at elevation, workers can also avoid going underwater or putting themselves in other potentially hazardous locations, such as confined spaces with limited breathable atmospheres. Many current, nonrobotic inspection measurement practices can run afoul of safety regulations such as those administered in the United States by OSHA. OSHA maintains a hierarchy of fall protection starting with completely eliminating the hazards and risks of falling by engineering them out and away from the workplace. Even after having taken what an operator may believe to be sufficient precautions and complying as best it can with the mandate, OSHA may still find the business in violation, as what is reasonably possible is subjective [17].

Regulatory View on Working at Heights

The United States OSHA maintains safety regulations that cover, among other things, working at height. Similarly, the American National Standards Institute (ANSI) has fall hazard standards. Both offer a worker protection hierarchy, and both apply to the potential hazards of “manual” NDT measurements at height with handheld electronic digital testing devices. OSHA maintains that the hierarchy of fall protection starts with completely eliminating the hazards and risks of falling by engineering them out and away from the workplace [17]. If that is not a reasonable possibility, then preventing falls
24
Robotic NDE for Industrial Field Inspections
653
from happening is to be considered next. And if that is also not a suitable solution, then implementing a fall protection program and a rescue plan is a must. The ANSI worker protection hierarchy is detailed in the ANSI Z359.2 standard “Minimum Requirements for a Comprehensive Managed Fall Protection Program” which strictly applies to people who need to use fall protection equipment in the workplace, The ANSI worker protection hierarchy starts with the Elimination or Substitution of jobs at height [18] and also recommends if that is not a suitable solution, then implementing a fall protection program and a rescue plan is a prudent and often a requirement. The increased safety benefits from automated robotic inspection systems are important to facility owners, insurance companies, local and national regulators, and standards and inspections organizations such as the International Maritime Organization (IMO), the National Association of Corrosion Engineers (NACE), the American Petroleum Institute (API), and others. To a large extent, much of worker’s safety depends on organizational safety policies and procedures in addition to governmental, insurance, and other safety guidance and regulations. Removing safety risks by engineering them “out and away” from the jobsite is an elegant solution. Why put people at risk of falls or other injuries when you can have a robotic system doing the work instead? That is part of why organization needs to implement industrial robotic inspections as soon as possible. Industrial inspection robotic systems can help reduce the time it takes to conduct industrial inspections. Robotic inspections can often be completed more quickly than ones by human as they can eliminate the need to disassemble equipment or potentially interfere with it to inspect it. With automated inspections, they can be scheduled to run “in the background,” while workers attend to other tasks. This can decrease the number of workers involved in the process and reduce the number of man hours or hours worked. The alignment of human work hours with augmentation by industrial robotic inspection systems is about optimizing the use of human resources and allowing an organization to redeploy those resources that would otherwise be occupied doing inspections to focus on other things. Further, robotic inspection systems can reduce the number of change overs and repositioning required to conduct such inspections, for example, converting offline assets to online to remove an asset from service for inspection. These transitions and instances wherein assets are removed from service for an interval for inspection without a replacement can be costly in time and money. Completing these measurements with a robotic system frees the examiner, or corrosion engineer, to focus on high-value duties allowing work flexibility and enhanced productivity. Industry, Infrastructure, maritime, and the other verticals that use industrial robotic inspections benefit from continually enhanced productivity, cost reduction, fast throughput, and process optimization robotic inspection systems enable. Industrial inspection robotic systems enable data collection on a scale and scope heretofore unimaginable feeding the hungry NDE 4.0 paradigm for analytics and other computational and informational purposes. Improving analytics allows organizations to create a digital alignment to the physical space and processes. Industrial
654
R. Dahlstrom
inspection robotic systems can also capture information in real time and accelerate the process of conducting analysis [16]. As Vrana concludes, “NDE 4.0 is the chance for NDE to move from the niche of the ‘unnecessary cost factor’ to one of the most valuable data providers for Industry 4.0. However, this requires the opening of data formats and interfaces. The insight that the protectionism lived up to now will have a damaging effect on business in the foreseeable future will decide on the future of individual companies. For companies that recognize the signs of the times, NDE 4.0 is also the way to market the data as part of a completely new business model for the industry” [26]. These automated inspection systems can help remove human bias and potentially improve data management and organization. Data from robotic inspections is often input automatically and directly into business systems, eliminating the need for a human worker to record data and transfer it from place to place. NDE 4.0 enables strategies for companies to analyze this data, learn more about the health of their systems, and improve their maintenance strategies. Communicating data directly from a robotic system to a centralized, secure data repository affords better control of the data, helping reduce or eliminate human factors and leading to a more reliable inspection system, or what is referred to as a more consistent probability of detection (POD) from inspection to inspection [19]. The use of robotics for industrial inspections can afford access to difficult-to-access and difficult-to-inspect areas, for example by using a robotic inspection system with a smaller and nimbler footprint than required for a person. One such system with phenomenal potential, as mentioned earlier, is the airborne robotic system capable of coming in contact with a surface. Assets are not always box-shaped, simple structures. Ships, for example, have somewhat nonlinear geometry optimized to reduce drag. These “flowing lines” can make robotic maintenance, inspection, and measurement difficult and challenging. And ships make up the simpler subset of assets and geometric complexities inspected by robotic systems. One of the great things about aerial robotics is that they can adapt: they can easily conform to nonlinear surfaces, while other robotic or other techniques may have a long adaptation curve. Gaining access to areas of facilities and plants that require inspection can often be onerous, gaining manned access more so. In addition to permissions and rights to access areas for inspection, safety equipment may need to be secured, as well as a safety buddy. For inspection and maintenance work at elevation, safety equipment such as lifts and scaffolding may be required. Getting the elevated work platforms to the physical location, setting them up, and using them safely is both expensive and time-consuming. Further, physical impediments, barriers, soil conditions, and more may prohibit driving a wheeled lift, tractor, or wheeled crane to the area of concern. Robotic inspection systems often do not have this constraint, as they are mobile and/or occupy a lighter and smaller footprint. Industrial robotic systems can, in many instances, access more locations than people, with more ease, and help gather the data needed for NDE 4.0 analytics.
Finally, reducing cost is essential, as industrial inspections can be very expensive. The smallest cost saving from a robotic inspection system comes from reducing the number of people required for an inspection. Most inspectors are not replaced by inspection robots; instead, the robots are a tool that frees them from the dirty, dull, and dangerous tasks of collecting the inspection data and allows them to spend more time on the higher-value components of operations and maintenance. A much larger cost saving comes from mitigating the cost of the safety equipment required for inspections with workers. A rule of thumb in the industry for elevated inspections is that, on average, 60% of the costs are for placing the workers at height. In 2020, it cost $1000 to $1500 per week to rent a standard 30-foot lift (aka mobile elevated work platform), plus delivery and pickup fees, taxes, and insurance. Larger models, such as those that extend 100 feet or more, are significantly more expensive [20]. Robotic inspection systems can also be easier to use, resulting in more frequent inspections, enhancing preventative and predictive maintenance activities, and reducing the costs associated with performing repairs on equipment. Care must be taken to ensure management and others do not develop a tendency to over-rely on robotic inspection systems. One area of huge economic value creation is the use of industrial field inspection robots that either prevent an asset from being taken out of service or allow an asset to be returned to service more quickly. For example, shutting down multiple portions of an oil and gas refinery that feed excess gas into a flare stack, in order to take it out of service for inspection, can cost millions of dollars a day in lost revenue [21]. Using an aerial robotic system such as the one shown in Image 4 allows the asset to be inspected and thickness measurement data to be collected while keeping the asset in service.
Image 4 An example of an aerial robotic inspection system for ultrasonic thickness (UT/UTT) measurements on an in-service operational chimney at a refinery at ~200 °F (~93 °C). © 2018 Apellix
Drawbacks of Industrial Robotic Inspections
Robots are not always the ideal solution. In many situations, the existing inspection regimen and methods are relatively inexpensive and safe and provide the requisite data and information for good operations and for knowledge of the current and projected future state of the asset.
Robots can require a relatively high upfront investment [12]. Although on a per-inspection basis robotic inspections may cost less, purchasing robotic inspection systems requires an upfront investment, even if it is the financial commitment to lease rather than purchase. This is where the longer-term benefits brought about by an NDE 4.0 program can show a net reduction in costs while increasing the knowledge of industrial assets' condition and operation and of how they are expected to perform over time, thus saving money and reducing risks in the long term.
Robots do not respond well to many unexpected situations. Robots are not as versatile as people, and while they may excel at certain specific tasks, especially repeated programmatic tasks, they might not be able to adapt if something unexpected or unanticipated occurs. While NDE 4.0 includes AI and machine learning, applying them to data is currently more advanced as a science than applying them to a robotic system's response to unexpected events. As robotic inspection systems develop and mature, they should adapt better.
Robotic inspection systems, since they are not human inspectors, may not pick up on or discover rare issues that an experienced human inspector might. Because of this limitation, companies supplement robot-powered inspections and examinations with ones completed by people. Humans, however, can also have a “bad day,” are susceptible to mental or physical stress, and are subject to a number of human factors that impact their performance. Robotic inspection systems are not perfect either; however, when operating correctly, they repeatedly do the same task the same way every time.
Toward a More Automated Future
As robotic inspection tools become more advanced and more affordable, and as more industrial companies utilize them, their value creation continues to improve, specifically in increased efficiency, reduced costs, and improved worker safety. As this trend continues, we can expect to see more companies using automated robotic inspection systems, as well as the creation of new types and kinds of automation tools. It is crucial that manufacturers, industrial companies, infrastructure asset owners, and others that are using or contemplating industrial inspection tools evaluate the use of robotic inspection equipment to ensure that they get the most out of this advanced technology. There will also be a need for trained workers with skills in operating these devices. Thoughtful planning is important, and developing a deployment plan can be critical to ensure that the introduction and deployment of robotic inspection systems occur without disrupting activities. Robotic industrial inspection tools will likely not be the only robotic or automation equipment a company deploys.
Integrating all of the robotic devices into business systems, along with the data they produce, in a thoughtful way can help companies utilize them to their fullest potential.
Industrial Inspection Standards
Given that the new robotic inspection platforms are very efficient at gathering data, standards bodies are looking to implement new inspection standards that include the efficiency gains of robotic inspection systems. A major industry standards body in the coatings and corrosion prevention area is the National Association of Corrosion Engineers (NACE). NACE International is looking into creating specific industrial robotic inspection standards and has established task groups that will develop a standard practice for “Drone-Based Condition Monitoring of Below and Above Ground Pipeline Integrity Threats.” In personal communications with NACE, the author spoke with Ed Manns, Director, Standards and Strategic Technical Initiatives, who stated, “NACE's standards program is poised to address the corrosion prevention industry's standardization needs relative to robotics inspections. NACE is actively working with industry to identify and prioritize their robotic inspection requirements.”
One aerial robotic inspection system that exemplifies the need for new standards is an aerial robotic system that flies up to a structure and, under autonomous software control, touches a UT or DFT measurement probe to the target and records the data in compliance with SSPC, ASTM, API, and other industry standards. DFT standards such as SSPC-PA 2 from the Society for Protective Coatings (SSPC) [22] are frequently used to ensure coatings/paint are properly applied or to estimate the remaining life of a coating job. Dry film thickness (DFT) measurements are frequently required to ascertain the thickness of coatings and may be taken to determine whether a structure needs recoating or, if the structure has recently been coated, to ensure that it conforms to specification. DFT measurements provide insight into how a surface may be impacted by rust, corrosion, or incidental damage and how well coatings perform or may perform over time. SSPC publishes the accepted standard on coatings; the current standard for DFT measurements is the Society for Protective Coatings Paint Application Standard No. 2 (SSPC-PA 2) [22]. Highlights of SSPC-PA 2 are as follows:
• For structures less than 300 sq. ft., take 5 spot readings per 100 sq. ft.
• For structures not exceeding 1000 sq. ft., select 3 random 100 sq. ft. areas to test.
• For structures exceeding 1000 sq. ft., select 3 random 100 sq. ft. areas to test in the first 1000 sq. ft. and, for each additional 1000 sq. ft., test one random 100 sq. ft. area.
• If any area is not in compliance, the extent of the non-compliant area should be determined.
The measurement process using SSPC-PA 2 is consequently, in many cases, a massive undertaking. However, as author Rob Francis shows in his article Dry Film
Thickness Measurements: How Many Are Enough? (2009) [23], SSPC-PA 2 takes spot readings over a smaller percentage of the surface area than other standards such as ISO 19840 or IMO PSPC: 14% vs. 100%. Thus, some standards are more rigorous than others, testing a larger percentage of the surface area. As the number of spot readings increases, an automated measurement process can have even larger benefits under the more rigorous standards.
The SSPC-PA 2 standard for an area in excess of 1,000 square feet (~100 square meters) stipulates that the area be divided into 10-foot by 10-foot square areas or zones (100 square feet or ~10 square meters). Three of these areas are then randomly selected for the first 1000 square feet, and one additional area for each additional 1000 square feet. Within each of the designated areas, five spot readings are randomly collected by placing the probe tip of a handheld electronic measuring device in contact with the surface. A magnetic pulse is sent through the coating and “bounces” off the hull, enabling the device to measure to exacting tolerances of thousandths of an inch (mils, or 0.001″).
In addition to SSPC as a standards body, there are other entities, both international and nation-based, that set paint and coatings DFT measurement standards. These include the International Standards Organization (ISO) with ISO 19840; the International Maritime Organization (IMO), a United Nations specialized agency with responsibility for the safety and security of shipping and the prevention of marine pollution by ships, with resolution MSC 215(88); and Standards Australia with AS 3894.3.
To manually take five spot readings in a 10-foot by 10-foot square area as part of an SSPC-PA 2 measurement at elevation, and then move to the next location to take an additional five spot readings, can be challenging and time consuming. In a conversation with the author, a ship manufacturer disclosed that they take 2000 measurements on a newly constructed large ship for each coat of paint; there are five coats, and the process takes on average 2–4 full-time people up to 2 weeks per coat (10 weeks for all five coats). The aerial robotic inspection system described in this document collects 60–100 measurements at different physical contact locations per hour, enabling the task to be completed much faster.
As we know, selecting the right tool for the right job is essential. When properly selected and utilized, robotic inspection systems can assist with creating safer workplaces, provide better data to manage assets, and unlock cost savings. NDE 4.0 helps make this possible, and when used in conjunction with standards, the data-driven results can help satisfy asset owners and those who insure them that industry best management practices are being observed. While industrial robotic inspection systems can be highly effective when properly used, they do have limitations and, in some cases, they are the incorrect tool.
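To make the counting rules concrete, the short Python sketch below estimates how many spot readings SSPC-PA 2 calls for on a given surface area. It is a minimal sketch based only on the bullet points quoted above, not a substitute for the published standard, and the function name and the assumption of five readings per designated area (as described for the 10-foot by 10-foot zones) are illustrative.

```python
import math

SPOTS_PER_AREA = 5    # five spot readings per designated 100 sq. ft. area
AREA_SIZE_SQFT = 100  # each designated test area is 10 ft x 10 ft

def sspc_pa2_spot_readings(total_sqft: float) -> int:
    """Approximate number of spot readings per the rules quoted above."""
    if total_sqft < 300:
        # five spot readings per 100 sq. ft. (or fraction thereof)
        return SPOTS_PER_AREA * math.ceil(total_sqft / AREA_SIZE_SQFT)
    if total_sqft <= 1000:
        # three randomly selected 100 sq. ft. areas
        return SPOTS_PER_AREA * 3
    # three areas in the first 1000 sq. ft., one per additional 1000 sq. ft.
    extra_blocks = math.ceil((total_sqft - 1000) / 1000)
    return SPOTS_PER_AREA * (3 + extra_blocks)

if __name__ == "__main__":
    for area in (250, 900, 5000, 43000):
        print(f"{area:>6} sq. ft. -> {sspc_pa2_spot_readings(area)} spot readings")
```

Even this simplified count shows how quickly the workload grows with surface area, which is exactly where an automated, robot-borne measurement process pays off.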
Growth in Inspection Robotics
The automated inspection revolution is coming, whether or not individual manufacturers and other industrial organizations are ready for it. The value creation of NDE
4.0 is so compelling, and requires so much information, that it is a foregone conclusion that the industrial inspection robotics market will grow exponentially. The global industrial robotics market is expected to exceed US$98.0 billion by 2024 [24], with a very healthy compound annual growth rate. The range of robotic inspection and other robotic equipment for the coatings industry has grown significantly over the last 20 years and is forecast to grow exponentially in 2020 and beyond, as asset owners and service providers realize the economic savings, increased data, and safety that these types of systems can offer when properly selected and utilized [25].
Conclusion
Autonomous industrial inspection robotic systems hold great potential to perform jobs more safely, better, and faster than completing the same tasks with people. These systems improve efficiency due to reduced inspection time and increase efficacy by allowing quicker reporting and faster decision-making with NDE 4.0 processes. Further, they improve inspection transparency, as computers are better at gathering data and do not have the biases people do. They can help achieve substantial cost savings, particularly when they prevent an asset from being taken out of service or enable the asset to be returned to service more quickly. And finally, they are an elegant safety solution, moving workers out of harm's way and saving lives.
Combined with these safety, efficiency, and effectiveness gains are the additional benefits derived from NDE 4.0, wherein artificial intelligence (AI), machine learning (ML), and other tools enable a more reliable inspection ecosystem and act as a force multiplier for inspecting, testing, and evaluating industrial assets. The knowledge, insights, and understandings generated by NDE 4.0 can turn data gathered from industrial inspection robots into actionable information and enhance and extend knowledge-based, information-driven decision making.
As we move toward a more automated future, and as robotic inspection tools become more advanced, affordable, and widely utilized, we will continue to see improvements in their value creation. As this trend continues, we can expect to see more companies using automated robotic inspection systems, as well as the creation of new types and kinds of automation tools, freeing human inspectors from the dirty, dull, and dangerous tasks of collecting inspection data and enabling them to spend more time on the higher-value components of industrial asset operation and maintenance.
Cross-References
▶ Artificial Intelligence and NDE Competencies
▶ Compressed Sensing: From Big Data to Relevant Data
▶ Digital Twin and Its Application for the Maintenance of Aircraft
▶ From Nondestructive Testing to Prognostics: Revisited
▶ “Moore's Law” of NDE
▶ Robotic NDE for Industrial Field Inspections
▶ Smart Monitoring and SHM
▶ Value Creation in NDE 4.0: What and How
References
1. Eliminate or control all serious hazards (hazards that are causing or are likely to cause death or serious physical harm) immediately. https://www.osha.gov/shpguidelines/hazard-prevention.html#ai1
2. Vrana J. NDE 4.0: the fourth revolution in non-destructive evaluation: digital twin, semantics, interfaces, networking, feedback, new markets and integration into the industrial internet of things. ResearchGate. 2020;336128589. https://doi.org/10.13140/RG.2.2.17635.50720
3. Xu Z, Zhang K, Zhu X, Shi H. Design and optimization of magnetic wheel for wall climbing robot. In: Robotic welding, intelligence and automation. Advances in intelligent systems and computing, vol. 363. Cham: Springer International Publishing; 2015. https://doi.org/10.1007/978-3-319-18997-0_54
4. Nishi A, Wakasugi Y, Watanabe K. Design of a robot capable of moving on a vertical wall. Adv Robot. 1986;1(1):33–45. https://doi.org/10.1163/156855386X00300
5. Yano T, Numao S, Kitamura Y. Development of a self-contained wall climbing robot with scanning type suction cups. In: IEEE/RSJ international conference on intelligent robots and systems, vol. 1. Proceedings; 1998. p. 249–54. https://doi.org/10.1109/IROS.1998.724627
6. Tokioka S, Ishigami S, Sekiguchi R. Kumagai Gumi Co., Ltd. 1986. https://pdfs.semanticscholar.org/7d81/5f90733e78a309abd0c2fd0c8f41e5eedb45.pdf
7. Birkmeyer P, Gillies AG, Fearing RS. CLASH: climbing vertical loose cloth. In: IEEE/RSJ international conference on intelligent robots and systems; 2011. p. 5087–93. https://doi.org/10.1109/IROS.2011.6094905
8. Lam TL, Xu Y. A flexible tree climbing robot: Treebot – design and implementation. In: IEEE international conference on robotics and automation; 2011. p. 5849–54. https://doi.org/10.1109/ICRA.2011.5979833
9. Dahlstrom RL. The emergence of contact based nondestructive testing NDT at height utilizing aerial robotic drone systems. Offshore Technology Conference; 2020. https://doi.org/10.4043/30788-MS
10. Sullivan M. Man overboard: the robots that found the ship of gold, then and now. Columbus Monthly. Oct 28, 2014. https://www.columbusmonthly.com/article/20141028/lifestyle/310289314
11. Arctic Discoverer history. http://www.wncrocks.com/ARCTIC%20DISCOVERER%20HISTORY.html
12. Guide to inspection robots used in industrial sectors. https://gesrepair.com/guide-inspection-robots-used-industrial-sectors/
13. Apellix
14. Robotic inspection: NASA testing new system for checking and fixing aircraft fuselages. Robotics and Automation News. Jan 29, 2018. https://roboticsandautomationnews.com/2018/01/29/robotic-inspection-nasa-testing-new-system-for-checking-and-fixing-aircraft-fuselages/15833/
15. Vrana J. NDE 4.0: the fourth revolution in non destructive evaluation: digital twin. . . https://arxiv.org/pdf/2004.05193.pdf
16. Santagate J. 5 reasons to consider robots for industrial inspections. Robotics Tomorrow, Industrial Robotics, Factory Automation, Service Robots. Oct 30, 2018. https://www.roboticstomorrow.com/article/2018/10/5-reasons-to-consider-robots-for-industrial-inspections/12729/
17. Elimination of fall hazards is the first and best line of defense against falls from heights. https://www.osha.gov/dte/grant_materials/fy11/sh-22230-11/FallHazardManual.pdf
18. “Elimination or substitution. For example, eliminate a hazard by lowering the work surface to ground level, or substitute a process, sequence or procedure so that workers no longer approach a fall hazard.” Feldstein J. ANSI/ASSE Z359 Fall Protection Code: revisions strengthen benchmark consensus standard. American Society of Safety Engineers. http://www.asse.org/assets/1/7/ByDesign_Z359Special_Fall2007.pdf
19. Purpose and pursuit of NDE 4.0. https://www.inspiringnext.com/purpose-and-pursuit-of-nde-4-0/
20. How much does it cost to rent a cherry picker. https://costfigures.com/cherry-picker-rental-cost/
21. Stone DK, Lynch SK, Pandullo RF, Evans LB, Vatavuk WM. Flares. Part II. Capital and annual costs. J Air Waste Manage Assoc. 1992;42(4):488–93. https://doi.org/10.1080/10473289.1992.10467008
22. The Society for Protective Coatings (SSPC) Marketplace. Standards, Paint Application (PA), PA 2, determining compliance to required DFT. http://www.sspc.org/ST-000PA2
23. Francis R. Dry film thickness measurements: how many are enough? A close look at four major international standards and requirements. JPCL/PaintSquare News. December 2009. p. 22–31.
24. Industrial robotics market industry analysis, size, share, growth, trends and forecast 2025. MarketWatch press release. April 8, 2020. https://www.marketwatch.com/press-release/industrial-robotics-market-industry-analysis-size-share-growth-trends-and-forecast-2025-2020-04-08
25. PaintSquare Press. Spring 2020, Volume 3, Number 1. https://www.paintsquare.com/pspress/Spring2020/?page=20
26. Vrana J. NDE perception and emerging reality: NDE 4.0 value extraction. Mater Eval. 2020;78(7):835–51. https://doi.org/10.32548/2020.me-04131

The materials and the views expressed in this document are solely those of the author(s) and are cleared for public release.
Part III Applications
NDE for Additive Manufacturing
25
Julius Hendl, Axel Marquardt, Robin Willner, Elena Lopez, Frank Brueckner, and Christoph Leyens

J. Hendl · A. Marquardt · C. Leyens
Institute for Materials Science, Technische Universität Dresden, Dresden, Germany
Fraunhofer Institute for Material and Beam Technology IWS, Dresden, Germany
e-mail: [email protected]; [email protected]; [email protected]

R. Willner · E. Lopez
Fraunhofer Institute for Material and Beam Technology IWS, Dresden, Germany
e-mail: [email protected]; [email protected]

F. Brueckner (*)
Fraunhofer Institute for Material and Beam Technology IWS, Dresden, Germany
Luleå University of Technology, Luleå, Sweden
e-mail: [email protected]
Contents
Introduction .......................................................... 666
Additive Manufacturing: From 3D CAD to NDE Investigated Part ......... 667
  Additive Manufacturing ............................................. 667
  Process Description ................................................ 667
  Geometry Design .................................................... 669
  Material Classes ................................................... 671
  Typical Defects .................................................... 679
  Additive Manufacturing: In Situ NDE Investigation .................. 683
  Additive Manufacturing: Post-processing NDE Investigation .......... 687
Summary .............................................................. 690
Cross-References ..................................................... 691
References ........................................................... 691
Abstract
By means of additive manufacturing (AM), complex-shaped parts can be manufactured using a broad range of different materials. The latter can be supplied in the form of powder, wire, paste material, or even as foil. Various
technologies are covered by the term “additive manufacturing,” for example, direct energy deposition (DED), laser powder bed fusion (LPBF), fused filament fabrication (FFF), or binder jetting printing (BJP). In all varieties, parts are manufactured layer by layer, which results in changed material properties compared to conventional manufacturing routes, for example, in mechanical properties or fatigue life. To reach a conformal material deposition without defects such as lack of fusion, delamination, or cracking, an optimal process window with well-chosen parameters (e.g., beam power, spot size, scanning speed) has to be identified. For nondestructive evaluation (NDE), different approaches can be used to classify AM manufactured parts regarding their defect structure and, consequently, their performance:
1. Process optimization and understanding of defect formation in order to prevent defects
2. In situ measurements by a variety of integrated sensors and (IR) cameras for direct process observations
3. Post-processing NDE methods such as ultrasonic testing, X-ray, or computer tomography (CT)
If the three approaches are executed simultaneously, a prediction of the effect of defects can be made for certain cases.
Keywords
Additive manufacturing (AM) · Topology-optimized design · Process-structure-property relationship · Effect of defects · In situ investigation · Nondestructive evaluation
Introduction
State-of-the-art production engineering has to be resource-efficient, including adjusted material usage, lower energy consumption, and shortened lead times. With the use of modern AM technologies, a well-suited material mix in combination with (topology-)optimized designs yields increased performance and lightweight construction of products. AM is a generic term for various technologies using powder, wire, paste, foil, or suspension as the initial material. Combined with an energy supply or a binder material, geometries are built up layer by layer based on a digital CAD model. Parts can be very complex, with almost no design restrictions. Due to the versatility of the AM process, different energy sources, for example, lasers or an electron beam, can be combined with different material deposition strategies in order to process almost any material. Each individual layer cycle can be regarded as an advanced welding process. For non-weldable materials, alternatives like 3D binder jet printing or extrusion of polymers can be used to manufacture AM parts.
Furthermore, after manufacturing the parts in building chambers, the unused powder can be recycled, and overall productivity can be increased. When the processes are not operated at an optimized level, defects can occur inside the parts during every stage of the process. These are often introduced by poor material distribution or by an energy deposition that does not match the specific material amount. Across AM processes, a wide range of defects, such as lack of fusion or delamination, can be found. If these are not detected, their effect on mechanical behavior can lead to critical failure of the part. Therefore, quality management must be taken into account. This can be considered in the part design, in process optimization, and in post-process nondestructive evaluation. With better knowledge about how defects form, parts can be designed such that defects are avoided in critical locations. Furthermore, the number of defects can be reduced by optimizing the whole AM process. NDE is crucial for AM parts in order to detect the majority of defects and to allow, if the respective criteria are fulfilled, the part to be installed and used. NDE, especially CT, is often used during process development in order to define optimal process parameters. During production, these can be used to control and ensure the quality of the manufactured parts. This is indispensable in order to guarantee the required material properties in terms of fatigue life and mechanical characteristics. Usually it is more economical to use faster NDE methods such as X-ray radiography, but in order to do that, the process must be well understood and locations with a higher risk of defect formation must be defined in advance.
Additive Manufacturing: From 3D CAD to NDE Investigated Part

Additive Manufacturing
In order to perform well-established and reproducible NDE, the main characteristics of each aspect of the AM process need to be fully understood. The interaction of the basic AM process with the special design of the parts and the typically occurring defects are discussed below. Furthermore, each material class is considered individually, and the respective process parameters relevant for NDE are shown.
Process Description
AM is a relatively young technology (started in the 1980s [1]) whereby 3D parts are manufactured by stacking 2D layers on top of each other. This processing route brings different advantages [2] compared to conventional forming processes such as forging, casting, and/or milling. Some of these are:
• Fewer tools are needed
• Small batches, prototypes, and individual parts (e.g., implants) can be built economically
• Fast and easy incorporation of (design) changes is possible
Fig. 1 Schematic representation of powder bed process
• Almost no geometry restrictions
• High resource efficiency due to almost no waste and lightweight design

When using AM processes, several problems can occur:
High investment costs due to expensive machinery and high maintenance effort No mass production possible (not yet) Insufficient or poor surface quality Thermally introduced residual stresses A variety of defects inside the manufactured volume as listed in [2–4]
The unique manufacturing route of AM parts is based on a cycle process, see Fig. 1, which is basically similar for the different AM processes and consists of the following steps:

(a)–(c) Manufacturing the Part Virtually
This step consists of designing the part with 3D CAD software and then transferring it to an .STL file. In this way, the surface of the part is optimized, divided into layers (the thickness can vary between 25 and 200 μm [3]), and finally transmitted to the selected machine for the machining step. Generally, thicker layers require more energy to melt the material while speeding up the build process. However, their surface is very rough, and precision is lower than for parts manufactured with thinner layers due to the staircase effect, see Fig. 6.

(d) Material Application
In this step, the feedstock is either transferred into the building layer or evenly distributed on the building platform by a rake or a roller.
(e) Bonding/Melting the Powder
The next step is to melt the powder according to the .STL file. For this purpose, a heat source (typically a laser or an electron beam) conducts energy into the building layer. In order to minimize the number of typical defects which occur during 3D printing (see below), the printing strategy has to be optimized (line energy, focus offset, etc.).

(f) Lowering the Building Platform
After bonding/melting a layer, the building platform must be lowered in order to apply the next powder layer. When manufacturing fragile parts, such as very thin or very complex parts, defined pause times must be observed between layers to minimize the risk of delamination.

Once a cycle is completed, the whole process starts again until the part is fully manufactured.
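The repetition of steps (d)–(f) can be pictured as a simple control loop. The Python sketch below illustrates that loop in schematic form; the Machine class and its method names are illustrative stand-ins, not a real vendor API, and the layer contours would in practice come from the sliced .STL file.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Machine:
    layer_thickness_um: float = 50.0  # typical range: 25-200 um
    platform_z_um: float = 0.0
    log: List[str] = field(default_factory=list)

    def apply_powder(self) -> None:
        self.log.append("rake/roller: spread powder layer")

    def fuse_layer(self, contour: str) -> None:
        self.log.append(f"beam: melt/bond contour '{contour}'")

    def lower_platform(self) -> None:
        self.platform_z_um -= self.layer_thickness_um
        self.log.append(f"platform lowered to z = {self.platform_z_um} um")

def build(machine: Machine, sliced_layers: List[str]) -> None:
    """Repeat the application / fusing / lowering cycle for every layer."""
    for contour in sliced_layers:
        machine.apply_powder()       # step (d)
        machine.fuse_layer(contour)  # step (e)
        machine.lower_platform()     # step (f)

if __name__ == "__main__":
    m = Machine()
    build(m, ["layer_001", "layer_002", "layer_003"])
    print("\n".join(m.log))
```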
Machining and Finishing
The surfaces of load-bearing components are subject to the most stringent requirements. At the current time, the surface quality of AM components is not reliable. This is due to many factors, such as support structures that may be present and necessary, or residues of the processed powder, for example. For this reason, most load-bearing additively manufactured components must be finished. Due to the layer-by-layer structure, the mechanical behavior can differ between the z and x-y directions. Therefore, different precautions need to be taken from a design and mechanical point of view.
Geometry Design
The layer-by-layer AM process allows designers and engineers to overcome the limitations of conventional manufacturing technologies [5–7]. Nevertheless, there are geometrical limitations for AM parts, which depend on the AM process used. Overall it can be said that, for example, large overhang structures or small radii can lead to difficulties. In order to maximize the potential of parts designed for the AM process, technologies like topology optimization (TO) can be used. TO is a design tool that calculates the optimal material distribution of a component based on the applied loads [8]. The technology is mainly used to reduce weight, and on that basis, load-bearing parts can be redesigned [9]. Due to the bionic appearance of the resulting topologies, conventional manufacturing technologies are often not suitable for this kind of part [10]. Weight reduction and improved stiffness are key features for the further development of high-performance components, for example, in aerospace. The redesign of weight-bearing and lightweight structures toward TO for AM is a complex process and requires a stringent approach based on a defined process chain, see Fig. 2. Previously conventionally manufactured components are evaluated regarding the benefits of a design revision, based on economic factors. The subsequent TO of
Fig. 2 Workflow of the manufacturing approach [11]
Fig. 3 TO workflow, redesign of the structure, adjustments for AM processing, and comparison with the original part design [11]
Fig. 4 Left: successfully completed build job with test specimens and witness samples, right: complete post-machined Ti-based test specimen [11]
selected parts is performed to achieve the requirements for functional integration and weight reduction while still meeting the requirements for stiffness and strength. At the same time, a material validation and characterization process must be performed, and nondestructive evaluation methods must be used. The design verification process for AM ensures that manufacturing constraints and expertise are used to achieve a printable part. The printed model can be used to compare its properties to the original component's properties [11], see Fig. 3. After TO, the additively manufactured parts, see Fig. 4 left and right, need to be tested under representative conditions. The test campaign for qualification may include thermal cycling, vibration tests, static as well as dynamic load tests, and NDE. These methods are used to ensure the quality in series production processes.
Material Classes

Ceramics
For ceramic materials, the screen printing process is often used in manufacturing, in which a suspension – consisting of an aqueous or organic solvent and particles of the material to be processed – is deposited in layers on top of each other. These layers are subsequently dried and sintered [12, 13]. Pure melting processes normally yield heavy cracking as well as strong delamination and, hence, are only used in selected applications. In addition, more and more advanced material combinations are a matter of current research and development work as a result of new industrial requests. This includes material mixtures such as composite structures, for example via Thermoplastic 3D Printing (CerAM – T3DP) [14, 15].

Plastics
Fused filament fabrication (FFF) or fused deposition modeling (FDM) is a material extrusion process among the AM technologies, initially presented in a patent from Stratasys Inc. in 1989 [16]. The main principle is to soften thermoplastic materials, for example polymers, in the form of a filament by heat and subsequently extrude the polymer melt onto a planar surface. During the extrusion process, the print head or print bed is moved in order to deposit material directly where it is needed. Typically, the extrusion nozzle is made of brass (with an optional ruby tip) or hardened steel and has a diameter in the range between 0.2 and 0.8 mm. Due to the typical shortfalls of this process, printed parts show anisotropic material properties. The main effects behind this are:

Raster Angle Dependency [17]
Raster orientation refers to the direction of the extruded bead relative to the mechanical load on a sample. The corresponding toolpath is usually selected within the slicer software. For dense parts, the raster pattern can be concentric or meander-formed. Furthermore, the meander angle can be oriented to the load direction. Concentric or axial meander (0° angle to the load direction) orientations tend to be stronger, while increasing the angle to the load direction usually lowers the mechanical strength due to the weaker inter-bead bonds.

Air Gaps [17, 18]
Due to the nature of extrusion, softened polymer beads are extruded and shaped upon contact with the print bed by factors such as gravity, translational speed of the print head, nozzle diameter, nozzle-to-print bed distance, and cooling temperature. The bead shape is typically round (more precisely, elliptic), which results in gaps between adjacent extrusion lines and layers [19]. Usually, an extrusion factor or extrusion multiplier, which takes into account the resulting bead width as a function of print speed and feed rate, is used to mitigate this effect and fill up the free volume; however, this may result in over-extrusion and finally in a lack of overall geometrical precision. Another possibility is to increase the overlap between neighboring beads; the difference between overlap parameters is shown in Fig. 5. However,
Fig. 5 Micrographs of cross-sectional areas of FFF samples with different overlap rates and a total thickness of 1.2 mm: (a) layer heights of 0.1 mm, (b) 0.2 mm, and (c) 0.3 mm [19]
Fig. 6 Schematic picture of overlapping effect: The higher the overlap rate, the more detailed the part contours
these process parameters have to be adjusted with respect to the print head's configuration and material. Moreover, to reach high precision, the overall geometry of a part should match the nozzle diameter (or better, the extrusion width) or vice versa. As a practical example, consider a build job with a width of 2 mm and a bead width of 0.4 mm. Printing with an overlap of 50% results in a total of nine lines required to cover the whole width. For 25% overlap, six lines are necessary; in that case, the lines in the center will have an overlap of 0%, leading to possible unfilled, under-extruded gaps, see Fig. 6.
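The line count in this example can be reproduced with a few lines of Python. This is a minimal sketch under the stated assumptions (constant bead width, nominal pitch of bead width times one minus overlap); the function name is illustrative, and real slicers use more elaborate path planning.

```python
import math
from typing import Tuple

def lines_and_gap(part_width_mm: float, bead_width_mm: float,
                  overlap: float) -> Tuple[int, float]:
    """Extrusion lines that fit at the nominal pitch, plus uncovered width.

    A positive gap means the slicer must locally reduce the overlap
    (down to 0% in the center) or leave an under-extruded seam.
    """
    pitch = bead_width_mm * (1.0 - overlap)
    n = 1 + math.floor((part_width_mm - bead_width_mm) / pitch + 1e-6)
    gap = part_width_mm - (bead_width_mm + (n - 1) * pitch)
    return n, max(0.0, gap)

# worked example from the text: 2 mm wide feature, 0.4 mm bead width
for ov in (0.50, 0.25):
    n, gap = lines_and_gap(2.0, 0.4, ov)
    print(f"overlap {ov:.0%}: {n} lines, uncovered width {gap:.2f} mm")
```

Running the sketch reproduces the figures in the text: nine lines fully cover the width at 50% overlap, while at 25% overlap six lines leave 0.1 mm uncovered, which is exactly where the under-extruded gaps of Fig. 6 would appear.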
Interlayer Bonding
Due to the plane-wise extrusion, the mechanical strength in the lateral direction is higher than in the vertical print direction. Poor interlayer bonding is typically explained by insufficient fusing, caused either by an improper nozzle diameter to layer height adjustment (generally, the layer height should be around 20% smaller than the nozzle diameter) or by low temperature: the polymer melt starts cooling down directly after exiting the nozzle, and warm polymer tends to bond better than cold polymer. Additional external heating via laser [20], infrared light [21], cold plasma [22], friction [23], or microwave [24] has been shown to increase interlayer bonding strength significantly. An overview of these techniques can be found in Shih [25].

Metals
Metal AM can be classified into nozzle-based (direct energy deposition, DED) and powder bed-based (powder bed fusion, PBF) processes. Using DED-based processes, a wide range of materials can be processed. Due to its flexibility, DED is often used to add geometry elements to existing components (also in a hybrid approach). PBF offers greater geometrical freedom and is usually used for the production of complete, complex components.

Laser Metal Deposition
Laser Metal Deposition (LMD) is an AM process that is assigned to the DED processes. Due to the relative movement between the nozzle and the substrate, layers can be deposited. In Fig. 7 an LMD process is shown. During the manufacturing process, the bulk material is melted using a laser as a heat source, and powder is blown via a carrier gas, like helium or argon [26], into the melting pool using a coaxial nozzle.

Fig. 7 Multi-material LMD principle: several materials can be simultaneously supplied in a powder mixing unit on top. This yields a homogeneous multi-material supply in the process zone close to the substrate [29]
The powder interacts there with the melting pool and gets absorbed in order to manufacture the desired part. In order to fully absorb the powder into the melting pool, a minimum energy input is needed, which can be called the line energy, see Eq. (1), where $I_B$ denotes the beam power and $v_s$ the scanning speed:

$$E_L = \frac{I_B}{v_s} \tag{1}$$
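As a small worked example of Eq. (1), the following sketch computes the line energy from the beam power and scanning speed; the numeric values are illustrative only and are not validated LMD parameters.

```python
def line_energy(beam_power_w: float, scan_speed_mm_s: float) -> float:
    """Line energy E_L = I_B / v_s in J/mm, following Eq. (1)."""
    return beam_power_w / scan_speed_mm_s

# e.g., an assumed 2000 W beam at 10 mm/s gives 200 J/mm
print(line_energy(beam_power_w=2000.0, scan_speed_mm_s=10.0))
```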
When powder gets deposited, heat transfers through the prior layers into the substrate, which acts as a heat sink. By using powder feeders, different materials can be mixed and blown simultaneously into the melting pool. Because LMD is basically a very advanced welding process [27], the process needs to be optimized in order to minimize the previously discussed defects. To that end, different factors need to be taken into consideration; they are discussed in the following.

Laser Spot Size
The laser spot size is the most important parameter describing the size of the process. If the line energy is kept constant, a larger spot size conducts more energy into the substrate, more powder can be absorbed, and the single track size rises [28]. The disadvantage of using larger spot sizes is a larger staircase effect; the resolution of the parts therefore decreases, and more finishing after the process must be taken into account. Furthermore, a higher conducted energy increases the possibility of overheating the melting pool, which could lead to defects such as hot cracks or delamination.

Laser Power
When looking at the influence of the laser power, the assumption must be made that the spot size is constant. When ramping up the laser power, the tracks get larger due to the higher conducted energy, which can lead to swelling, gas porosity, and high residual stresses that can in turn lead to hot cracking. Furthermore, the risk of oxidation of the melting pool is increased, and the resolution decreases, comparably to the effect of the laser spot size. If the laser power is too low, defects such as lack of fusion and not fully melted powder can occur.

Scanning Speed
The speed at which the nozzle moves relative to the part is called the scanning speed. If the scanning speed is very high, the conducted energy is very low compared to a lower speed. Therefore, lack of fusion defects and not fully melted powder are the defects that occur most often. The higher line energy conducted into the substrate when using slow scanning speeds usually leads, when optimized correctly, to good absorption of the blown powder and proper bonding between layers. Furthermore, if the cooling of the substrate is at a moderate level, the bonding to the next track can be expected to be better. If the parameters are not chosen correctly, the substrate will suffer from overheating, and some of the defects discussed above can occur.
Powder Mass
The track height is mainly dependent on the powder mass flow blown into the melting pool and therefore determines the building speed. The amount of blown powder must be balanced with the laser power, because it defines the viscosity of the melting pool (at constant laser power). The higher the amount of powder, the higher the viscosity of the molten pool gets. This can lead to a stronger track, but if too much powder is added, it will lead to defects such as unmolten powder and lack of fusion, because the energy supplied is not sufficient to absorb all the powder particles.

Powder Bed Fusion
Powder bed fusion comprises the processes of laser powder bed fusion (LPBF), selective electron beam melting (sEBM), and 3D binder jet printing (BJP). The powder is evenly distributed onto the building plate with a roller or a rake at a defined height, see Fig. 8 left. The larger the height, the faster the build takes place. Once the powder is distributed, the energy source scans/melts the powder selectively in a predesigned pattern, see Fig. 8 right. In order to prevent overheating, the beam jumps between melting pools. One main difference between LPBF and sEBM is the energy source used. When working with an electron beam, the powder needs to be sintered before being melted; that way, smoking (powder being blown away by repulsive forces due to the negative charge of the powder) is prevented. In order to manufacture almost defect-free parts via powder bed processes, different factors need to be optimized. These are discussed in the following.

Scanning Speed
The scanning speed defines the velocity at which the heat source scans the layer. Since no powder is blown, the scanning speeds of powder bed processes are much faster than in the equivalent LMD process. When the corresponding laser power or beam current is constant, a faster scanning speed increases the build rate, because the scanning happens much faster and a new layer can be deposited earlier. Furthermore,
Fig. 8 Left: LPBF principle [30]; right: laser powder bed melting parameters [30]
the risk of overheating, and therefore the risks of delamination, hot cracks, and swelling, are reduced, because less energy is directed into the substrate. If the conducted heat is low, defects such as lack of fusion or unmolten powder can occur because of the insufficient energy to fully melt the powder. When using a slower scanning speed, in addition to the risk of delamination, the risk of operating in keyhole mode increases, which leads to the formation of small gas pores and a possible loss of alloying elements.

Laser Power/Beam Current
The influence of different laser power levels on the LPBF process is similar to the LMD process. When using too high a power, the melting pool is too large, the resolution is poor, and the risk of swelling increases. The risk of oxidation in the LPBF process is reduced compared to LMD due to the use of a shielding gas like argon; this does not apply to the sEBM process, because it is carried out under vacuum. For sEBM, the beam current must instead be optimized for each material due to the risk of vaporization of alloying elements.

Hatch Spacing
Hatch spacing describes the overlap between two single tracks. The higher the overlap, the better the bonding between tracks becomes, at the cost of an increased layer time. A higher risk of overheating can then be expected, which could lead to defects such as delamination, loss of alloying elements, or gas porosity. If single tracks are too far apart, the gap between them cannot be filled properly by the melting pool, and large defects can be seen.

Focus Offset
The focus offset is directly comparable to the laser spot size of the LMD process and will not be discussed further, see LMD.

Working Atmosphere
Working under a correct and stable atmosphere is very important for the successful manufacturing of high-quality AM parts. LPBF processes are carried out under an argon atmosphere. A high quality of the shielding gas is important to prevent the risk of oxidation. The vacuum atmosphere of the sEBM process is needed to work with an electron beam as the energy source. Furthermore, it prevents the risk of oxidation almost completely, but, as discussed above, the risk of vaporizing alloying elements is very high and needs to be considered carefully.

Powder Quality
The powder used for powder bed processes needs to have a spherical shape as well as a homogeneous powder particle distribution in order to have stable rheological properties and good rake quality and to guarantee a stable welding result. Besides, the powder should not have gas entrapped from the atomization process; otherwise, the risk of forming small gas pores (≤15 μm) increases. The powder particle size distribution used for the LPBF process is very fine compared to the sEBM
process. In turn, the resulting surface quality is higher in terms of roughness and resolution.
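The PBF parameters discussed above (laser power or beam current, scanning speed, hatch spacing, and layer thickness) are often condensed into a single volumetric energy density figure, E = P / (v · h · t). This metric is common in the PBF literature rather than something defined in this chapter, and the sketch below uses illustrative, non-validated values.

```python
def volumetric_energy_density(power_w: float, scan_speed_mm_s: float,
                              hatch_mm: float, layer_mm: float) -> float:
    """Volumetric energy density E = P / (v * h * t) in J/mm^3.

    A common screening metric, not a guarantee of part quality:
    too low tends toward lack of fusion and unmolten powder,
    too high toward keyhole pores, overheating, and loss of
    alloying elements.
    """
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

# illustrative LPBF-like values: 200 W, 800 mm/s, 0.12 mm hatch, 0.03 mm layer
print(f"{volumetric_energy_density(200.0, 800.0, 0.12, 0.03):.1f} J/mm^3")
```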
Binder Jet 3D Printing
The binder jet printing (BJP) process is an AM process whereby a liquid bonding agent is used to selectively join powder materials into a green part. To achieve the final properties, these green parts need to be sintered. In order to manufacture printed parts, the steps shown in Fig. 9 are necessary [31]. The way of designing and slicing the parts is comparable to the other AM processes mentioned and is described in the previous sections. The powder is rolled from a powder supply into the build box and is spread slightly compacted. During the fusing process, a print head applies a liquid binder to the powder, which fills the spaces between the powder particles [32] according to the CAD model. The binder needs to dry after each deposited layer; it needs to completely bond with the powder before the next layer is added. If drying is insufficient, the risk of cracks and agglomeration or of powder sticking to the roller rises [33]. Sometimes a heat source is necessary to speed up the drying process. During each drying step, the roller is cleaned, and the next layer is applied. When the printing process is finished, the parts need to be cured and depowdered. The curing process is an additional drying process, carried out in a separate oven, which is needed for some BJP processes to achieve the desired green strength [33]. Before the sintering process can be carried out, the loose powder surrounding the printed parts needs to be removed, which can be done with a vacuum cleaner, gentle air blasts, or manual brushing [33]. The subsequent sintering process will not be discussed further; many references on this matter can be found in the literature [34, 35]. As discussed above, an optimized process for each material is necessary to achieve high-quality and almost defect-free printed parts. The most important
Fig. 9 Schematic representation of a BJP – after [32]
parameters for optimizing the process are described in the paper by Mostafaei et al. [33] and listed as powder characteristics, binder properties, print parameters, and overall design.

Powder Characteristics
Similarly to the other AM processes described before in this chapter, the powder used should have high flowability as well as a homogeneous and high packing density in the layers [36]. In order to accomplish that, a highly spherical powder with a powder particle distribution of D50 = 35 μm has to be used. The layer density can be increased by using bi- or multi-modal powder [37, 38]. Another advantage of a homogeneous powder distribution via spherical powder is the better penetration of the liquid binder into the powder bed, which results in a reduction of defects [33] as well as a higher sinter activity [34]. Finer powder shows better sinter activity, but when using very fine powder, the flowability is insufficient [39], and a higher risk of agglomeration can be expected due to the high cohesion between powder particles [33].

Binder
Since binders act as the solidification counterpart during the BJP process, the requirements are very versatile. Typical binders engineered directly for the BJP process are either monomer or polymer based [33]. The main difference between them is the way of crosslinking: forming a solid frame (monomer) or attaching polymer bridges between powder particles (polymer). There is a much larger variety of different binders, for different material systems, in use today [40]. The stability of the printed green part depends on the binder used [41]. Therefore, a proper binder should be selected for each material individually. The main characteristics of a binder, so that it can be deposited from the print head, are its rheology and stability. Additionally, good penetration of the powder layers and sufficient stability of the green part are also important. Next to the stability of the green part, the resolution is mainly dependent on the binder formulation, especially the droplet size [42]. To ensure a reproducible process, the binder must have strong chemical stability. Therefore, it should have a high boiling temperature (above room and shipping temperatures). After binder jet printing, during the curing and sintering process, the binder needs to be temperature resistant up to several hundred degrees in order to support the green part, and above its stable temperature it must burn away without residue [40].

Print Parameters
Like in every other AM process, the print parameters have a significant impact on the quality of the printed part. The layer thickness defines, in addition to the binder, the resolution of the printed part and is mainly dependent on the powder particle distribution. Generally, the layer thickness should be larger than the largest powder particles in order to use all powder particles during the BJP process [43, 44]. The layer thickness usually ranges between 15 and 300 μm; larger layers lead to poorer resolution and powder-bed density [33]. Printing speed is a
combination of recoat speed, oscillation speed, roller speed, and roller traverse speed; it is the time-determining parameter of the print job and therefore needs to be optimized for every print job and material [33, 45]. Studies have shown that higher printing speeds lead to lower accuracy [46] as well as a rise in surface roughness [47]. During printing, the binder saturation parameter has a significant impact on the quality of the green part [33]. When the binder saturation is considerably too low, the risk of layer delamination and of defects after burnout of the binder and sintering is high. In contrast, an oversaturation of the powder leads to high shrinkage, and therefore lower accuracy and high surface roughness [33, 48] can be expected as a consequence. Furthermore, a high amount of binder can cause powder to stick to the roller and therefore increases the risk of cracks, roughness, and inaccuracy [33]. After each step of binder deposition, the layer must be dried with a heater. On the one hand, this results in a higher stability of the printed part; simultaneously, the print head is cleaned of excess binder to prevent clogging of the nozzle [49] or oversaturation of the powder. Long drying times as well as high drying temperatures can cause shrinkage and significantly increase the coating times [34].
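Binder saturation is commonly quantified in the BJP literature as the fraction of the pore volume of the powder bed filled by binder. The sketch below computes this ratio under that assumed definition; the function name and the numeric values are illustrative only and are not drawn from this chapter.

```python
def binder_saturation(binder_volume_mm3: float,
                      envelope_volume_mm3: float,
                      packing_density: float) -> float:
    """Fraction of the pore volume filled by binder.

    Assumed definition: S = V_binder / V_pores, with
    V_pores = (1 - packing_density) * V_envelope.
    Values well below 1 risk delamination; values near or above 1
    risk shrinkage, bleeding, and powder sticking to the roller.
    """
    pore_volume = (1.0 - packing_density) * envelope_volume_mm3
    return binder_volume_mm3 / pore_volume

# illustrative numbers: 100 mm^3 printed envelope, 55% packing density
print(f"S = {binder_saturation(30.0, 100.0, 0.55):.2f}")  # S = 0.67
```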
Typical Defects
Depending on the AM process, differently shaped defects can occur in the manufactured components, which can influence the mechanical behavior to a certain extent due to their shape and position in the part [50]. Therefore, fully dense fusion between the layers is very important in order to guarantee high quality standards when using AM processing. In this section, different AM-specific defects, including lack of fusion (LoF), gas porosity, loss of alloying elements, unmolten powder, swelling, and cracks, will be discussed.
Lack of Fusion
Lack of fusion defects are very common in AM processes if the process is not properly optimized. In order to ensure proper bonding between layers, the supplied energy must be high enough that the melting pool can penetrate the previous layer. The penetration depth is mainly influenced by the material as well as by process parameters such as the energy source, beam power, and scanning speed [51]. Especially for PBF processes, the penetration depth should be larger than the layer thickness to guarantee strong layer bonding [52]. Lack of fusion defects are usually larger than 10 μm and have sharp edges [53]. When these occur in load-bearing parts, they act as crack nucleation sites, reducing load-bearing cross sections and ultimately decreasing the mechanical properties [54]. LoF defects are visible in Fig. 10.
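The rule of thumb that the melt pool must penetrate at least one full layer can be turned into a simple screening check, sketched below. This is a simplified one-dimensional criterion; real lack-of-fusion criteria also account for hatch spacing and melt pool width, and the values shown are illustrative.

```python
def lack_of_fusion_risk(melt_depth_um: float,
                        layer_thickness_um: float,
                        safety_factor: float = 1.0) -> bool:
    """Flag LoF risk when the melt pool does not penetrate at least
    one full layer into the previously consolidated material."""
    return melt_depth_um < safety_factor * layer_thickness_um

# illustrative values: melt depth vs. a 50 um layer
print(lack_of_fusion_risk(60.0, 50.0))  # False -> full re-melting of layer
print(lack_of_fusion_risk(40.0, 50.0))  # True  -> LoF defects likely
```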
Fig. 10 Lack of fusion (LoF) defects (micrograph of cross-sectional area), 316 L, LMD, © Fraunhofer IWS Dresden
Gas Porosity
There are two main mechanisms by which gas pores originate in 3D-printed parts [53]. First, AM processes can be performed in keyhole mode [55] when operating with high power density. When operating above the optimal process window and overheating the process zone, the risk of forming small, almost perfectly round keyhole voids increases [55, 56]. Second, gas pores can originate from gas trapped in the powder due to the gas atomization process [34].
Loss of Alloying Elements
In most AM processes, vaporization of alloying elements happens when the melting pool temperature is too high [53]. When the beam power is excessively high, the risk of a plasma phase forming is considerable [26], and metals are likely to evaporate [55]. A special case is the selective electron beam melting process, which is operated under a vacuum atmosphere (10⁻³–10⁻⁵ mbar). When working under these conditions and at high temperatures, alloying elements with high vapor pressures are likely to evaporate [57] when melted with a focused electron beam. If certain elements evaporate, the microstructure, solidification, corrosion resistance, and overall mechanical performance of the manufactured parts are affected.

Unmolten Powder
Unmolten powder is a typical AM defect that can be observed when the energy conducted into the substrate is too low, see Figs. 11 and 12. These defects often act as crack nucleation sites and are therefore often responsible for early critical failure [58].

Delamination/Swelling
Delamination is the separation of two layers due to extreme heat at the top of the part and, consequently, residual stress exceeding the mechanical properties of the material [59], see Fig. 13. For AM parts, the build plate acts as a heat sink; with each added layer, the temperature at the top rises. Without careful heat management, for example, of the conducted energy, pause times between layers, or adjustment of the scanning speed,
Fig. 11 Fracture surface, unmolten powder trapped in pores, as-built state, Ti64, LPBF (SEM image), © TU Dresden
Fig. 12 Unmolten powder, as-built state, Ti-5553, sEBM (SEM image), © TU Dresden
layers will bulge. Mukherjee et al. [59] showed that using more layers to build up the same part height reduces the residual stress.
Cracks

In AM, two hot-cracking mechanisms, see Fig. 14, have a major impact: solidification cracking and liquefaction cracking [60]. Solidification cracking occurs in the so-called mushy zone, where solid and liquid phases coexist during solidification [61]. During solidification, dendrites form and can hinder the flow of the remaining liquid phase [62]. Cracking is then the result of these crack nucleation sites combined with the stresses introduced by shrinkage during solidification, and it is mainly observed along the grain boundaries [63]. Liquefaction cracking is one of the main hot-cracking mechanisms in alloys with a large number of precipitates at the grain boundaries [64]. It can be traced back to low-energy melting conditions
Fig. 13 Extreme delamination – swelling, Ti-5553, sEBM, © TU Dresden
Fig. 14 Hot crack, Inconel 718 (micrograph of cross-sectional area), LMD, © Fraunhofer IWS Dresden
[61]. When areas away from the melt pool are rapidly heated to just below the overall liquidus temperature, certain low-melting phases, for example, low-melting-point carbides, melt [65]. These liquid grain boundary phases, in combination with unmolten carbides and residual stresses, can act as crack nucleation sites [60]. Cold cracking is a phenomenon that only appears in fully solidified parts. When the fracture toughness of the processed material, such as TiAl or NiAl, is very poor at room temperature, these alloys tend to crack due to high stress. Figure 15 for the
Fig. 15 Cold crack, NiAl (micrograph of cross-sectional area), LMD, © Fraunhofer IWS Dresden
Fig. 16 Cold crack, AlSi10 (micrograph of cross-sectional area), LPBF [66]
LMD process and Fig. 16 for the LPBF process each show a specimen with a large number of cold cracks. In order to process these materials via AM, the processing temperature must be higher than the respective brittle-to-ductile transition temperature.
Additive Manufacturing: In Situ NDE Investigation

The reason for the currently insufficient capability to predict defect formation is the variety of physical phenomena, statistical influences, and boundary conditions that cannot be captured in model-based simulations. In this context, the rapidly advancing digitalization in materials and production technology enables completely new approaches to investigating the relationships between process parameters, microstructure, and component
properties. The consistent acquisition of material, process, and component data creates a so-called digital twin, that is, a digital image of the AM process, which can be used for monitoring and optimization. Analyzing the data that will become available with machine learning methods offers high innovation potential in this context. Research is needed to exploit this potential for the quantitative mapping of process-structure-property relationships of AM components under static and cyclic loading. Due to the characteristic layer-by-layer buildup in AM processes, defects are partially subject to a cumulative, self-perpetuating effect, while the boundary conditions of the process change continuously. Small deviations in individual process parameters have considerable effects on the result of AM processes; these fluctuations must therefore be detected reliably and controlled by monitoring systems [67]. In addition to the variation of the main parameters discussed above, the holistic analysis of the manufacturing process is gaining relevance for users. The aim of current developments is to provide new tools that deliver in situ information about the resulting component quality. For challenging materials, such as titanium aluminides or nickel-based superalloys, stable process windows are narrow, and working process control can only be achieved with suitable process monitoring systems, see Fig. 17. Due to the specific requirements of laser-based manufacturing, customized acquisition options must be adapted or newly developed for a large number of process parameters. The heat load and the scattering of laser radiation pose a special challenge for measurements in the area of the process zone [68, 69]. With temperature measurement systems, a variety of quantities can be acquired; typical systems are pyrometer or thermocouple based. A camera-based system, thermography, can additionally resolve temperature gradients on the component surface [70, 71] as well as geometric process variables such as the working distance. Moreover, these systems measure contact-free and thus without influencing the process. When such temperature measurement systems are combined with powerful image processing, process variables can be recorded using software tools adapted to the application. The combination
Fig. 17 Narrowed process window when processing challenging materials; reached by, for example, process control, well-adjusted process parameters, and temperature tailoring
of camera systems recording, for example, working distance and part temperature simultaneously can decisively improve process stability during induction-assisted laser metal deposition of alloys. Feedback to the process provides control functions that contribute significantly to the stability of the process, and the required component temperature can be maintained by automatic adjustment of the inductively coupled power. In addition, the CNC system can be influenced by the feedback of the measured values in order to achieve a stable working distance even under changing ambient conditions, so that uniform deposition can be expected, see Fig. 18 [72]. Especially for the LMD process, powder nozzle characterization by laser light sectioning is another example of in situ process monitoring. A measuring laser illuminates the powder stream after it exits the LMD nozzle, and an orthogonally mounted camera records the light sections through the powder. Using analysis software, the three-dimensional distribution of the powder stream is calculated with high precision from the individual light sections, see Fig. 19. The system is used for simplified quality control and allows conclusions to be drawn regarding the degree of wear of the powder nozzle or faulty conditions such as clogging [73]. In addition, the integration of sensors into the process head enables condition monitoring. Such comprehensive monitoring not only contributes to the operational safety of the plant technology and evaluates the condition of the system technology, but also provides the basis for reliable quality assurance [74]. For documentation purposes, all data must be stored in a standardized data format and managed by a process data information management system. By correlating the measurement data with the results of component characterization, such
Fig. 18 Comparison of the built-up result without (left) and with (right) process control [72]
Fig. 19 Principle of determining the 3D powder distribution from the individual planes recorded by means of a process camera
as 3D scanning or CT, it is possible to derive patterns for certain defects that indicate a failure condition in the process. Furthermore, the data obtained can be used to validate process models, which expands the understanding of the process. Beyond describing the influence of defects, it is important to avoid their formation during the AM process in the future. However, the current state of the art indicates that a fundamental understanding of the essential physical processes and the associated influencing parameters is lacking [75]. Since experimental process monitoring is not yet completely feasible, this knowledge must be obtained from numerical process simulations. A distinction must be made between two basic categories of process models. Macroscopic approaches [76] are based on a number of simplifying assumptions, such as modelling the powder bed and the molten material as an effective continuum with thermo-elastic-plastic properties. As a result, these models can only approximate the manufacturing process at the component scale. Often, just a thermal analysis is performed to approximate the temperature history. The main knowledge to be gained from these models are statements on the form deviations resulting from the manufacturing process [77]. Since macromodels can neither resolve the dynamics of the melt pool nor represent the cyclic microstructure evolution during solidification, a fundamental understanding of the process can only be achieved with mesoscopic approaches. These are able to resolve the melting of individual particles as well as the flow processes in the melt pool of LPBF and sEBM processes [78–80]. Current developments join the concepts of fluid mechanics calculations with models that map the kinetics of grain growth [81, 82]. These models provide indispensable insights into the fundamental physical processes of AM but have strong restrictions with respect to exposure strategy and component geometry. Machine learning technologies are used to analyze the extensive database; an overview of different methods can be found in [83]. Among the multitude of approaches, deep learning in particular has become established for large data sets as an essential method for the detection of implicit relationships, feature identification, and classification [84] and has since been used in a wide variety of applications [85]. Deep learning is based on artificial neural networks [86], whereby a distinction has to be made between feedforward and recurrent networks. For applications in the field of numerical mechanics, especially the first group is of interest due to its ability to represent nonlinear relations [87]. First applications include, for example, the replacement of material models and structural
optimization [83, 88–90]. If these methods are primarily used to generate a meta-model for the effective analysis of process-structure-property relationships, predictions of material behavior can be made without an explicit formulation of material models [91–96], using new training algorithms and network structures [97–100]. In the future, new in situ technologies such as high-energy X-ray imaging [101], acoustic sensors [102, 103], or powder bed scanning [104] could be used to improve defect prediction during AM processes. It is essential to further develop the algorithms that process these data and feed them back into the machine.
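As a minimal illustration of the meta-modelling idea sketched above, the following toy example trains a small feedforward network on synthetic process data. The data, architecture, and variable names are illustrative assumptions only, not a validated process-structure-property model.

```python
# Toy feedforward network as a process-property meta-model: it maps
# (laser power, scan speed) to a scalar "porosity" target. Everything
# here (data, net size, training setup) is a synthetic placeholder.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: fake porosity rises when energy input is low.
X = rng.uniform([100.0, 400.0], [400.0, 1400.0], size=(256, 2))  # P [W], v [mm/s]
y = np.clip(1.0 - (X[:, 0] / X[:, 1]) * 3.0, 0.0, 1.0)[:, None]

# Normalized inputs, one hidden tanh layer, linear output.
Xn = (X - X.mean(0)) / X.std(0)
W1, b1 = rng.normal(0, 0.5, (2, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)

lr = 0.05
for _ in range(2000):                       # plain gradient descent on MSE
    h = np.tanh(Xn @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)          # backpropagate through tanh
    gW1 = Xn.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print("training MSE:", float(np.mean((pred - y) ** 2)))
```

Once trained, such a surrogate can be evaluated in microseconds, which is the property that makes meta-models attractive for in situ defect prediction compared to full process simulations.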
Additive Manufacturing: Post-processing NDE Investigation

The main aim of post-process nondestructive evaluation (NDE) is to locate defects and evaluate their impact on the overall mechanical performance of the part [58]. Due to its significant influence on the mechanical performance, especially on fatigue, the surface state must also be taken into account [105].
3D Scanning

Optical scanning of an object can be used, for example, for reverse engineering, but it can also be applied for the quality control of AM parts. There are several methods to optically 3D scan an object, such as photogrammetry, laser triangulation, and structured light, among others. All of these systems work by triangulation: light is projected onto the object and reflected to the camera, and since the distance between the camera and the light source is known, the distance to the object can be calculated. With the help of fringe projection and scanners equipped with it, the surfaces of additively manufactured parts can be measured. The easy operation of this equipment allows rapid inspection of simple parts and the digital conversion of physical models and prototypes with good accuracy and resolution, see Fig. 20. The operating principle is sufficiently described in the literature [106–109]. Fringe projection systems can use one or more cameras to capture the images, increasing the area registered in each shot. These systems are easy to set up and operate and offer resolutions down to 0.02 mm, although they have difficulty scanning transparent or highly reflective surfaces. This problem can be overcome by spraying the surfaces with some type of powder to enable surface detection. Another challenge arises when scanning objects that are larger than the scanner's field of view, as the various frames must be precisely aligned to image the object.

X-Ray Radiography

Radiographic testing is an imaging method of nondestructive material testing for the visualization of material inconsistencies. It is used to detect defects inside components, for example, in castings or joints. The density of a component is imaged with the aid of a suitable emitter (X-ray tube, particle accelerator, or radionuclide); the denser or thicker a component, the higher its absorption. Depending on the density and atomic number of the material, the maximum penetration depth achieved with X-rays and gamma rays fluctuates
Fig. 20 Deviation of an AM part from RUAG relative to its 3D CAD model (perspective view), CT scan, Ti64, LPBF, © TU Dresden/Fraunhofer IWS
between 50 mm (copper) and 400 mm (light metals). The component size that can be inspected thus depends on the technology used, the power of the beam source, and the component material. An X-ray scan can be seen in Fig. 21. General rules for technical radiographic testing of metallic materials with X-rays and gamma rays to detect inhomogeneities are contained in DIN EN 444 [110] and DIN EN 13068-3 [111]. The former is limited to the detection of defects using the film technique. X-ray imaging is two-dimensional; to determine the position of a defect in the volume, the X-ray inspection must therefore be performed from several angles (static or dynamic). In contrast, computed tomography (CT), which is described in DIN EN ISO 15708-2 [112], is a spatial radiographic examination method. Three-dimensional information about a test object is provided from a number of projections, either over cross-sectional planes (CT slices) or over the entire volume.
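The quoted penetration depths can be made plausible with the Beer-Lambert law, I/I0 = exp(−μt). The sketch below uses rough, single-energy mass attenuation coefficients as placeholder values; the actual coefficients depend strongly on photon energy and should be taken from standard tables.

```python
# Beer-Lambert estimate of X-ray transmission through a part,
# I/I0 = exp(-mu * t) with mu = (mu/rho) * rho. The mass attenuation
# coefficients are assumed single-energy placeholders only.
import math

MU_RHO_CM2_G = {"Ti": 0.04, "Al": 0.03, "Cu": 0.06}  # assumed values
RHO_G_CM3 = {"Ti": 4.5, "Al": 2.7, "Cu": 8.96}

def transmission(material: str, thickness_mm: float) -> float:
    mu = MU_RHO_CM2_G[material] * RHO_G_CM3[material]  # linear coefficient, 1/cm
    return math.exp(-mu * thickness_mm / 10.0)

for mat, t in [("Al", 100.0), ("Ti", 50.0), ("Cu", 50.0)]:
    print(f"{mat}, {t:.0f} mm: I/I0 = {transmission(mat, t):.3f}")
```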
Computed Tomography

X-ray micro-computed tomography (microCT, also called X-ray tomography or CT scanning) is an emerging technology for the nondestructive investigation of the structural integrity and internal inhomogeneities of samples. The nondestructive nature of the method allows the investigation of internal defects such as porosity and cracks in simple or complex structures, in addition to verifying the geometric accuracy of all surfaces, including complex and internal features. The use of the technique in AM is well established, and there is already a wide range of industrially proven applications. As the complexity of industrial components increases, so do the requirements to analyze the internal flaws of additively manufactured structures, see Fig. 22. When the results of an X-ray scan (Fig. 21) and a CT scan (Fig. 22) are compared, it can be seen that the quality of the CT scan is much better and more information about possible defects can be extracted. As is well known, the defects to be
Fig. 21 X-ray scan of a complex part from RUAG, Ti64, LPBF, © TU Dresden/Fraunhofer IWS
Fig. 22 CT scan of a complex part from RUAG, Ti64, LPBF, © TU Dresden/Fraunhofer IWS
detected in the structures depend strongly on the selected microCT equipment and its mode of operation, as well as on the material and the size of the component. In filigree metallic, additively manufactured structures, defects down to a minimum size of a few μm can be detected [113, 114]. A major advantage of microCT analysis is that several component analyses can be performed with one scan. It is thus possible to record the pore distribution, check
internal structures, and verify the accuracy of the manufactured component with the aid of a nominal/actual comparison. A disadvantage of microCT analysis, besides the very high investment cost, is the time-consuming scanning (up to 24 h). This means that 100% component testing is usually not possible in production plants. Furthermore, depending on the method of data preparation, extremely large files are generated that need to be handled.
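A back-of-the-envelope estimate illustrates why the data volumes become so large; the voxel counts below are illustrative assumptions.

```python
# Rough size of a reconstructed microCT volume: voxel count x bytes per voxel.
def ct_volume_gib(voxels_per_axis: int, bits_per_voxel: int = 16) -> float:
    return voxels_per_axis ** 3 * bits_per_voxel / 8 / 2**30

for n in (1024, 2048, 4096):
    print(f"{n}^3 voxels @ 16 bit: {ct_volume_gib(n):.1f} GiB")
# 2048^3 voxels at 16 bit already yield ~16 GiB, before any raw projection data.
```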
Ultrasonic Testing

Ultrasonic testing is an acoustic measuring method for detecting material defects using ultrasound in the frequency range between 500 kHz and 75 MHz [115]. It is based on the principle that sound waves propagate at different speeds in different media; this difference can be exploited at each material transition, for example, gas-metal. Conventionally, the waves are emitted into the tested part, and because ultrasound waves interact differently with voids, inclusions, cracks, or other separations in the microstructure, the related defects can be detected. Based on the recorded echo strength and depth position, these defects can be localized, and thickness measurements can also be performed on the component. In general, only structures larger than approx. 0.6 mm can be resolved; therefore, this technique has only a small impact on the NDE of AM parts. Furthermore, testing of complex structures is almost impossible with this method due to the linear wave propagation. Lately, advanced UT techniques have come into use, such as laser ultrasonics (LU) or phased array UT (PAUT) [116]. These have the potential to be integrated into AM machines and can therefore be used for process monitoring.
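The localization principle mentioned above follows from the pulse-echo relation d = v·t/2. A minimal sketch, assuming the textbook longitudinal wave velocity in steel:

```python
# Pulse-echo depth estimate: a reflector at depth d returns an echo after
# t = 2 d / v, so d = v * t / 2. Velocity below is the usual textbook
# longitudinal wave speed in steel; other materials differ.
V_STEEL_M_S = 5900.0

def reflector_depth_mm(echo_time_us: float, velocity_m_s: float = V_STEEL_M_S) -> float:
    return velocity_m_s * echo_time_us * 1e-6 / 2.0 * 1000.0

print(f"echo after 3.4 us -> depth = {reflector_depth_mm(3.4):.1f} mm")
```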
Summary

It was shown that the use of AM processes enables innovative approaches in terms of component design and application. Accordingly, new NDE approaches customized for AM parts need to be evaluated. Three approaches of NDE for AM were discussed, namely process optimization, part design, and post-process NDE, and all three should be considered in order to predict the effect of defects. Due to the low design restrictions and thus the possibility of integrating previously impossible functions into newly developed AM components, a field of application that has been poorly explored to date can now be exploited. By using TO in conjunction with AM processes, it is now possible to redesign entire assemblies and to integrate new functions in order to optimize them. For each AM process, material, and component, it is of immense importance to use an optimized process window (e.g., line energy or layer thickness). Thereby the total number of defects can be minimized and the lifetime of the AM components can be ensured. As described, the main lifetime-limiting inhomogeneities occurring in AM parts are LoF, crack, and delamination defects. An important step to minimize the defect density and enhance the lifetime of AM parts is the continuous monitoring, control, and regulation of the AM process. Among other measuring equipment, camera-based IR temperature measurement
systems, working distance measurements, and acoustic sensors are used for this purpose. In order to guarantee all-encompassing quality control and assurance, it is essential to perform subsequent nondestructive testing of additively manufactured components. As described, ultrasound, X-ray, and industrial CT systems can be used for this purpose, whereby individual solutions must be developed for the different applications of the components. In the future, it would be conceivable and useful to use machine learning for process and product development in order to make even greater use of the described advantages of the particular AM processes.
Cross-References
▶ Introduction to NDE 4.0
▶ In Situ Real-Time Monitoring Versus Post NDE for Quality Assurance of Additively Manufactured Metal Parts
▶ NDE in Additive Manufacturing of Ceramic Components
References
1. Bourell D, et al. Materials for additive manufacturing. CIRP Ann. 2017;66(2):659–81. https://doi.org/10.1016/j.cirp.2017.05.009. 2. Khajavi SH, Partanen J, Holmström J. Additive manufacturing in the spare parts supply chain. Comput Ind. 2014;65(1):50–63. https://doi.org/10.1016/j.compind.2013.07.008. 3. Herzog D, Seyda V, Wycisk E, Emmelmann C. Additive manufacturing of metals. Acta Mater. 2016;117:371–92. https://doi.org/10.1016/j.actamat.2016.07.019. 4. Uriondo A, Esperon-Miguez M, Perinpanayagam S. The present and future of additive manufacturing in the aerospace sector: a review of important aspects. Proc Inst Mech Eng Part G J Aerosp Eng. 2015;229(11):2132–47. https://doi.org/10.1177/0954410014568797. 5. Ramadani R, Belsak A, Kegl M, Predan J, Pehan S. Topology optimization based design of lightweight and low vibration gear bodies. Int J Simul Model. 2018;17(1):92–104. https://doi.org/10.2507/IJSIMM17(1)419. 6. Seabra M, et al. Selective laser melting (SLM) and topology optimization for lighter aerospace components. Procedia Struct Integr. 2016;1:289–96. https://doi.org/10.1016/j.prostr.2016.02.039. 7. Emmelmann C, Petersen M, Kranz J, Wycisk E. Bionic lightweight design by laser additive manufacturing (LAM) for aircraft industry. Strasbourg. 2011. p. 80650L. https://doi.org/10.1117/12.898525. 8. Groenewaeller S. Theorie und Numerik zur freien Designoptimierung mechanischer Strukturen. PhD, Dortmund, 2007. 9. Wong J, Ryan L, Kim IY. Design optimization of aircraft landing gear assembly under dynamic loading. Struct Multidiscip Optim. 2018;57(3):1357–75. https://doi.org/10.1007/s00158-017-1817-y. 10. Walton D, Moztarzadeh H. Design and development of an additive manufactured component by topology optimisation. Procedia CIRP. 2017;60:205–10. https://doi.org/10.1016/j.procir.2017.03.027. 11. Willner R, et al. Potential and challenges of additive manufacturing for topology optimized spacecraft structures. J Laser Appl. 2020;32(3):032012. https://doi.org/10.2351/7.0000111. 12. Heinrich JG, Gomes CM. Einführung in die Technologie der Keramik. p. 214.
13. Bikas H, Stavropoulos P, Chryssolouris G. Additive manufacturing methods and modelling approaches: a critical review. Int J Adv Manuf Technol. 2016;83(1–4):389–405. https://doi. org/10.1007/s00170-015-7576-2. 14. Weingarten S, et al. Multi-material ceramic-based components – additive manufacturing of black- and-white zirconia components by thermoplastic 3D-printing (CerAM – T3DP). J Vis Exp. 2019;11. 15. Scheithauer U, Schwarzer E, Richter H-J, Moritz T. Thermoplastic 3D printing-an additive manufacturing method for producing dense ceramics. Int J Appl Ceram Technol. 2015;12(1): 26–31. https://doi.org/10.1111/ijac.12306. 16. Crump SS, Muir AEPD. Creating Three-Dimensional Objects. US5121329, 1992. 17. Ahn S, Montero M, Odell D, Roundy S, Wright PK. Anisotropic material properties of fused deposition modeling ABS. Rapid Prototyp J. 2002;8(4):248–57. https://doi.org/10.1108/ 13552540210441166. 18. Kim J. Optimization of design and manufacturing process of fusion filament fabrication (FFF) 3D printing. PhD, West Virginia University Libraries. 2018. 19. Garzon-Hernandez S, Garcia-Gonzalez D, Jérusalem A, Arias A. Design of FDM 3D printed polymers: an experimental-modelling methodology for the prediction of mechanical properties. Mater Des. 2020;188:108414. https://doi.org/10.1016/j.matdes.2019.108414. 20. Sabyrov N, Abilgaziyev A, Ali Md H. Enhancing interlayer bonding strength of FDM 3D printing technology by diode laser-assisted system. Int J Adv Manuf Technol. 2020;108(1– 2):603–11. https://doi.org/10.1007/s00170-020-05455-y. 21. Kishore V, et al. Infrared preheating to improve interlayer strength of big area additive manufacturing (BAAM) components. Addit Manuf. 2017;14:7–12. https://doi.org/10.1016/j. addma.2016.11.008. 22. Narahara H, Shirahama Y, Koresawa H. Improvement and evaluation of the interlaminar bonding strength of FDM parts by atmospheric-pressure plasma. Procedia CIRP. 2016;42: 754–9. https://doi.org/10.1016/j.procir.2016.02.314. 23. Li G, et al. Effect of ultrasonic vibration on mechanical properties of 3D printing non-crystalline and semi-crystalline polymers. Materials. 2018;11(5):826. https://doi.org/10. 3390/ma11050826. 24. Sweeney CB, et al. Welding of 3D-printed carbon nanotube–polymer composites by locally induced microwave heating. Sci Adv. 2017;3(6):e1700262. https://doi.org/10.1126/sciadv. 1700262. 25. Shih CC. Effects of cold plasma treatment on interlayer bonding strength in fused filament fabrication (FFF) process. Master of Sience Thesis, Texas A&M University, USA, 2019. 26. Azarniya A, et al. Additive manufacturing of Ti–6Al–4V parts through laser metal deposition (LMD): Process, microstructure, and mechanical properties. J Alloys Compd. 2019:804:163– 191. https://doi.org/10.1016/j.jallcom.2019.04.255. 27. Gasser A, Backes G, Kelbassa I, Weisheit A, Wissenbach K. Laser additive manufacturing: laser metal deposition (LMD) and selective laser melting (SLM) in turbo-engine applications. Laser Tech J. 2010;7(2):58–63. https://doi.org/10.1002/latj.201090029. 28. Lewis GK, Schlienger E. Practical considerations and capabilities for laser assisted direct metal deposition. Mater Des. 2000;21(4):417–23. https://doi.org/10.1016/S0261-3069(99)00078-3. 29. Brückner F. Modellrechnungen zum Einfluss der Prozessführung beim induktiv unterstützten Laser-Pulver-Auftragschweißen auf die Entstehung von thermischen Spannungen, Rissen und Verzug. Dresden: Technische Universität Dresden; 2011. 30. Shipley H, et al. 
Optimisation of process parameters to address fundamental challenges during selective laser melting of Ti-6Al-4V: a review. Int J Mach Tools Manuf. 2018;128:1–20. https://doi.org/10.1016/j.ijmachtools.2018.01.003. 31. Allen SM, Sachs EM. Three-dimensional printing of metal parts for tooling and other applications. Met Mater. 2000;6(6):589–94. https://doi.org/10.1007/BF03028104. 32. Nandwana P, Elliott AM, Siddel D, Merriman A, Peter WH, Babu SS. Powder bed binder jet 3D printing of Inconel 718: densification, microstructural evolution and challenges☆. Curr Opin Solid State Mater Sci. 2017;21(4):207–18. https://doi.org/10.1016/j.cossms.2016. 12.002.
33. Mostafaei A, et al. Binder jet 3D printing – process parameters, materials, properties, and challenges. Prog Mater Sci. 2020;100707. https://doi.org/10.1016/j.pmatsci.2020.100707. 34. Schatt W, Wieters K-P, Kieback B, editors. Pulvermetallurgie: Technologien und Werkstoffe, 2., bearb. und erw. Aufl. Berlin: Springer; 2007. 35. Schatt W. Sintervorgänge. Düsseldorf: VDI Verlag GmbH; 1992. 36. Averardi A. Effect of particle size distribution on the packing of powder beds. A critical discussion relevant to additive manufacturing. Mater Today Commun. 2020;17. 37. Miyanaji H, Zhang S, Yang L. A new physics-based model for equilibrium saturation determination in binder jetting additive manufacturing process. Int J Mach Tools Manuf. 2018;124:1–11. https://doi.org/10.1016/j.ijmachtools.2017.09.001. 38. Bai Y, Wagner G, Williams CB. Effect of particle size distribution on powder packing and sintering in binder jetting additive manufacturing of metals. J Manuf Sci Eng. 2017;139(8): 081019. https://doi.org/10.1115/1.4036640. 39. Spierings AB, Voegtlin M, Bauer T, Wegener K. Powder flowability characterisation methodology for powder-bed-based metal additive manufacturing. Prog Addit Manuf. 2016;1(1– 2):9–20. https://doi.org/10.1007/s40964-015-0001-4. 40. Utela B, Storti D, Anderson R, Ganter M. A review of process development steps for new material systems in three dimensional printing (3DP). J Manuf Process. 2008;10(2):96–104. https://doi.org/10.1016/j.jmapro.2009.03.002. 41. Paranthaman MP, et al. Binder jetting: a novel NdFeB bonded magnet fabrication process. JOM. 2016;68(7):1978–82. https://doi.org/10.1007/s11837-016-1883-4. 42. Myers K, Paterson A, Iizuka T, Klein A. The Effect of Print Speed on Surface Roughness and Density Uniformity of Parts Produced Using Binder Jet 3D Printing. Physical Sciences, preprint. 2021; https://doi.org/10.20944/preprints202101.0459.v1. 43. Sutton AT, Kriewall CS, Leu MC, Newkirk JW. Powders for additive manufacturing processes: characterization techniques and effects on part properties. p. 27. 44. Simchi A. The role of particle size on the laser sintering of iron powder. Metall Mater Trans B. 2004;35(5):937–48. https://doi.org/10.1007/s11663-004-0088-3. 45. Mendoza Jimenez E, et al. Parametric analysis to quantify process input influence on the printed densities of binder jetted alumina ceramics. Addit Manuf. 2019;30:100864. https://doi. org/10.1016/j.addma.2019.100864. 46. Shrestha S, Manogharan G. Optimization of binder jetting using Taguchi method. JOM. 2017;69(3):491–7. https://doi.org/10.1007/s11837-016-2231-4. 47. Parteli EJR, Pöschel T. Particle-based simulation of powder application in additive manufacturing. Powder Technol. 2016;288:96–102. 48. Schmutzler C, Stiehl TH, Zaeh MF. Empirical process model for shrinkage-induced warpage in 3D printing. Rapid Prototyp J. 2019;25(4):721–7. https://doi.org/10.1108/RPJ-042018-0098. 49. Chen H, Zhao YF. Process parameters optimization for improving surface quality and manufacturing accuracy of binder jetting additive manufacturing process. Rapid Prototyp J. 2016;22(3):527–38. https://doi.org/10.1108/RPJ-11-2014-0149. 50. Fatemi A, et al. Fatigue behaviour of additive manufactured materials: an overview of some recent experimental studies on TI-6AL-4V considering various processing and loading direction effects. Fatigue Fract Eng Mater Struct. 2019;42(5):991–1009. https://doi.org/10.1111/ffe.13000. 51. Chen Z, Wu X, Tomus D, Davies CHJ. Surface roughness of selective laser melted Ti-6Al-4V alloy components. 
Addit Manuf. 2018;21:91–103. https://doi.org/10.1016/j.addma.2018.02.009. 52. Mukherjee T, Zuback JS, De A, DebRoy T. Printability of alloys for additive manufacturing. Sci Rep. 2016;6(1):19717. https://doi.org/10.1038/srep19717. 53. DebRoy T, et al. Additive manufacturing of metallic components – process, structure and properties. Prog Mater Sci. 2018;92:112–224. https://doi.org/10.1016/j.pmatsci.2017.10.001. 54. Yadollahi A. Additive manufacturing of fatigue resistant materials: challenges and opportunities. Int J Fatigue. 2017;98:14–31. 55. King WE, et al. Observation of keyhole-mode laser melting in laser powder-bed fusion additive manufacturing. J Mater Process Technol. 2014;214(12):2915–25. https://doi.org/10. 1016/j.jmatprotec.2014.06.005.
56. Kaplan A. 1 A model of deep penetration laser I welding based on calculation of the keyhole profile. p. 11. 57. Biamino S, et al. Electron beam melting of Ti–48Al–2Cr–2Nb alloy: microstructure and mechanical properties investigation. Intermetallics. 2011;19(6):776–81. https://doi.org/10. 1016/j.intermet.2010.11.017. 58. Masuo H. Influence of defects, surface roughness and HIP on the fatigue strength of Ti-6Al-4V manufactured by additive manufacturing. Int J Fatigue. 2018;117:163–79. 59. Mukherjee T, Zhang W, DebRoy T. An improved prediction of residual stresses and distortion in additive manufacturing. Comput Mater Sci. 2017;126:360–72. https://doi.org/10.1016/j. commatsci.2016.10.003. 60. Carter LN, Attallah MM, Reed RC. Laser powder bed fabrication of Nickel-base superalloys: influence of parameters; characterisation, quantification and mitigation of cracking. In: Huron ES, Reed RC, Hardy MC, Mills MJ, Montero RE, Portella PD, Telesman J, editors. Superalloys 2012. Hoboken: Wiley; 2012. p. 577–86. 61. Dye D, Hunziker O, Reed RC. Numerical analysis of the weldability of superalloys. Acta Mater. 2001;49(4):683–97. https://doi.org/10.1016/S1359-6454(00)00361-X. 62. Böllinghaus T, Herold H, editors. Hot cracking phenomena in welds. Berlin/New York: Springer; 2005. 63. Schatt W, Blumenauer H, editors. Werkstoffwissenschaft, 8., neu Bearb. Aufl. Stuttgart: Dt. Verl. für Grundstoffindustrie; 1996. 64. Henderson MB, Arrell D, Larsson R, Heobel M, Marchant G. Nickel based superalloy welding practices for industrial gas turbine applications. Sci Technol Weld Join. 2004;9(1):10. 65. Zhong M, Sun H, Liu W, Zhu X, He J. Boundary liquation and interface cracking characterization in laser deposition of Inconel 738 on directionally solidified Ni-based superalloy. Scr Mater. 2005;53:159–64. 66. Mueller M, et al. Microstructural, mechanical, and thermo-physical characterization of hypereutectic AlSi40 fabricated by selective laser melting. J Laser Appl. 2019;31(2):022321. https:// doi.org/10.2351/1.5096131. 67. Bi G, Sun CN, Gasser A. Study on influential factors for process monitoring and control in laser aided additive manufacturing. J Mater Process Technol. 2013;213:463–8. 68. Everton SK, Hirsch M, Stravroulakis P, Leach RK, Clare AT. Review of in-situ process monitoring and in-situ metrology for metal additive manufacturing. Mater Des. 2016;95: 431–45. 69. Purtonen T. Monitoring and adaptive control of laser processes. Phys Procedia. 2014;56:1218–31. 70. Thompson SM. An overview of direct laser deposition for additive manufacturing; Part I: Transport phenomena, modeling and diagnostics. Addit Manuf. 2015;8:36–62. 71. Hofman JT. A camera based feedback control strategy for the laser cladding process. J Mater Process Technol. 2012;212:2455–62. 72. Willner R. Konzeptionierung und Aufbau eines kamerabasierten Regelungssys-tems zur Qualifizierung des dreidimensionalen Laser-Generierens. Dresden: Technische Universität Dresden; 2015. 73. Fraunhofer IWS Dresden. Feinschliff für die Additive Produktion. 2019. 74. Fraunhofer IWS Dresden. Smart laser processing heads in the digital age. 2017. 75. Yadroitsau I. Direct manufacturing of 3D objects by selective laser melting of metal powders. These de doctorat, Saint-Etienne. 2008. 76. Schoinochoritis B, Chantzis D, Salonitis K. Simulation of metallic powder bed additive manufacturing processes with the finite element method: a critical review. Proc Inst Mech Eng Part B J Eng Manuf. 2017;231(1):96–117. https://doi.org/10.1177/0954405414567522. 77. 
https://www.schweissenundschneiden.de/artikel/eigenspannungen-und-verzug-bei-der-additiven-fertigung-durch-laserstrahlschmelzen/. Accessed 22 Feb 2021.
78. Körner C, Bauereiß A, Attar E. Fundamental consolidation mechanisms during selective beam melting of powders. Model Simul Mater Sci Eng. 2013;21(8):085011. https://doi.org/10.1088/ 0965-0393/21/8/085011. 79. Khairallah SA, Anderson AT, Rubenchik A, King WE. Laser powder-bed fusion additive manufacturing: physics of complex melt flow and formation mechanisms of pores, spatter, and denudation zones. Acta Mater. 2016;108:36–45. https://doi.org/10.1016/j.actamat.2016. 02.014. 80. Qiu C, Panwisawas C, Ward M, Basoalto HC, Brooks JW, Attallah MM. On the role of melt flow into the surface structure and porosity development during selective laser melting. Acta Mater. 2015;96:72–9. https://doi.org/10.1016/j.actamat.2015.06.004. 81. Rai A, Markl M, Körner C. A coupled cellular automaton–lattice Boltzmann model for grain structure simulation during additive manufacturing. Comput Mater Sci. 2016;124:37–48. https://doi.org/10.1016/j.commatsci.2016.07.005. 82. Panwisawas C, et al. Mesoscale modelling of selective laser melting: thermal fluid dynamics and microstructural evolution. Comput Mater Sci. 2017;126:479–90. https://doi.org/10.1016/j. commatsci.2016.10.011. 83. Oishi A, Yagawa G. Computational mechanics enhanced by deep learning. 2017. https://doi. org/10.1016/J.CMA.2017.08.040. 84. Le QV. Building high-level features using large scale unsupervised learning. In: 2013 IEEE international conference on acoustics, speech and signal processing. 2013. p. 8595–8. https:// doi.org/10.1109/ICASSP.2013.6639343. 85. Silver D, et al. Mastering the game of go with deep neural networks and tree search. Nature. 2016;529(7587):484–9. https://doi.org/10.1038/nature16961. 86. Haykin SS, Haykin SS. Neural networks and learning machines. 3rd ed. New York: Prentice Hall; 2009. 87. Yagawa G, Okuda H. Neural networks in computational mechanics. Arch Comput Methods Eng. 1996;3(4):435. https://doi.org/10.1007/BF02818935. 88. Yagawa G, Matsuda A, Kawate H, Yoshimura S. Neural network approach to estimate stable crack growth in welded specimens. Int J Press Vessel Pip. 1995;63(3):303–13. https://doi.org/ 10.1016/0308-0161(94)00040-P. 89. Kim JH, Kim YH. A predictor-corrector method for structural nonlinear analysis. Comput Methods Appl Mech Eng. 2001;8–10(191):959–74. 90. Lopez R, Balsa-Canto E, Oñate E. Neural networks for variational problems in engineering. Int J Numer Methods Eng. 2008;75(11):1341–60. https://doi.org/10.1002/nme.2304. 91. Furukawa T, Yagawa G. Implicit constitutive modelling for viscoplasticity using neural networks. Int J Numer Methods Eng. 1998;43(2):195–219. https://doi.org/10.1002/(SICI) 1097-0207(19980930)43:23.0.CO;2-6. 92. Huber N, Tsakmakis C. A neural network tool for identifying the material parameters of a finite deformation viscoplasticity model with static recovery. Comput Methods Appl Mech Eng. 2001;191:353–84. https://doi.org/10.1016/S0045-7825(01)00278-X. 93. Lefik M, Schrefler B. Artificial neural network as an incremental non-linear constitutive model for a finite element code. Comput Methods Appl Mech Eng. 2003;192:3265–83. https://doi. org/10.1016/S0045-7825(03)00350-5. 94. Lefik M, Boso D, Schrefler B. Artificial neural networks in numerical modelling of composites. Comput Methods Appl Mech Eng. 2009;198:1785–804. https://doi.org/10.1016/j.cma. 2008.12.036. 95. Jung S, Ghaboussi J. Characterizing rate-dependent material behaviors in self-learning simulation. Comput Methods Appl Mech Eng. 2006;196:608–19. https://doi.org/10.1016/j.cma. 2006.06.006. 96. Man H, Furukawa T. 
Neural network constitutive modelling for non-linear characterization of anisotropic materials. Int J Numer Methods Eng. 2011;85(8):939–57. https://doi.org/10.1002/ nme.2999.
97. Oeser M, Freitag S. Modeling of materials with fading memory using neural networks. Int J Numer Methods Eng. 2009;78(7):843–62. https://doi.org/10.1002/nme.2518. 98. Ghaboussi J, Pecknold DA, Zhang M, Haj-Ali RM. Autoprogressive training of neural network constitutive models. Int J Numer Methods Eng. 1998;42(1):105–26. https://doi.org/ 10.1002/(SICI)1097-0207(19980515)42:13.0.CO;2-V. 99. Al-Haik MS, Garmestani H, Navon IM. Truncated-Newton training algorithm for neurocomputational viscoplastic model. Comput Methods Appl Mech Eng. 2003;192(19):2249–67. https://doi.org/10.1016/S0045-7825(03)00261-5. 100. Hashash YMA, Jung S, Ghaboussi J. Numerical implementation of a neural network based material model in finite element analysis. Int J Numer Methods Eng. 2004;59(7):989–1005. https://doi.org/10.1002/nme.905. 101. Guo Q, et al. In-situ characterization and quantification of melt pool variation under constant input energy density in laser powder bed fusion additive manufacturing process. Addit Manuf. 2019;28:600–9. https://doi.org/10.1016/j.addma.2019.04.021. 102. Bond LJ, Koester LW, Taheri H. NDE in-process for metal parts fabricated using powder based additive manufacturing. In: Smart structures and NDE for energy systems and industry 4.0. Denver: SPIE; 2019. p. 1. https://doi.org/10.1117/12.2520611. 103. Lu QY, Wong CH. Additive manufacturing process monitoring and control by non-destructive testing techniques: challenges and in-process monitoring. Virtual Phys Prototyp. 2018;13(2): 39–48. https://doi.org/10.1080/17452759.2017.1351201. 104. Tan Phuc L, Seita M. A high-resolution and large field-of-view scanner for in-line characterization of powder bed defects during additive manufacturing. Mater Des. 2019;164:107562. https://doi.org/10.1016/j.matdes.2018.107562. 105. Beretta S, Romano S. A comparison of fatigue strength sensitivity to defects for materials manufactured by AM or traditional processes. Int J Fatigue. 2017;94:178–91. https://doi.org/ 10.1016/j.ijfatigue.2016.06.020. 106. Xiaobo C, Jun Tong X, Tao J, Ye J. Research and development of an accurate 3D shape measurement system based on fringe projection: model analysis and performance evaluation. Precis Eng. 2008;32(3):215–21. https://doi.org/10.1016/j.precisioneng.2007.08.008. 107. Kumar A, Jain PK, Pathak PM. Reverse engineering in product manufacturing: an overview. In: Katalinic B, Tekic Z, editors. DAAAM international scientific book, vol. 12. 1st ed. Vienna: DAAAM International; 2013. p. 665–78. 108. Sansoni G, Trebeschi M, Docchio F. State-of-the-art and applications of 3D imaging sensors in industry, cultural heritage, medicine, and criminal investigation. Sensors. 2009;9(1):568–601. https://doi.org/10.3390/s90100568. 109. Lopez E, et al. Evaluation of 3D-printed parts by means of high-performance computer tomography. J Laser Appl. 2018;30(3):032307. https://doi.org/10.2351/1.5040644. 110. DIN Deutsches Institut für Normung e. V. DIN 444. Berlin: Beuth Verlag; 2017. 111. DIN Deutsches Institut für Normung e. V. DIN EN 13068-3. Berlin: Beuth Verlag; 2001. 112. DIN Deutsches Institut für Normung e. V. DIN EN ISO 15708-2. Berlin: Beuth Verlag; 2019. 113. du Plessis A, Yadroitsava I, Yadroitsev I. Effects of defects on mechanical properties in metal additive manufacturing: a review focusing on X-ray tomography insights. Mater Des. 2020;187:108385. https://doi.org/10.1016/j.matdes.2019.108385. 114. De Chiffre L, Carmignato S, Kruth J-P, Schmitt R, Weckenmann A. Industrial applications of computed tomography. CIRP Ann. 
2014;63(2):655–77. https://doi.org/10.1016/j.cirp.2014. 05.011. 115. Bauch J, Rosenkranz R. Physikalische Werkstoffdiagnostik. Berlin/Heidelberg: Springer Berlin Heidelberg; 2017. 116. Honarvar F, Varvani-Farahani A. A review of ultrasonic testing applications in additive manufacturing: defect evaluation, material characterization, and process control. Ultrasonics. 2020;108:106227. https://doi.org/10.1016/j.ultras.2020.106227.
26 In Situ Real-Time Monitoring Versus Post NDE for Quality Assurance of Additively Manufactured Metal Parts
Christiane Maierhofer, Simon J. Altenburg, and Nils Scheuschner
Contents
Introduction
Overview of In Situ Monitoring and In Situ NDE Methods
  Optical, Spectroscopic, and Thermographic Methods
  Acoustic Methods
  Electromagnetic Methods
  Measurement of Particle and Fume Emission
Requirements and Examples of Thermographic Methods for In Situ Monitoring of Different Additive Manufacturing Processes
  General Requirements
  Powder Bed Fusion of Metals with Laser Beam (PBF-LB/M)
  Direct Energy Deposition with a Laser (DED-LB/M)
Comparison of In Situ Monitoring Against Post NDE: Advantages, Challenges, and Outlook
Conclusion and Outlook
Cross-References
References
Abstract
In this chapter, the current state of the art of in situ monitoring and in situ NDE methods in additive manufacturing is summarized. The focus is set on methods that are suitable for making statements about the quality and usability of a component currently being manufactured. This includes methods that can be used to determine state properties like temperature or density, other physical properties like electrical or thermal conductivity, the microstructure, the chemical composition, or the actual geometry, or that enable the direct detection of defects like cracks, voids, delaminations, or inclusions. Thus, optical, thermographic, acoustic, and electromagnetic methods, as well as methods suitable for
investigating particle and fume emission, are presented. The requirements of in situ monitoring methods, with a focus on thermographic methods, are discussed by considering different additive manufacturing processes like laser powder bed fusion (PBF-LB/M) and direct energy deposition (DED-LB/M). Examples of the successful implementation and application of such monitoring methods at BAM are given. The in situ monitoring and NDE methods are compared against post-process NDE methods. The advantages and challenges of in situ methods concerning real-time data analysis and the application of AI algorithms are addressed and discussed.

Keywords
Additive manufacturing · In situ monitoring · In situ NDE · Post NDE · Thermography · Laser powder bed fusion · Direct energy deposition
Introduction

Since more and more additively manufactured parts are applied in safety-relevant systems, e.g., in aerospace [1], the automotive industry, or energy applications [2], reliable quality assurance concepts are becoming more and more important [3, 4]. Besides the qualification of raw materials (powder, wire, foil) and the assurance of stable and reproducible manufacturing processes and systems, this comprises ensuring the integrity and durability of the manufactured parts. As usually only a small number of parts is generated in additive manufacturing, nondestructive evaluation (NDE) methods like computed tomography (CT), laminography, radiography, ultrasonics, thermography, and several surface testing methods are very suitable for post inspection and are preferred over destructive testing [5–8]. However, in some cases, post NDE methods reach their application limits due to complex geometries and rough, nonplanar, and internal surfaces of the built parts [9]. Furthermore, post inspection of parts, whose build times are sometimes very long, provides information about the quality only after production, so the build process can only be optimized iteratively. In situ monitoring and in situ NDE of the part quality is therefore more efficient. Such an in situ investigation should not only acquire the process parameters like laser power, gas pressure, and temperature inside the build space, but should also provide direct or indirect relevant information about the component. This includes melt pool properties as well as built part properties, such as the location and quantification of inhomogeneities like pores, lack-of-fusion defects, cracks, inclusions, and delaminations, and deviations from the planned element composition and geometry. An early decision about the part quality and the possibility of regulating and adapting the build process parameters already during production will save time, material, and energy. Different processes exist for additive manufacturing; they are separated into seven basic categories [10]. In this chapter, we are focusing on the manufacturing of metal parts using the categories of powder bed fusion of metals
with a laser (PBF-LB/M) and of directed energy deposition using a laser (DED-LB/M). These two are single-step processes, i.e., the parts are manufactured within a single working step. This does not exclude further processing steps like hot isostatic pressing (HIP) and grinding. In PBF-LB/M, a thin layer of metal powder is applied to a base plate inside a build chamber under an inert gas atmosphere. The workpiece is built up by selective melting of the metal powder with a laser, controlled by means of a scanner. After each layer, the build platform is lowered, a new powder layer is deposited, and the laser starts again. In DED-LB/M, the workpiece is locally molten by a laser, and fine metal powder is fed into the melt pool by means of a nozzle and a stream of inert gas. The material deposition is controlled by an axis system or a robot. DED-LB/M is used if larger deposition rates and only a limited complexity of the part geometry are required. This process is applied for the generation of 3D functional components, for laser cladding of surfaces, and for the repair of used parts. During both additive manufacturing processes, numerous influences on the process can lead to the formation of a number of different typical defects. These can be of a type well known from conventional welding (e.g., pore formation, lack of side fusion, hot cracks, carbide and oxide inclusions) [11] or of a type that is new and caused by the specific manufacturing method (e.g., lack of powder fusion in the case of PBF-LB/M, an unplanned and anisotropic microstructure, geometrical deviations from the planned structure, high residual internal stresses) [12]. In the following sections, we first give an overview of the state of the art of in situ monitoring methods, mainly applied to the additive manufacturing categories and processes for metals described above. Afterwards, we focus on in situ monitoring methods based on the analysis of the heat radiation generated during the manufacturing process. We define the requirements for different camera systems in various configurations and give examples of the selection of appropriate measurement parameters and data acquisition techniques as well as of techniques for data analysis and interpretation. Finally, we compare in situ monitoring methods against post NDE methods by discussing the advantages and disadvantages of both.
Overview of In Situ Monitoring and In Situ NDE Methods

In situ monitoring and NDE of the additive manufacturing process can refer to different quantities. The current state of the art covers exclusively the monitoring of process variables and of the melt pool [12]. However, the relationships between the individual production variables and their influence on possible component defects and inhomogeneities are very complex. Therefore, the development of innovative in situ measuring methods for the direct recording of the state parameters of the component should make it possible to better understand these complex interrelationships and thus also to better monitor the manufacturing process. In addition, it will become possible to regulate, adapt, or interrupt the manufacturing process if the onset of the formation of large defects is detected, which otherwise could probably not be repaired during the process or afterwards and which could not be tolerated.
The development and qualification of many nondestructive testing methods suitable for process monitoring is the current state of research. Review articles about in situ monitoring mainly assess the principal suitability of various methods for in situ monitoring and NDE [12–16], whereas not all potential methods and techniques have yet been tested in situ (e.g., eddy current testing, laminography, X-ray backscattering). At NIST, a review focuses on powder bed fusion additive manufacturing and on in-process and post-process testing to identify the correlations between process parameters, process signatures, and product quality [13]. It gives a comprehensive overview of process-controllable parameters, predefined process parameters that cannot be changed during manufacturing, and process signatures providing information about the melt pool, track, layer, and product. The literature review is supplemented by modelling and simulation of the manufacturing process to identify correlations between process and melt pool signatures, melt pool and track signatures, track and layer signatures, and layer and product signatures. As a consequence, in October 2018, NIST started a project on AM Machine and Process Control Methods for Additive Manufacturing [17]. Spears and Gold [14] emphasize the impact of feed powder characteristics and of the melt pool properties. Everton et al. [15] give a broad overview of defects and inhomogeneities of all metal AM systems and of how these could be detected in situ using NDE methods; they include examples of in situ investigations from welding. A classification of in situ NDE methods concerning the direct and indirect detection of defects and inhomogeneities was performed by Hirsch et al. [16], who added a detailed discussion of the influence of the spatial and temporal resolution of these methods on the time needed for data recording as well as on the sensitivity of defect detection. Grasso et al. [12] devote particular attention to the development of automated defect detection rules. They give a detailed review of defects and their origin in metal powder bed fusion and concentrate on the measurement of process signatures for fault detection using already available commercial systems (e.g., melt pool monitoring). In 2017, BAM started a project on the development of process monitoring in additive manufacturing (ProMoAM [18]); results of this project are included in the following subchapters. Standardization of in situ monitoring methods is still at the very beginning, and up to now (2020), no standards on in situ monitoring or NDE during additive manufacturing are available. The ISO/TC 261 Additive Manufacturing and the ASTM Committee F42 Additive Manufacturing Technologies have formed a joint working group on the development of additive manufacturing standards. Here, general standards (e.g., terminology, data formats), standards for broad categories of materials (e.g., metals, polymers) or processes (e.g., PBF-LB/M, DED-LB/M), and specialized standards for specific materials (e.g., aluminum alloys, titanium alloys) or applications (e.g., aerospace, medical, automotive) including the AM process are being developed. A standard on post NDE is currently under development (ISO/ASTM DTR 52905 Additive manufacturing – General principles – Nondestructive testing of additive manufactured products) and contains some hints on in situ monitoring. In the following, details about distinct in situ methods based on different physical effects are presented.
Optical, Spectroscopic, and Thermographic Methods

Optical Methods

Optical methods are applied for the determination of deviations of the built part geometry from the planned geometry during and after the manufacturing process, as well as for the detection of impurities and inclusions during the process. The methods are based on the recording of high-resolution layer images with a visual camera from the top, with grazing-incidence side (and front) illumination using light-emitting diodes (LEDs) after the solidification of each layer [19, 20]; on the recording of images with a photodiode using the optical path coaxial to the laser in PBF-LB during the manufacturing process [21]; and on 3D digital image correlation (DIC) during and after manufacturing using commercial DIC systems [22, 23]. 3D information can be gained with all systems, while the highest spatial resolution of a few μm is obtained with DIC. Future developments include data analysis based on edge detection and spectral analysis, performed with machine learning and deep learning algorithms. Currently, only methods based on layer images (e.g., powder bed monitoring [24], layer control systems [25]) are commercially available for in situ monitoring.

Optical coherence tomography (OCT) is more complex than the methods described above. Commonly, a beam splitter divides light from a separate laser or a superluminescent diode with a short coherence length into two beam paths: one is directed to the sample and one along a reference path. The light reflected from the sample is superimposed with the reference light. Depth-resolved information is obtained by varying the length of the reference path, and 3D images are generated by lateral scanning. The depth resolution is determined by the coherence length. Thus, for opaque materials such as metals, OCT enables the spatially resolved determination of the height of the solidified layer as well as of the powder layer [26, 27]. In addition to layer thickness measurement, information about the surface roughness is also gained [28]. An axial (depth) resolution of about 1 μm and a lateral resolution of a few μm can be achieved. For fixed optical systems with mirror scanners, the size of the area to be investigated is limited, since for larger areas the build plane lies outside the depth of field of the OCT system.

Spectroscopic Methods

Although optical emission spectroscopy (OES) experiments have been performed successfully during laser welding with a CO2 laser [29], experimental investigations in a DED-LB/M system on AISI 316 L and Ti-6Al-4V have shown that the energy density of the laser is too low to excite emissions from elements and molecules. Only at very high laser energy densities and at discontinuities does the heat increase sufficiently to excite spectral emission [30]. Other studies on AISI 304 stainless steel showed emission spectra even at lower laser power [31]. Thus, reliable spectra are not obtained for all applications, and additional excitation sources are needed. Laser-induced breakdown spectroscopy (LIBS) measurements with a separate laser system for the excitation of emission have already been performed within the melt pool during DED-LB/M manufacturing without influencing the sample structure [32].
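As a brief aside to the OCT discussion above: for a source with a Gaussian spectrum, the axial resolution is commonly estimated from the coherence length via the standard relation (the example numbers are illustrative):

$$\Delta z \approx \frac{2 \ln 2}{\pi} \, \frac{\lambda_0^{2}}{\Delta\lambda}$$

With a center wavelength of λ0 = 840 nm and a spectral bandwidth of Δλ = 300 nm, this gives Δz ≈ 1 μm, consistent with the axial resolution quoted above.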
Up to now, there are only very few in situ applications of both methods. The methods could be very valuable for the control of intentionally graded compositions.
Passive Thermographic Methods
For a complete understanding of the additive manufacturing process and the resulting material properties and microstructure, the thermal history of the built part plays a decisive role. The determination of the spatial and temporal temperature evolution by means of infrared and visual sensors and cameras is therefore relevant for a profound documentation. However, the boundary conditions of the processes make it difficult to measure temperatures using infrared (IR) cameras. Highly dynamic temperature changes and steep temperature gradients, phase transitions with accompanying emissivity changes, and the limited accessibility of commercial additive manufacturing systems are examples of these difficult conditions in practice. Nevertheless, the determination of temperatures and other process-describing parameters is being researched by many groups worldwide [12, 15, 33–35]. The challenges and starting points for thermography differ considerably for the various additive manufacturing processes. Some examples are presented below.

Optical Tomography
For optical tomography (OT), the build platform is continuously monitored during the manufacturing process with a high-resolution CCD or CMOS camera. The radiation intensity of the welding process is recorded in a spatially resolved, integrated manner based on the continuous exposure during the production of a single layer [36, 37]. With OT, successful in situ investigations have already been performed [38]; the existence of predicted defects was proven by CT and metallography. Further developments are related to a better spatial resolution (e.g., by using a high-resolution camera). A remaining challenge is the reconstruction of information from the integrated intensities, which might be related to multiple exposures of light. Thus, it is currently not possible to derive the maximum intensity at a pixel position seen in one single exposure. A limited temporal resolution of the process can be achieved by an approximately gapless acquisition of multiple images during the single layer illumination.

Active Thermography with Laser Excitation
Laser thermography enables the detection of flaws that are much smaller than the resolution of the measuring system. Detectable flaws should be located in the area close to the surface (e.g., within the last 1–3 layers in additive manufacturing with powder bed processes), but do not necessarily have to be open towards the surface. In this NDE method, a focused laser spot (or laser line, or pattern of any structure) is scanned over the component surface. Cracks or other inhomogeneities in the area near the surface impede the lateral heat propagation of the absorbed laser light, resulting in a temperature distribution in the area of the laser focus that deviates from the undisturbed case. The temperature distribution can be detected either with a co-scanned IR detector (flying spot laser thermography [39, 40]) or with a fixed IR camera, in which case temperature calibration of the system is not required [41, 42].
A patent from MTU Aero Engines AG (Munich, Germany) exists for the in situ use of laser thermography. Here, the heat distribution is detected in an off-axis configuration by an IR camera [43]. In principle, in situ laser thermography can be implemented by using the build laser and a single sensor or a short-wave IR (SWIR) camera in an on-axis configuration.
Acoustic Methods

Ultrasonics
In situ ultrasonic investigations can be performed either by coupling a single transducer or a transducer array to the rear side of the build platform [44], or by air-coupled ultrasonics using an air-coupled transducer as transmitter and a laser vibrometer as receiver [45]. In the former case, measurements are performed in pulse-echo mode, and either the direct reflection from inhomogeneities or the scattering of shear waves from the evolving melt pool is analyzed. For a direct coupling of the transducers, access to the rear side of the build platform is required. Thus, the method is only suited to investigate small samples with a simple geometry, e.g., samples built as references in addition to the real part. The application of phased arrays has a large potential to obtain 3D data. Air-coupled ultrasonics enables the assessment of the heat affected zone and the detection of small voids. In situ ultrasonic investigations can be performed in all additive manufacturing systems.

Spatially Resolved Acoustic Spectroscopy (SRAS)
For the generation of surface waves, a pulsed laser illuminates the surface of the part to be investigated with, e.g., a stripe pattern. By the photoacoustic effect, this pattern is converted into an acoustic surface wave whose wavelength corresponds to the distance between the stripes. Another laser (e.g., a laser vibrometer) scans the surface, and the reflections are detected with a photodiode. The frequency of the surface wave is measured, and the propagation velocity is displayed with spatial resolution. Different crystal orientations as well as pores in the surface can be visualized [46]. Thus, this method is suitable to investigate the microstructure, including the detection of grain refinements and of different phases of a material; it is not possible to resolve individual grains or their boundaries. In addition, the acoustic wave velocity is correlated with Young's modulus. For in situ measurements, the velocity is correlated to the temperature of the heat affected zone. The optical images show surface-breaking defects like cracks and pores. Up to now, SRAS systems have been implemented in PBF-LB/M systems by using the same optics as the processing laser [46]. The measurements are performed either during layer growth or when one layer is finished. In principle, SRAS can be implemented in all AM systems. The surface roughness should be less than 100 nm when using a single diode (specular reflection) and less than 2 μm when using a speckle knife edge detector (SKED), which consists of an array of single diodes (diffuse reflection) [47].
Acoustic Emission
Acoustic sensors for recording acoustic emission (AE) can be mounted at any location inside the AM system (array of contact sensors below the build platform, contact sensor at the build surface, free air sensor above the platform) [48]. As sensors, microphones or fiber-optical sensors are used [49]. The main aim of the application of AE is the detection and location of cracks, but none of the publications about AE in AM show examples of that. A major influencing factor is the environmental noise generated by the AM system. A large step forward is the analysis of AE data using deep learning and supervised deep learning in combination with in situ radiography [49, 50]. In the latter publication, three different sets of additive manufacturing parameters could be distinguished using the AE data with confidences of up to 91%. With supervised deep learning using the AE signal of a laser welding process and a classification of the conditions conduction welding, stable keyhole, unstable keyhole, blowout, and pores, accuracies of up to 99%, correlated to in situ radiography, are reached. A patent on acoustic monitoring is held by Renishaw plc (Wotton-under-Edge, UK) [51].
Electromagnetic Methods
Eddy current testing enables the detection of cracks, pores, and lack-of-fusion defects close to the surface, as well as the determination of the electrical conductivity and magnetic permeability of the topmost layer. While the powder has no influence on the measurement, the surface roughness may strongly disturb the detectability of cracks [52]. The eddy currents induced in an electrically conductive sample by means of an alternating magnetic field are detected via the changes in the impedance of a coil system in a contact-free (but very close to the surface), spatially resolved manner. These eddy currents change near cracks and other defects. The magnetic field can also be measured with giant magnetoresistance (GMR) or tunnel magnetoresistance (TMR) sensors [52]. These sensors can be arranged in a linear array and mounted on the recoater in powder bed processes, as described in a patent by Rolls Royce PLC (London, UK) for the eddy current application [53]. Up to now, only pre-studies and no real in situ applications are known [54].
Measurement of Particle and Fume Emission
Knowledge regarding particle and fume emission outside AM systems is important for evaluating potential exposure-related health hazards for the operators [55, 56]. The emission inside the AM chambers, especially for PBF processes, leads to contamination of the build chamber and of optics (e.g., laser window, optionally installed measurement systems) and influences the build process itself by, e.g., laser absorption [57, 58]. Investigations of the particle emissions focus on the determination of particle numbers and their distribution over space and time, masses, sizes, and the identification of elements and molecules. Particles with sizes between 10 nm and a few μm can be detected and quantified.
It has been shown that the particle emission measured inside a PBF chamber depends on the part position, powder layer thickness, and scanning vector directions [57]. Investigations of the influence of shielding gas flow on the melt pool geometry and on the part quality show that a decreasing flow rate leads, below a certain threshold, to inefficient plume removal. Increased interactions of the laser radiation with the plume lead to instabilities of the melt pool and finally to pores and inhomogeneities within the built structure [58]. This effect had already been observed some years earlier during laser welding [59].
Requirements and Examples of Thermographic Methods for in Situ Monitoring of Different Additive Manufacturing Processes

General Requirements
The use of camera systems to observe the manufacturing process requires optical accessibility to the surface of the part under construction. For sealed systems with inert gas atmosphere such as the PBF-LB/M process, a suitable window must be available, often in combination with internal deflection mirrors, so that the entire component platform can be imaged. In all cases, laser protection must be considered when using lasers as a heat source.
The spatial resolution of the applied IR or visual camera systems should be at least three times below the smallest dimension of the investigated thermal structures, e.g., the melt pool of the respective process. Otherwise, spatial under-sampling leads to false intensity readings. The spatial resolution can be calculated from the detector size, the focal length of the imaging system including any spacer rings, the distance of the camera to the object surface, and the angle between the optical axis of the camera and the surface normal. The larger the angle between the optical axis of the camera and the surface normal, the worse the spatial resolution in the plane to which the angle refers. However, for a given detector size, the spatial resolution also determines which field of view (FOV) can be captured with the camera. In addition, it must be taken into account that the depth of field of the camera system is limited. With increasing FOV and increasing angle to the surface normal, areas outside the optical axis might not be imaged sharply. With a fixed camera having a detector size of 640 × 512 pixels and a spatial resolution of 100 μm, a FOV of approximately 64.0 × 51.2 mm² can be captured. This FOV is larger if the optical axis of the camera is not parallel to the surface normal, and smaller if sub-windows are used.
If information on the melt pool geometry is to be derived from the recorded data, then the scanning velocity of the laser and the frame rate and integration time or time constant of the camera must also be considered. If temperatures in the melt pool are to be analyzed, the upper limit of the temperature measuring range should be significantly higher than the temperature of the liquid melt pool (melting ranges, e.g., for Inconel 718 up to 1703 K, for AISI 316 L up to 1723 K, and for AISI 2205 up to 1738 K). Locally, temperatures up to the evaporation point can be reached in the melt pool (up to 3170 K for steel).
The temperature of the melt pool and also its size and shape have an influence on the pore formation. The lower limit of the temperature measurement range is determined by the extent to which cooling processes are to be observed. The cooling rates have a significant influence on the microstructure of the component. Due to very fast cooling processes in additive manufacturing, not only the cooling rate up to shortly after solidification but also down to room temperature (or down to the temperature of a possibly preheated substrate) plays a role.
When selecting the temperature measurement range, it must also be taken into account that the emissivity of the surface depends on the aggregate state of the material (solid or liquid), on the temperature, on the observation angle, and on the presence of oxides or other impurities on the surface. In the PBF process, for the solid phase, the emissivity also depends on whether the material is still powdery or has already been melted and re-solidified. Typically, for most metals, the emissivity is above 0.3 for the powder, above 0.2 for the solidified material, and below 0.15 for the liquid in the melt pool [60, 61]. This should be considered in the selection of the temperature measurement range, as it significantly reduces the required upper limit as well as the lower limit.
Calibrated temperature measurements are required, for example, when:
• Different additive manufacturing processes of one system are to be compared to each other
• Different additive manufacturing systems are to be compared to each other
• Real processes are to be compared with simulation calculations
• Microstructure predictions are required
• Or preheating strategies are to be evaluated
As the emissivity depends on that many parameters, it is very time and resource consuming to experimentally determine all emissivity values for all parameters and all materials. In most of the published studies, and also in most of the examples below, the solidification plateau observed in temperature versus time curves or temperature versus position curves close to the melt pool is used to correct the apparent temperatures or intensities using literature values of the solidus and liquidus temperatures [62]. But as these literature values differ between sources and seem to depend on the process itself, the accuracy of this temperature correction is very low. At the same time, the emissivity of the solid material can change due to oxidation of the surfaces despite the inert gas atmosphere. In addition, this correction is limited, as only the high temperature range of the solidified material is covered. Further attempts are based on the calibration of larger temperature ranges of the solidified material using thermocouples (e.g., [60, 63]), but these studies show as well that a large effort is required for a reliable calibration. Future approaches therefore pursue the use of multispectral techniques [64]. To predict the near-real temperature evolution, supervised training of an artificial neural network should be possible, using a priori knowledge about the emissivity together with multispectral, temporally resolved imaging data of the build process.
If the real temperature evolution is known, defects and inhomogeneities still cannot be derived directly, as already described above. Deviations from simulated or expected temperature distributions might be a hint to a higher probability of the occurrence of defects, but additional information about defects and inhomogeneities, micromechanical structures, and mechanical properties of the built structure is needed and has to be correlated with the thermal signatures. Finally, the monitoring of the temperature evolution of each layer of the built part generates a large amount of data. Depending on spatial and temporal resolution, storage depth per measurement value, lateral size of the surface, and the number of layers of the part, up to several terabytes of data might be generated per built part, which must be stored in real time during the build process. If the monitoring is to be used for adapting or regulating the manufacturing process, not only data storage but also data analysis has to be performed in real time. Thus, new concepts based on the selection and detection of key features in the recorded data should be developed soon.
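To make the geometric and data considerations of this subsection concrete, the following minimal Python sketch combines the example values given above (detector size, 100 μm/pixel resolution, a typical scan speed and frame rate, and the 192 × 176 pixel sub-window used in the PBF-LB/M examples below); the storage depth and the build duration are purely illustrative assumptions:

```python
# Minimal sketch of the camera and data budget discussed above.
# All values come from this section except those marked as assumptions.

detector_px = (640, 512)   # detector size in pixels
ifov_mm = 0.100            # spatial resolution: 100 um per pixel

# Field of view for a camera looking along the surface normal
fov = (detector_px[0] * ifov_mm, detector_px[1] * ifov_mm)
print(f"FOV: {fov[0]:.1f} x {fov[1]:.1f} mm^2")        # -> 64.0 x 51.2 mm^2

# Laser travel between two consecutive frames (under-sampling check)
scan_speed_mm_s = 700.0    # typical PBF-LB/M scanning velocity
frame_rate_hz = 900.0      # MWIR camera frame rate
print(f"travel per frame: {1e3 * scan_speed_mm_s / frame_rate_hz:.0f} um")
# -> ~780 um, i.e., about 8 pixels at 100 um/pixel

# Raw data rate for the 192 x 176 pixel sub-window used below
bytes_per_value = 2        # assumption: 14-bit data stored in 16 bit
rate_mb_s = frame_rate_hz * 192 * 176 * bytes_per_value / 1e6
build_time_h = 20.0        # assumption: build duration of a mid-size part
volume_tb = rate_mb_s * 3600 * build_time_h / 1e6
print(f"data rate: {rate_mb_s:.0f} MB/s, about {volume_tb:.1f} TB per build")
```

Even this small sub-window yields several terabytes per build, which illustrates why real-time feature extraction, rather than raw-data storage, is the direction suggested above.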
Powder Bed Fusion of Metals with Laser Beam (PBF-LB/M)

Special Requirements
Powder bed fusion of metals occurs in sealed chambers under inert gas atmosphere. Therefore, for this process, the spatial and temporal observation of the temperature evolution using IR, SWIR, and visual cameras is the most beneficial. In order to protect the optics and not to influence the construction process, these camera systems are in most cases mounted outside the build chamber. Typical scanning speeds of the PBF-LB/M process laser are between 0.4 m/s and 2 m/s, which places high demands on integration time, frame rate, and spatial resolution of the camera systems. Thermographic process monitoring can be carried out coaxially using the optics of the scanner unit or statically by imaging the build platform through a window, see Fig. 1a, b, respectively. Coaxial imaging can only be done with camera systems whose wavelength range is adapted to the specification of the scanner unit (VIS or SWIR cameras), and, depending on the imaging optics, only a smaller image section around the melt pool can be captured. Here, however, it is then possible to characterize the melt pool with high spatial resolution. A fixed camera has the advantage that, e.g., a mid-wave infrared (MWIR) camera can be used to monitor not only the melt pool but also the thermal development of the built part far from the melt pool. For this purpose, flanges must be available at the build chamber for aligning the camera so that the optical axis is, in the best case, parallel to the surface normal of the build platform. If this is not possible, then mirror systems inside the build chamber can support the imaging, see Fig. 1b. Via beam splitters, a single beam path can be used to simultaneously monitor the build process with different cameras. This could be, e.g., a MWIR camera and a camera for optical tomography (see again Fig. 1b). Additionally, or alternatively, a SWIR camera can be used. If the spatial resolution is sufficient to characterize the melt pool, insights into process stability can be gained from its geometry and temperature distributions.
Fig. 1 Schematic presentation of the PBF-LB/M process (a) with a coaxial camera using the same optical path as the laser or (b) with a fixed camera. Both systems are mounted outside the build chamber
Fig. 2 Visualization of the influence of the spatial resolution. (a) High spatial resolution of about 100 μm/pixel. (b) Low spatial resolution (10× less than in (a))
The influence of the spatial resolution is visualized in Fig. 2. The thermogram in Fig. 2a has a higher resolution, which is slightly smaller than the melt pool size (e.g., 100 μm/pixel in relation to a melt pool width of approximately up to 100 μm and a melt pool length of about 1 mm), while Fig. 2b shows a spatial resolution which is 10× lower than in Fig. 2a, about 1 mm/pixel. Within the latter thermogram, no information can be obtained about the position and size of the melt pool. Additional information is gained by evaluating the formation of spatter particles [65]. Furthermore, methods for correlating thermal information with real defects, detected and characterized with nondestructive and destructive methods, are being developed (see for example [38]) and are described below.
With a frame rate of the MWIR camera of about 900 Hz and a scanning velocity of the laser of about 700 mm/s, the laser passes a distance of about 780 μm between two thermograms. As the melt pool length including the solidification plateau is within the same magnitude, the temporal resolution of the temperature versus time curves within the PBF-LB/M process for MWIR cameras is usually not high enough to resolve the solidification plateau during cooling down. Sometimes the temperature versus time curves are even superimposed with further heating during the passing of the laser nearby, as will be shown below. In this case, for temperature correction, a diagram displaying the temperature versus position can be used, as the integration time for one thermogram is usually significantly shorter than the time between two frames. Figure 3a displays the temperature profile along the melt pool in scanning direction (see Fig. 3b) with a spatial resolution of 100 μm/pixel. With a scanning velocity of about 700 mm/s and an integration time of 90 μs, the laser passes a distance of about 63 μm during the recording of one thermogram, which is less than the spatial resolution. In the diagram, the laser is currently at the position of the highest intensity and will be moved to lower distance values. For the calibration of the temperature scale, a routine as described in Ref. [60] has been applied here, assuming an emissivity of 0.2. This routine is based on the temperature calibration of the whole camera system with the respective lens (performed by the vendor), consideration of reflections from the chamber environment having a temperature of 295 K, and the wavelength-dependent characteristics of the following elements: the sensitivity of the camera detector, the transmissivity of the lens, the optical filters, the beam splitter, and the window, as well as the reflectivity of the used mirrors. The mean value of liquidus and solidus temperature at 1660 K is marked by a red horizontal line. Thus, the shoulder or short plateau just below this line corresponds to the solidification process of the material during cooling down.
Fig. 3 (a) Temperature profile across the melt pool in scanning direction from the thermogram shown in (b) with a spatial resolution of about 100 μm/pixel. The solidification plateau is just recognizable with 4–5 pixels. The laser is moving to lower distance values. (b) Thermogram recorded with an integration time of 90 μs. The laser is scanning from top to bottom. To emphasize the dimensions of the melt pool, the color scale is changed at 1550 K, slightly below the averaged liquidus and solidus temperature of 1660 K
A thermogram of the melt pool and the surface of the whole built part is shown in Fig. 3b. Here, for temperatures above 1550 K, the color scale is replaced by a gray scale; thus, the melt pool and its mushy zone are highlighted in black-and-white. In the diagram as well as in the thermogram, the melt pool appears with a length of about 1 mm. Thus, the melt pool covers 10 pixels, including the solidification plateau with approximately 4–5 pixels, which is just enough for its identification. Nevertheless, it must be considered that typical melt pool widths are between 200 μm and 500 μm. Thus, a spatial resolution of 100 μm/pixel is a bit too low for a reliable visualization of the real maximum and plateau temperatures. Nearly accurate values can only be expected when the melt pool is oriented and positioned in such a way that it is imaged centrally on a detector element; the thermogram shown in Fig. 3b was therefore chosen carefully. Usually, lower temperatures of the mushy zone are measured, since the intensity of the thermal radiation is distributed over multiple detector elements.
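The identification of the solidification plateau in such a temperature versus position profile can be automated. The following minimal sketch simply counts the profile samples lying in a band just below the mean of liquidus and solidus temperature; the 60 K band width and the example profile values are assumptions for illustration only:

```python
import numpy as np

def plateau_pixels(profile_K, t_sol_liq_K=1660.0, band_K=60.0):
    """Indices of profile samples lying in the solidification band.

    profile_K: 1-D corrected temperature profile along the scanning
    direction (one value per pixel; 100 um/pixel in this example).
    The band just below the mean of liquidus and solidus temperature
    marks the solidification plateau / mushy zone.
    """
    in_band = (profile_K > t_sol_liq_K - band_K) & (profile_K <= t_sol_liq_K)
    return np.flatnonzero(in_band)

# Placeholder profile: replace with a line cut through the melt pool.
profile = np.array([900, 1200, 1500, 1640, 1630, 1625, 1610, 1700, 2100, 2400.])
idx = plateau_pixels(profile)
print(f"plateau: {idx.size} pixels, about {idx.size * 0.1:.1f} mm")  # 4 pixels
```

In practice, the band width would be tuned to the melting range of the material and to the noise of the corrected temperatures.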
Example 1: Detecting Artificial and Natural Defects
In the following, in situ investigations of a test specimen containing artificial and natural defects are presented. This test specimen consists of AISI 316 L austenitic stainless steel and is described in more detail in Ref. [38]. As shown in the sketch in Fig. 4a, the specimen consists of four domains, of which only the domains A, B, and C are considered in the following. As a scanning strategy, a rotation of the scanning vectors by 90° from layer to layer was applied, without any contour scanning. The hatch distance was 0.12 mm with a border offset of 0.08 mm for separating the cubes B and C. The laser focus diameter was about 80 μm, and the layer thickness was set to 50 μm. For the three domains A, B, and C, the surface laser energy density (laser power divided by the product of laser beam diameter and scanning velocity) and the volumetric energy density (laser power divided by the product of hatch distance, scanning velocity, and layer thickness) were varied via the laser power and scanning velocity, as summarized in Table 1.
Fig. 4 (a) Sketch of the test specimen with domains of different manufacturing parameters. The manufacturing parameters are described in Table 1 [38]. (b) Vertical cross section of a μCT reconstruction of the test specimen within the domains A, B, and C (voxel size of 7.12 μm) [38]. (with permission of MDPI)
It was intended to build a base cuboid A with dimensions of 10 × 10 × 5 mm³ with optimum manufacturing parameters, a cube B with low volumetric energy density for inducing lack-of-fusion defects, and a cube C with high volumetric energy density for generating keyhole porosity. Both cubes should have dimensions of 5 × 5 × 5 mm³. In addition, the domain A contains two artificial defects: for one defect, named cavity, an area of about 0.6 × 3 mm² was not exposed to the laser for a height of 12 layers; for the other defect, named corner defect, an area of 1 × 2.04 mm² was not exposed to the laser in one layer. Both defects were open to the surface such that loose powder could be removed. Figure 4b shows a μCT cross section within a plane of the domains A, B, and C. Most defects (pores) could be found within cube C. Within cube B, some areas with voids due to unmolten powder and delaminations could be located. Domain A appears mainly homogeneous without obvious defects. Please note that the artificial defects are at different positions and could not be visualized within this cross section. The buildup of this test specimen was monitored, among others, with a MWIR camera (2–5.7 μm) using a sub-window size of 192 × 176 pixels and a lens with a focal length of 100 mm, resulting in a spatial resolution of 100 μm/pixel. A frame rate of 900 Hz could be realized with an integration time of 90 μs and a correlated blackbody calibration range from 623 K to 973 K. Figure 5a shows a thermogram within the domain A and within an odd layer with horizontal laser scans moving from top to bottom. Note that no temperature correction has been applied here; thus, only an apparent temperature scale with units of apparent Kelvin (Ka) is shown. The insets (Fig. 5b–d) show three sections at the position of the laser recorded during three subsequent frames. This visualizes the movement of the laser spot of about 8 pixels, and thus of about 0.8 mm, between two consecutive frames. This undersampling must be considered if high and especially maximum temperature features are to be analyzed.
Table 1 Manufacturing parameters of the different domains within the test specimen shown in Fig. 4a
Parameter | Basis A | Cube B with low energy density | Cube C with high energy density
Scanning velocity in mm/s | 700 | 700 | 300
Laser power in W | 275 | 150 | 275
Laser beam diameter in μm | 80 | 80 | 80
Hatch distance in mm | 0.12 | 0.12 | 0.12
Layer thickness in μm | 50 | 50 | 50
Surface laser energy density in J/mm² | 4.9 | 2.7 | 11.5
Volumetric energy density in J/mm³ | 65.5 | 35.7 | 152.7
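The energy density definitions given above can be checked directly against Table 1; a minimal sketch with all inputs taken from Table 1 (lengths in mm):

```python
# Surface and volumetric energy density as defined in the text:
#   E_surf = P / (d * v)       in J/mm^2
#   E_vol  = P / (h * v * t)   in J/mm^3
def energy_densities(P_W, v_mm_s, d_mm, h_mm, t_mm):
    e_surf = P_W / (d_mm * v_mm_s)
    e_vol = P_W / (h_mm * v_mm_s * t_mm)
    return e_surf, e_vol

domains = {  # laser power in W and scanning velocity in mm/s from Table 1
    "Basis A": (275, 700), "Cube B": (150, 700), "Cube C": (275, 300),
}
for name, (P, v) in domains.items():
    e_s, e_v = energy_densities(P, v, d_mm=0.08, h_mm=0.12, t_mm=0.05)
    print(f"{name}: {e_s:.1f} J/mm^2, {e_v:.1f} J/mm^3")
# -> Basis A: 4.9 J/mm^2, 65.5 J/mm^3
# -> Cube B: 2.7 J/mm^2, 35.7 J/mm^3
# -> Cube C: 11.5 J/mm^2, 152.8 J/mm^3 (matches Table 1 within rounding)
```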
Fig. 5 (a) Thermogram of an odd layer with horizontal scan vectors within the middle height of domain A of the test specimen. The crosses mark the positions of the temperature versus time curves in (e). (b)–(d): Sections of three consecutive frames. The position of the laser (maximum in the thermograms) moves by approximately 8 pixels, and thus by about 0.8 mm, from frame to frame [66]. (e) Time evolution of the apparent temperature at the positions (single pixels) marked by crosses in (a). The time axes were shifted for each position to match the maximum at 0 s [66]. The threshold value of the TOT analysis shown in Fig. 7 is indicated by a dashed line at an apparent temperature of 700 Ka
Figure 5e shows the apparent temperature versus time diagrams at three different pixels marked by colored crosses in Fig. 5a. These positions were not directly aligned vertically, as they were chosen in such a way that the melt pool reached its maximum temperature while passing; thus, there is no influence of the described subsampling effect. At each position, the time was set to zero as the laser passed, and the apparent temperature rose multiple times; thus, each volume element at the measurement positions was heated up several times during the layer exposure. As expected, with the selected frame rate, it was not possible to resolve a solidification plateau in the temperature versus time curves. The three curves show significant differences due to their selected positions: The blue curve corresponds to a position along the first hatch of this layer (upper rim of the specimen); therefore, no preheating occurred beforehand, and the first maximum is observed at time zero, where the laser passes. In addition, because of the missing preheating by preceding hatches, the maximum temperature reached is below the values of the other curves. The red curve corresponds to a position in the middle of the sample; thus, a preheating is visible at negative times. The cooling down is similar to that of the blue curve, with several reheatings at the local measurement position. The green curve corresponds to a position at the last scan vector of the specimen. Thus, the preheating is comparable to the middle case (red curve), but the cooling behavior differs. After the laser passing at time zero, no further reheating occurs [66]. At temperatures above approximately 700 Ka, the cooling is slower than in the other cases, which can be explained by the missing cooler solidified material below the powder at the lower rim of the specimen and thus by a much lower thermal conductivity. For times later than approximately 50 ms, the missing reheating by further scan vectors leads to an increased cooling rate.
First, these thermograms and curves demonstrate how the geometry of the specimen influences the thermal history. Second, it is shown that, although the measurement parameters of the MWIR camera were set to its limits in a compromise of high spatial and temporal resolution, the thermal history at each position of the sample could not be recorded sufficiently due to the high process dynamics and the strong focus of the laser beam. Thus, instead of analyzing single temperature versus time curves, it is more appropriate to analyze the time a specific area of the specimen spends at a temperature above a chosen threshold value (time over threshold, TOT) [34, 66]. As illustrated in the following, this is a sensitive quantity that condenses the process dynamics into an easy-to-calculate feature, which enables the visualization of the position of probable artificial and natural defects; a minimal computation sketch is given below.
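A TOT map can be computed from the stack of thermograms of one layer by counting, per pixel, the frames above the threshold; a minimal numpy sketch (frame rate, sub-window size, and the 700 Ka threshold are taken from this example, while the random array is only a placeholder for recorded data):

```python
import numpy as np

def time_over_threshold(frames, threshold_Ka, frame_rate_hz):
    """TOT per pixel: time (in ms) a pixel spends above the threshold.

    frames: 3-D array (n_frames, height, width) of apparent temperatures
    recorded during the exposure of one layer. Counting frames above the
    threshold and multiplying by the frame period approximates the TOT.
    """
    frame_period_ms = 1000.0 / frame_rate_hz
    above = frames > threshold_Ka           # boolean mask per frame and pixel
    return above.sum(axis=0) * frame_period_ms

# Placeholder data: replace with the recorded thermogram stack of one layer.
frames = np.random.uniform(300.0, 900.0, size=(1000, 176, 192))
tot_ms = time_over_threshold(frames, threshold_Ka=700.0, frame_rate_hz=900.0)
print(tot_ms.shape, tot_ms.max())           # 176 x 192 TOT image in ms
```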
Fig. 6 (a)–(c): TOT images (in ms) for a threshold value of 700 Ka for layers 61, 75, and 77 in domain A of the test specimen. (d)–(f): Horizontal μCT cross sections of the respective layers. (g) The inset shows a μCT cross section with a vertical orientation to the single layers. All subfigures except subfigure (g) are from Ref. [38]. (with permission of MDPI)
The TOT analyses with a threshold value of 700 Ka for the layers no. 61, 75, and 77 in domain A of the specimen (standard process parameters; these layers correspond to the defect named cavity as described above) are displayed in Fig. 6a–c. For each of these layers, the respective μCT cross section is shown as well (Fig. 6d–f). As the powder was not exposed to the laser at this position in the layers 61–74, the TOT in Fig. 6a is equal to zero, while the μCT cross section in Fig. 6d shows a well-defined cavity, which seems to be closed at the surface. Figure 6g displays a μCT cross section at this position, oriented vertically, i.e., tilted by 90° relative to the horizontal cross section in Fig. 6d. Here it becomes obvious that the cavity is open to the surface as intended, but the bottom and top surfaces of the cavity have bent upwards at the edge. Layer 75 was the first one which was completely exposed to the laser again. The TOT image of this layer in Fig. 6b shows a heat accumulation above the cavity, which can be explained by the reduced heat conduction into the loose powder material below. In layer 77 (see Fig. 6c), there is still an enhanced TOT in comparison to the surrounding area, but it is much reduced in comparison to layer 75. The μCT cross sections of these two layers in Fig. 6e, f appear homogeneous; thus, no defects could be detected in these layers above the cavity with the given voxel size. As shown in the vertical μCT cross section of domain B with low volumetric energy density in Fig. 4b, larger clusters of lack-of-fusion defects appear at the middle height of this cube. The TOT images of two layers at these heights of domain B are visualized in Fig. 7a, b, while the corresponding horizontal μCT cross sections are displayed in Fig. 7c, d. Within the images of the TOT values, contour lines at a TOT of 30 ms are drawn. These contour lines have been transferred to the horizontal μCT cross sections and show a clear correlation with larger voids. Thus, higher TOT values are a hint to natural defects.
Fig. 7 (a) and (b): TOT images (in ms) with a threshold value of 700 Ka of two layers within the domain B with low volume energy density. (c) and (d): μCT cross sections corresponding to the positions of the layers shown in (a) and (b). All subfigures are from Ref. [38]. (with permission of MDPI).
However, it must be considered that, although a good correlation could be achieved, more defective areas are visible within the μCT cross sections than are detected within the TOT images. This is a hint that not only the TOT, but further features of the temperature versus time curves, e.g., the temporal and spatial gradients, should be analyzed for a more reliable defect detection. However, for an analysis of these properties, a higher temporal and spatial resolution is required, as already discussed above.
Directed Energy Deposition with a Laser (DED-LB/M)

Requirements
Due to the low traverse speed (typically ~1 m/min) and a less focused laser beam (diameter of about 3 mm), the DED-LB/M process is very well suited for monitoring with IR cameras. Depending on the IR camera system used (detector size, lens, frame rate), high spatial resolutions of down to 30 μm can be achieved, and thus information, for example, on melt pool size, spatter from the melt pool, local temperature gradients, as well as heating and cooling processes can be obtained [67]. Since the inert gas atmosphere is usually formed very locally around the melt pool below the powder nozzle, the DED-LB/M system is basically open if the conditions for laser protection are met. This allows very flexible use of camera systems that are fixed in the vicinity of the build plate, see Fig. 8a. This configuration is especially suited if the part is moved below the laser. Alternatively, camera systems that are mounted on the manipulator and are moved along during the manufacturing process can be used, as shown in Fig. 8b. The latter has the advantage that the entire build process can be monitored. In addition, there are no limitations of the measuring range due to the limited FOV of the camera. Both MWIR and SWIR cameras are suitable for monitoring the process in both configurations, although MWIR cameras tend to be larger, complicating mounting on the manipulator. One challenge lies in the registration of the measurement data to the respective volume element of the built part.
Fig. 8 DED-LB/M process for additive manufacturing using for in situ monitoring (a) a camera which is fixed on the build platform and (b) a camera which is fixed on the manipulator and is moved along during the build process
Example 1: Comparison of VIS, SWIR, and MWIR Camera Systems in a Configuration Fixed to the Build Platform
In this example, commercially available VIS, SWIR, and MWIR cameras are compared to each other to decide which one is best suited for the evaluation of the DED-LB/M processing of AISI 2205 duplex stainless steel and of AISI 316 L stainless steel [35]. The cameras were mounted on the build platform as shown in Figs. 8a and 9. The build geometry was a stack of single lines (i.e., a wall) consisting of several layers, which were welded unidirectionally without pauses between the layers. In a first experimental run, a VIS camera of type Photron Fastcam SA4 (Photron, Tokyo, Japan; high speed camera with 32 GB internal memory and 12 bit dynamic range, of which only 8 bit were used here) with a narrow band pass filter between 807 nm and 817 nm, operated at a frequency of 10 kHz with a subframe resolution of 768 × 240 pixels and a pixel resolution of 48 μm, was compared against a MWIR camera of type InfraTec ImageIR 8300 (InfraTec GmbH Infrarotsensorik und Messtechnik, Dresden, Germany; 14 bit dynamic range), sensitive between 2 μm and 5.7 μm (cooled InSb focal plane array), with a temperature calibration between 773 K and 1473 K (black body) using a neutral density filter within the camera. The integration time was set to 47 μs, and the camera was operated at a frequency of 800 Hz in subframe mode (320 × 156 pixels) with a pixel resolution of 240 μm. The build substrate was made of polished carbon steel. The main build parameters using AISI 2205 duplex stainless steel are summarized in Table 2. The MWIR camera and the VIS camera are shown in the photos in Fig. 9 left and Fig. 9 middle, respectively. In a second experimental run, a SWIR camera of type Allied Vision Goldeye CL-033 TEC1 (Allied Vision Technologies GmbH, Stadtroda, Germany; 14 bit dynamic range) was used, operated with a band pass filter with a central wavelength of 1550 nm and a width of 25 nm, as well as with neutral density filters (ND1.0 + ND1.5) and an additional long pass filter with a cut-on wavelength of 1175 nm to completely block the welding laser. The camera was operated at a frequency of 500 Hz in subframe mode (640 × 171 pixels) with a pixel resolution of 125 μm. The integration time was set to 2 ms. It was compared against the MWIR camera described above, which was operated at a frequency of 500 Hz in subframe mode (240 × 176 pixels) and at a frequency of 100 Hz in full-frame mode (640 × 512 pixels) with a pixel resolution of 260 μm.
Fig. 9 Photographs of the experimental setup. Left: MWIR setup (first experimental run), middle: VIS setup (first experimental run), right: SWIR setup together with MWIR setup (second experimental run) [35]. (with permission of Taylor & Francis)
As build plate, non-polished AISI 316 L steel was used. Further main build parameters for AISI 316 L stainless steel are summarized in Table 2 as well. The setup of both cameras is visualized in Fig. 9 right. Selected thermograms of both experimental runs are compared to each other in Fig. 10. Reflections of the current weld at the build plate are visible in the thermograms of the MWIR camera in both experimental runs, and in those of the SWIR camera in the second experimental run. Powder particles within the melt pool are only visible with the VIS camera, which also shows a well resolved temperature distribution within the melt pool. As expected, for the MWIR camera, the cooling down can be observed for much longer path lengths than for the SWIR and the VIS camera.

Table 2 Build parameters and camera set-ups used during the first and second experimental run

Build and measurement parameters | First experimental run | Second experimental run
Material | AISI 2205 duplex stainless steel | AISI 316 L stainless steel
Welding velocity in mm/s | 13.3 | 21.7
Laser power in W | 1700 | 1200
Laser beam diameter in mm | 3.0 | 2.4
Surface laser energy density in J/mm² | 43 | 23
Cameras | VIS, MWIR | SWIR, MWIR
Camera angle related to the surface normal | 60° | 40°
Base plate | Polished | Non-polished (as delivered)
Fig. 10 Thermograms with apparent temperature distribution (MWIR camera, top images (a) and (c)) and intensity distribution given in digital values (DV) (SWIR camera, bottom images (b) and (d)) recorded during the build up of the third layer of the first experimental run (left images (a) and (b)) and during the build up of the ninth layer of the second experimental run (right images (c) and (d)) [35]. (with permission of Taylor & Francis)
A temperature correction of the apparent temperatures of the MWIR camera and of the intensity values of the SWIR and VIS cameras was performed by using the gray body approximation and by adjusting the emissivity in such a way that the temperature of the solidification plateau equals the known mean value of liquidus and solidus temperature of the used materials [35]. This yielded an effective emissivity of 0.29 for AISI 2205 and of 0.58 for AISI 316 L, where both values relate to the solid material after solidification, measured within the spectral range of the MWIR camera. This correction resulted in the following temperature measurement ranges of the used camera configurations: VIS camera: 1300–2000 K; SWIR camera: 900–2200 K; MWIR camera: 600–2300 K. Figure 11a shows a comparison of the corrected temperatures as a function of time of the MWIR and the SWIR camera during the passing of the laser at a fixed position through the build of the first and of the ninth layer of the second experimental run. Both cameras indicate similar results for the solidified material. During melting, the SWIR camera shows increasing temperature values, while the MWIR camera records decreasing values, which increase again during solidification. This can be explained by the lower emissivity of the melt, which is even lower in the MWIR range. As for the MWIR camera the temperature seems to be higher after solidification than before melting, it might be that oxidation induced an emissivity increase. A comparison of the temperatures of the first and the ninth layer shows that during the cooling down process, the solidification plateau remains for a much longer time for the ninth layer. Below the solidification temperature, the cooling of the ninth layer is much slower than that of the first layer. The explanation is that for the ninth layer, the heat conduction into the substrate is much less than for the first layer. In addition, the solidification plateau of the ninth layer appears at slightly higher temperatures than for the first layer, which can be explained either by more oxidation at the surface of the ninth layer, as it stays hot for longer, by a process dependency of the solidification temperature itself, or both [35].
Fig. 11 Corrected temperature versus time curves [35]. (a) Comparison of the curves of the first and ninth layer during the second experimental run with the SWIR and MWIR cameras. (b) Comparison of curves of different materials of the first layer during the first and second experimental run of all three cameras. (with permission of Taylor & Francis)
In Fig. 11b, temperature versus time curves of the first layer of the two different experimental runs, and thus of the two different materials, are compared. For the second experimental run (AISI 316 L), the whole heating and cooling process takes less time, as the welding velocity is higher, see Table 2. The cooling rates after solidification are slightly higher in the second experimental run as well (e.g., 3600 K/s in the first experimental run compared to 4700 K/s in the second experimental run when passing 1400 K corrected temperature for the MWIR camera), probably since, in total, less energy was introduced into the built part. The limited temperature measurement range of the VIS camera is obvious. From these experiments, the following conclusions can be drawn:
1. Although the VIS camera shows the highest spatial and temporal resolution, the dynamic temperature measurement range is too small if cooling rates and temperature gradients at lower temperatures need to be analyzed.
2. The SWIR as well as the MWIR camera are more suitable to monitor the temporal temperature changes during the DED-LB/M process than the VIS camera. The dynamics of the measurable temperature ranges of the SWIR and MWIR cameras might even be enhanced through optimal selection of filters and integration time.
3. As the SWIR camera is much smaller (and cheaper) and therefore can more easily be integrated into the build space than the MWIR camera, we currently suggest the application of a SWIR camera.
4. The surface emissivity depends at least on the wavelength, on the aggregate condition, on surface oxidation, and on the material. In addition, it must be considered that the emissivity depends on temperature and observation angle as well. The solidification temperature is either influenced by the heating and cooling process, and/or its measurement is influenced by the oxidation-dependent emissivity. Therefore, as mentioned above, a temperature correction as performed in the study herein, using the solidification temperature of the materials, is only a very rough estimation. In the case of the MWIR camera, the temperature values in the molten state are obviously far off, as they appear lower than in the solid state. Thus, such a correction is not sufficient for using the temperature values in numerical simulations and/or in predictions about material parameters.
5. It is obvious that cooling rates depend on the part geometry and on the amount of introduced energy.
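The gray-body correction discussed above can be sketched in a few lines: at one effective wavelength, the emissivity is fitted so that the corrected plateau temperature matches the known solidification temperature. The effective wavelength, the apparent plateau value, and the plateau temperature below are illustrative assumptions, not the full spectral calibration actually used in [35]:

```python
import math

C2 = 1.4388e-2  # second radiation constant in m*K

def true_temp(t_app_K, emissivity, wavelength_m):
    """Gray-body correction at one effective wavelength (narrow-band
    approximation of Planck's law): the camera reports t_app assuming a
    blackbody; the surface actually emits with the given emissivity."""
    x = math.expm1(C2 / (wavelength_m * t_app_K))   # exp(c2/(lam*T)) - 1
    return C2 / (wavelength_m * math.log1p(emissivity * x))

def fit_emissivity(t_app_plateau_K, t_plateau_K, wavelength_m):
    """Bisection for the emissivity that maps the apparent plateau
    temperature onto the known solidification plateau temperature."""
    lo, hi = 0.01, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if true_temp(t_app_plateau_K, mid, wavelength_m) > t_plateau_K:
            lo = mid   # corrected value too high -> emissivity too low
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative numbers (assumptions): apparent plateau at 1350 K seen at an
# effective wavelength of 4 um, true plateau at 1723 K (AISI 316 L).
eps = fit_emissivity(1350.0, 1723.0, 4e-6)
print(f"fitted effective emissivity: {eps:.2f}")     # -> about 0.53
```

The real correction in [35] additionally accounts for the wavelength-dependent response of each camera, which is why the fitted values quoted above (0.29 and 0.58) differ from this single-wavelength toy example.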
Example 2: Determination of Melt Pool Properties with the Camera Fixed to the Manipulator
In this second example, the SWIR camera, with a similar bandpass filter as described above in example 1 but fewer neutral density filters, was mounted directly on the manipulator, so that the camera position is fixed relative to the position of the melt pool (and not to the position of the built part, as in example 1), see Fig. 12 and Ref. [67]. Thus, the melt pool is always within the optimum focus of the camera, and it can be investigated with a higher spatial resolution. Again, the SWIR camera was operated in subframe mode, but different measurement parameters were used (sub-window of 640 × 210 pixels, frame rate of 300 Hz, integration time of 200 μs).
Fig. 12 Manipulator with the powder nozzle and mounted SWIR camera (foreground, right) and acceleration sensor (background, left) [67]
For synchronizing the position of the recorded thermograms with that of the nozzle and the laser focus, an acceleration sensor was mounted on the manipulator as well. Spatial resolution and position were reconstructed using a checkerboard pattern [67], resulting in a spatial resolution of 30 μm in the direction of the x-axis of the thermogram (parallel to the welding direction) and of 37 μm in the direction of the y-axis (perpendicular to the welding direction). A temperature correction was performed by using the known temperature of the solidification plateau, as described above, and by considering Planck's law within the spectral window of the bandpass filter. A wall consisting of 10 layers of AISI 316 L was built up using a weld velocity of 13.3 mm/s and a surface laser energy density of 37.5 J/mm². The scan direction of all layers was the same. Figure 13a, c each show a thermogram with corrected temperature values recorded with the SWIR camera during the build of the tenth layer. In Fig. 13a, all temperature values above 1550 K are correlated to the melt pool and the mushy zone. Therefore, at temperatures >1550 K, the temperature color scale was replaced by a gray scale. Now, the shape of the melt pool and mushy zone becomes obvious and can be calculated. The calculated melt pool areas from this perspective of the camera, as a function of position along the track and for each layer from 1 to 10, are depicted in Fig. 13b. This figure clearly shows that, with increasing layer number up to the eighth layer, the size of the area of the melt pool and mushy zone increases.
Fig. 13 (a) Corrected temperature distribution along the weld line of the tenth layer. Above 1550 K, it is assumed that the material is starting to melt and thus belongs to the melt pool and the mushy zone. (b) Area of melt pool and mushy zone at the surface as a function of position along the weld and as a function of layer number (wall height) [67]. (c) Same weld line as in (a). Above 1660 K, it is assumed that the material belongs to the melt pool. (d) Area of melt pool at the surface as a function of position along the weld and as a function of layer number (wall height)
For larger layer numbers, the size saturates. Additionally, it can be observed that at the beginning as well as at the end of each layer, the melt pool size is smaller than in between these points. Furthermore, at the beginning of each track from layer 4 to layer 10, at a position of about 10 mm, a local increase of the melt pool area is observed. In Fig. 13c, all temperature values above 1660 K are correlated to the melt pool only.
Therefore, at temperatures >1660 K, the color scale of the temperature scale was replaced by a gray scale. The melt pool itself is much smaller than the melt pool together with the mushy zone. The development of the melt pool area alone along the position and layer number is displayed in Fig. 13d. The melt pool size increases from layer 1 to layer 4. For larger layer numbers, its mean size seems to be stable, while several fluctuations are observed within one layer. Besides the melt pool size, further information can be obtained from the thermograms. Information about the heating and cooling rate within a distinct temperature interval is gained from the time during which the welded area is above a certain threshold (TOT, as described above). In Fig. 14a, this time span is marked by red arrows and lines for a threshold value of, e.g., 1640 K. Within this time span, it is observed that the temperature first increases, then decreases below the threshold, and increases again. This effect is related to a decrease of the emissivity of the liquid material, and it is assumed that the real temperature stays above the threshold. Therefore, the whole time interval marked by the red arrows belongs to the time over threshold. Figure 14b shows the calculated time over threshold for all ten layers as a function of the track position. Here, it is observed that with increasing layer number, and thus build height, the TOT increases as well. Thus, with increasing wall height, the material stays warm for a longer time. This means that either the material gets warmer and stays warmer for a longer time, or it only stays warm for a longer time. This correlates very well with example 1: there, the cooling rate at the higher layer number was lower than at the lower layer number. Figure 14c shows a magnified section from the beginning of the tenth layer. Here, an increase of the layer thickness just at the beginning of the track is observed, which is correlated with an enhanced TOT. In Fig. 14b, it can be noticed that this enhancement of the layer thickness at the beginning of the track increases from layer to layer. Currently, it is not possible to conclude whether this layer thickness enhancement is due to the enhanced TOT or vice versa.
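The melt pool and mushy-zone areas shown in Fig. 13b, d reduce, per frame, to counting the pixels above a threshold and multiplying by the pixel footprint; a minimal numpy sketch (thresholds and pixel pitch from this example, the random frame is only a placeholder for a corrected thermogram):

```python
import numpy as np

def area_above(thermogram_K, threshold_K, px_x_mm=0.030, px_y_mm=0.037):
    """Area (in mm^2) of all pixels above a temperature threshold.

    With the thresholds used above, 1550 K selects melt pool plus mushy
    zone and 1660 K the melt pool alone; the pixel pitch of 30 um x 37 um
    corresponds to the manipulator-mounted SWIR camera of this example.
    """
    n_hot = int(np.count_nonzero(thermogram_K > threshold_K))
    return n_hot * px_x_mm * px_y_mm

# Placeholder thermogram: replace with a corrected SWIR frame (640 x 210).
frame = np.random.uniform(900.0, 1800.0, size=(210, 640))
print(f"melt pool + mushy zone: {area_above(frame, 1550.0):.2f} mm^2")
print(f"melt pool only:         {area_above(frame, 1660.0):.2f} mm^2")
```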
Comparison of In Situ Monitoring Against Post NDE: Advantages, Challenges, and Outlook

In principle, all NDE methods suited for the detection of volume defects, e.g., CT and ultrasonics, or for the detection of defects below, close to, and at the surface, like active thermography, eddy current testing (ET), magnetic particle testing (MT), and dye penetrant testing (PT), or optical methods used for the recording of the part geometry, can be applied for post-testing of additive manufactured parts [8, 68]. In comparison to in situ monitoring NDE techniques, they still have the advantage of a higher spatial resolution (e.g., for radiography, especially for μCT, and for ultrasonic testing in the case of low-damping material). With post NDE methods, only the defects in the final part are detected, as some defects might disappear during the manufacturing process due to remelting or reheating, after removing the part from the build platform, or during subsequent hot isostatic pressing (HIP). In addition, pores, cavities, and cracks close to and/or open to the surface can be detected at all surfaces.
Fig. 14 (a) Temperature-time curve for the calculation of the time over threshold. Here, a threshold of 1640 K has been selected as visualized by the red arrows and the red dashed lines. (b) Display of the time over a threshold of 1640 K for all layers from 1 (bottom) to 10 (top). (c) Magnified section of first few mm of the tenth layer (see dashed rectangle in (b))
However, post NDE methods suffer from the following disadvantages or challenges [9, 69, 70]:
• Defects can only be detected when the component is finished.
• Topology-optimized components have complex geometries with no parallel and no flat surfaces.
• The surface is often very rough and inhomogeneous. The surface roughness depends on the surface orientation during construction. Undercuts often have the highest surface roughness.
• A registration of the recorded data to the respective layer and manufacturing parameters is not always possible.
• If a high resolution is required, especially for μCT, only small volumes (small components or small parts of components) can be investigated.
• Ultrasonic waves are highly damped and absorbed at the grain boundaries and scattered at the pores.
• There is a lack of:
– Classification of types and sizes of defects and inhomogeneities which are critical for defined built parts
– Physical reference standards or test specimens
– Reliability studies with data recordings concerning probability of detection and relative operating characteristics
– Standards, validation reports, and inspection procedures
The advantages of in situ monitoring and NDE against post NDE are:
• Safety-relevant defects can be detected immediately, and the build process can be stopped or regulated, if required.
• The position of detected defects can be directly related to the layer and the position inside the layer, and thus to the manufacturing parameters and the geometry of the part.
Further advantages are listed in Table 3. Here, typical defect classes occurring in additive manufacturing are summarized, based on current collections and reviews for powder bed and directed energy deposition based additive manufacturing processes [8, 12, 71, 72]. For each of these defect classes, the suited in situ monitoring and NDE methods and the appropriate post NDE methods are listed.
Challenges of in situ monitoring and NDE are:
• Up to now, for most of the in situ methods, the geometric resolution is partially high enough to detect structures smaller than 100 μm, but not high enough for their quantitative characterization.
• In some cases, defects and inhomogeneities detected within the top layer might disappear due to annealing during the growth of further layers or during heat treatments. Thus, they are not relevant for the usage of the built part.
Table 3 Description of typical defects and inhomogeneities in additive manufactured parts and suited in situ monitoring and NDE methods, suited post NDE methods, and the advantages and disadvantages of in situ against post NDE. Here, all methods which are in principle suited are listed. The actual applicability of these methods has only been confirmed for a subset of them

Defect/inhomogeneity | Description | Size | Suited in situ monitoring and NDE methods | Suited post NDE methods | Advantages (+) or disadvantages (−) of in situ against post NDE
High surface roughness [8, 13, 71] | The surface roughness of the final part is substantially higher than specified. | Ra 8–100 μm | Optical methods, OCT | Visual testing, interferometry, 3D-scan, CT; all surfaces can be tested. | + Development of surface roughness during the part growth can be monitored, but only on certain surfaces.
Deviations of part geometry [8, 13, 72] | Deviations of the part geometry from the specifications, unwanted curvature of the surfaces. | Depending on part size, up to several mm | Optical methods, OCT, OT | Visual testing, interferometry, 3D-scan, CT; all surfaces can be tested. | + Development of geometric deviations during the part growth can be monitored.
Undercuts and trenches [12] | Undercuts and trenches between adjacent beads (hatches) | Width and depth up to 50 μm | Optical methods, OCT | Radiography, laminography, CT | + Undercuts and trenches can be related to single layers.
Porosity [8, 13, 71, 72] | Different characteristics, see below | True porosity 0.1–22% | See below | See below | See below
Spherical gas pores | Round cavities within the material (porosity) | 5–20 μm | Thermography (indirect), active laser thermography, eddy current testing, OT, OCT | Radiography, laminography, CT, ultrasonics; at the surfaces: ET, MT, PT | − Especially for the detection of small gas pores, the spatial resolution of in situ methods is too low.
Nonspherical pores | Irregularly shaped small cavities within the material | 5–200 μm | Thermography (indirect), active laser thermography, eddy current testing | Radiography, laminography, CT; at the surfaces: ET, MT, PT | + The thermal signature of these defects is mostly larger than their real size; thus, they can be detected better in situ than gas pores.
Table 3 (continued)

Defect/inhomogeneity | Description | Size | Suited in situ monitoring and NDE methods | Suited post NDE methods | Advantages (+) or disadvantages (−) of in situ against post NDE
Local separation (lack of fusion pores) | Between individual layers | | | |
Surface cracks [12, 71] | Cracks can emerge from pores or cavities to the surface already during production; cracks parallel to the scan direction form during cooling down. | | Thermography (indirect), active laser thermography, SRAS, AE | |
Volumetric cracks [71] | Cracks within the part, or detachment of parts of the part or of support structures from the base plate | | | |
Variations in chemical composition [8, 71] | During melting at very high temperatures, some elements have a higher vapor pressure than others. During cooling down, material separations such as carbides are formed. | Carbon content | OES, LIBS, particle emission | |
Contaminations in the material [71] | Foreign metals or oxides within the material volume | Foreign metal concentration, oxide concentration | OES, LIBS | |