Infrared Thermographic NDT-based Damage Detection and Analysis Method for Spacecraft
ISBN 9819982154, 9789819982158

The book focuses on infrared thermographic NDT systems and approaches. Both principles and engineering practice are covered.


English Pages 283 [280] Year 2024


Table of contents:
Preface
Overview of the Book
Contents
Acronyms
1 Background and Requirements
1.1 Space Environment Effects on Spacecraft
1.1.1 Vacuum Environment
1.1.2 Space Debris Environment
1.1.3 Solar Irradiation Environment
1.1.4 Atomic Oxygen Environment
1.2 Spacecraft Materials and Damage
1.2.1 Introduction to Spacecraft Materials
1.2.2 Typical Spacecraft Material Damage
1.3 Overview of Infrared Thermographic NDT Technology
1.3.1 Basic Concepts of Infrared Thermographic NDT Technology
1.3.2 Basic Principles of Infrared Thermographic NDT Technology
1.3.3 Infrared Thermographic NDT System Composition
References
2 Infrared Feature Extraction and Damage Reconstruction
2.1 Introduction
2.2 Variable-Step-Based Pre-processing of Infrared Thermographic Image Sequence
2.2.1 Data Block Division Based on Thermal Extremes
2.2.2 Various Regional Step Sizes Establishment
2.2.3 Variable Step Redundant Information Removal
2.3 Transient Thermal Responses Separation Based on Clustering
2.3.1 GMM-Based Clustering of Transient Thermal Responses
2.3.2 DBSCAN-Based Clustering of Transient Thermal Responses
2.4 Representative Transient Thermal Response Extraction and Image Reconstruction
2.4.1 Representative TTR Extraction Based on Local Average Performance
2.4.2 Representative TTR Extraction Based on Distances
2.4.3 Representative TTR Extraction Based on Distance Weighting
2.5 Experimental Results and Analysis
2.6 Summary
References
3 Reconstructed Thermal Image Fusion Based on Multi-objective Guided Filtering
3.1 Introduction
3.2 Complex Damage Fusion Requirement
3.3 Multiple Fusion Objectives Jointly Moulding
3.3.1 Thermal Radiation Variance-Aware Objective Function
3.3.2 Multi-window Edge-Aware Objective Function
3.3.3 Local Detail Extraction Objective Function
3.4 Multi-objective Guided Filtering Based Weight Acquisition Layer
3.4.1 Two-Layer Multi-objective Fusion Framework
3.4.2 Multi-objective Decomposition Based on Penalty Term
3.4.3 Implementation of Multi-objective Guided Filtering Based Weight Acquisition Layer
3.5 Multi-scale Fusion of Full Pixel Layers Based on Optimal Weight
3.5.1 Dual Scale Decomposition of Reconstructed Thermal Images on Full Pixel Layers
3.5.2 Multi-guided Filtering Based Weight Map Acquisition
3.5.3 All-Pixel Image Fusion Implementation with Multi-objective Guided Filtering
3.6 Experimental Results and Analysis
3.6.1 Specimen #1
3.6.2 Specimen #2
3.7 Summary
References
4 Stitching Technique for Reconstructed Thermal Images
4.1 Introduction
4.2 Feature Extraction Techniques for Reconstructed Thermal Images
4.2.1 Feature Points of Reconstructed Thermal Images
4.2.2 FAST Feature Extraction of Reconstructed Thermal Images
4.2.3 Fine Feature Extraction of Reconstructed Thermal Images
4.3 Alignment Techniques for Reconstructed Thermal Image's Feature Points
4.3.1 Alignment of Feature Points of Reconstructed Thermal Images
4.3.2 Analysis of Reconstructed Thermal Image's Feature Point Alignment Techniques
4.4 Stitching Quality Improvement Method of Reconstructed Thermal Images
4.4.1 Seamless Stitching Fusion of Stitched Reconstructed Thermal Images
4.4.2 Natural Stitching Method with Large Parallax for Reconstructed Thermal Images
4.5 Experiment and Analysis
4.5.1 Fast Feature Extraction Stitching Experiment for Reconstructed Thermal Images
4.5.2 Fine Feature Extraction Stitching Experiments of Reconstructed Thermal Images
4.6 Summary
References
5 Weight Vector Adjustment-Based Multi-objective Segmentation of Reconstructed Thermal Images
5.1 Introduction
5.2 The Challenge of Complex Damage Segmentation
5.3 Complex Object-Oriented Infrared Image Segmentation Objectives
5.3.1 Noise-Cancellation Oriented Segmentation Objective for Complex Damage Reconstructed Thermal Images
5.3.2 Detail-Preserving Oriented Segmentation Objective for Complex Damage Reconstructed Thermal Images
5.3.3 Edge-Retention Oriented Segmentation Objective for Complex Damage Reconstructed Thermal Images
5.4 Multi-objective Model Construction and Irregular Pareto Front Analysis
5.4.1 Multi-objective Modelling and Complex Damage Segmentation Framework
5.4.2 Irregular Pareto Front and Necessity of Adjustment
5.5 Crowding Degree Adaptive and Chebyshev Decomposition Based Weight Vector Adjustment Method
5.5.1 Chebyshev Decomposition
5.5.2 Crowding Metric Based on Manhattan Distance
5.5.3 Weight Vector Adjustment
5.5.4 Implementation of the Weight Vector Adaptive Adjustment Method
5.6 Effective Area Incremental Learning and PDM Based Weight Vector Adjustment Method
5.6.1 Effective Area and Active Vectors
5.6.2 PDM and Population Evolution
5.6.3 Cascade Clustering Based Dominant Solution Selection
5.6.4 Incremental SVM-Based Weight Vector Learning and Tuning
5.7 Experimental Results and Analysis
5.7.1 Crowding Degree Based Adaptive Weight Vector Adjustment
5.7.2 Effective Area and PDM Based Adaptive Weight Vector Adjustment
5.8 Summary
References
6 Defects Positioning Method for Large Size Specimen
6.1 Introduction
6.2 Defect Positioning Based on Whole and Local View Conversion for Reconstructed Thermal Images
6.2.1 Global Defect Location Labeling for Stitched Reconstructed Thermal Images
6.2.2 Precise Positioning of Defective Regions
6.2.3 Re-inspection of Defective Regions After Precise Positioning
6.3 Defect Positioning Based on Inverse Heterogeneous Source for Reconstructed Thermal Images
6.3.1 Pixel Conversion of Stitched Reconstructed Thermal Images
6.3.2 Determining Method of Image Overlap Area
6.3.3 Defect Contour Positioning for Regional Determination Results
6.4 Experiment and Analysis
6.4.1 Experiment and Analysis of Defect Positioning Based on Whole-Local Perspective
6.4.2 Experiment and Analysis of Defect Positioning Based on Inverse Heterogeneous Sources
6.5 Summary
References
7 Defect Edge Detection and Quantitative Calculation of Reconstructed Thermal Images
7.1 Introduction
7.2 Pixel-Level Edge Detection of Defective Regions in Reconstructed Thermal Images
7.2.1 Pixel-Level Edge Detection Based on Differential Operators
7.2.2 Pixel-Level Edge Detection Based on Canny Composite Operator
7.3 Sub-Pixel Level Edge Detection of Defective Regions in Reconstructed Thermal Images
7.3.1 Sub-pixel Fitting Method Based on Edge Pixel Position
7.3.2 Sub-pixel Detection Method Based on Image Zernike Moments
7.4 Quantitative Calculation of Defective Regions in Reconstructed Thermal Images
7.4.1 Calculation of Geometric Feature Parameters
7.4.2 Calculation of Morphological Distribution Parameters
7.5 Experiment and Analysis
7.5.1 Edge Detection Experiments for Specimen Hyper-1
7.5.2 Sub-pixel Edge Detection Experiment for Specimen Hyper-1
7.5.3 Quantitative Calculation of Defective Regions for Specimen #1
7.5.4 Quantitative Calculation of Defective Regions of Specimen #2
7.5.5 Defective Regions Positioning Experiments of Specimen Hyper-1
7.6 Summary
References

Chun Yin · Xuegang Huang · Xutong Tan · Junyang Liu

Infrared Thermographic NDT-based Damage Detection and Analysis Method for Spacecraft

Chun Yin, University of Electronic Science and Technology of China, Chengdu, Sichuan, China

Xuegang Huang, China Aerodynamics Research and Development Center, Mianyang, Sichuan, China

Xutong Tan, University of Electronic Science and Technology of China, Chengdu, Sichuan, China

Junyang Liu, University of Electronic Science and Technology of China, Chengdu, Sichuan, China

ISBN 978-981-99-8215-8
ISBN 978-981-99-8216-5 (eBook)
https://doi.org/10.1007/978-981-99-8216-5

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Paper in this product is recyclable.

Preface

The space environment has a significant impact on the operation of spacecraft and the functioning of their various systems. It includes vacuum, micrometeoroids, space debris, solar radiation, atomic oxygen, temperature variations, magnetic fields, gravity fields, and more. As human exploration and utilization of space continue to deepen, the influence of this complex environment on spacecraft is receiving increasing attention. With the rapid development of large-scale, long-life spacecraft, new composite materials and precision components are being used ever more widely, which increases the possibility of space environment-induced failures. The space environment strongly affects the performance and operational lifespan of spacecraft, thereby limiting the advancement of spacecraft system technology.

Aerospace engineers have recognized the potential hazards posed by the space environment and have conducted numerous spacecraft environmental engineering experiments to understand the specific damage it causes. Meanwhile, to prevent serious accidents resulting from damage and defects arising during a spacecraft's service in the space environment, it is crucial to carry out damage detection and quality assessment of spacecraft materials and structures. Qualitative and quantitative analysis of spacecraft damage and defects, which determines the types and severity of the effects imposed by the space environment, provides valuable insights to spacecraft designers and managers. This comprehensive understanding of the space environment's effects on spacecraft can in turn lead to appropriate countermeasures, ensuring better operational performance of spacecraft in orbit.

Nondestructive testing (NDT) is an important technical means of controlling product quality and ensuring the safe operation of in-service equipment, and it has long been used for defect detection and structural integrity assessment of aerospace equipment.

However, the types of damage and defects caused by the space environment on spacecraft are complex, and effectively detecting and identifying these damages is an urgent problem. The rapid development of infrared thermography as an NDT technique provides an effective approach to solving it. Based on the characteristics of infrared radiation, infrared thermography utilizes the different thermal radiation properties of different structures or materials to detect surface and internal non-uniformities or anomalies. It offers advantages such as speed, non-contact operation, freedom from pollution, large single-area coverage, visualized results, and applicability to a wide range of materials, making it highly suitable for non-contact in situ testing of spacecraft built from various materials with complex geometries.

The thermal image sequences obtained from the infrared thermal imager fully record the temperature and energy response characteristics of defects in the spatial and temporal dimensions, so the defect detection data derived from these sequences contain abundant information. Through front-end data acquisition and post-processing feature extraction algorithms, it is therefore possible to achieve visual detection, classification, and quantification of complex damage and defects. This approach avoids interference from changes in the external environmental temperature and the non-uniform emissivity of test surfaces, and it has enormous potential for application in the in situ visual nondestructive testing of spacecraft in outdoor environments.

This book not only introduces the conventional processing methods for infrared thermography testing data but also proposes some new methods for analyzing and evaluating detection data according to the requirements of practical spacecraft detection applications. At the end of each chapter, the book also presents the operational steps and example results of the different methods on practical engineering problems, such as the fusion of reconstructed images of complex multi-type damage, and the stitching, segmentation, positioning, and quantification of multiple detection images from large-size spacecraft.

This book will be beneficial to students and professionals in all areas of engineering, especially those engaged in the further analytical evaluation of infrared detection images. It contains many graphics (block diagrams, simulations, and experimental results) that will help readers understand the material. Readers (researchers, Ph.D.- or M.S.-level graduate students, and R&D engineers) will become familiar with step-by-step algorithms that can be easily applied to a variety of visual detection data analysis and evaluation problems in engineering practice. The reader will also acquire a deep understanding of the basic principles of the infrared thermography NDT technique and its data analysis and processing. The application examples included in the book will help the reader understand the concepts and how they can be applied.


Overview of the Book

The application of the infrared thermography NDT technique to the detection and evaluation of complex spacecraft damage still involves many unsolved key technical problems. The damage inflicted on spacecraft by the space environment is complex and diverse, and the detection data obtained by the infrared thermography NDT technique are disturbed by many environmental factors; under these conditions, it is difficult for engineers to interpret and judge the damage accurately from personal experience alone. Accurate extraction and recognition of damage-related information is therefore the most important technical requirement in spacecraft detection and evaluation.

This book focuses on the infrared thermography detection and assessment of spacecraft damage caused by the space environment. The authors systematically review and summarize recent research results on the processing and analysis of complex, multi-type damage data. Building upon an introduction to the characteristics of spacecraft damage in the space environment and the basic principles of infrared thermography detection technology, the book places special emphasis on the authors' recent research work. It provides detailed explanations of methods such as feature extraction and reconstruction of infrared detection images, and the fusion, stitching, segmentation, localization, and quantitative analysis of infrared reconstructed images. The content covers essential aspects of infrared detection, data analysis, and quantitative assessment of typical impact damage on spacecraft, striking a balance between theory and practice, and exhibiting strong relevance and comprehensiveness.

The book consists of seven chapters. Chun Yin (Professor, University of Electronic Science and Technology of China) and Xuegang Huang (Associate Researcher, China Aerodynamics Research and Development Center) collaborated on Chap. 1; Chun Yin and Xutong Tan (University of Electronic Science and Technology of China) collaborated on Chaps. 2, 3, and 5; and Xuegang Huang and Junyang Liu (University of Electronic Science and Technology of China) collaborated on Chaps. 4, 6, and 7. Chapter 1 provides an overview of the effects of the space environment on spacecraft and the basic principles of nondestructive infrared thermography detection. Chapter 2 introduces the feature extraction and damage feature reconstruction of infrared thermal image sequences. Chapter 3 presents a method of infrared reconstructed image fusion based on multi-objective guided filtering. Chapter 4 describes a method of infrared reconstructed image stitching for large-sized test specimens with multiple regions of interest. Chapter 5 explains a method of infrared reconstructed image segmentation for complex damage of multiple types. Chapter 6 introduces a method of damage defect positioning in infrared reconstructed images. Chapter 7 discusses edge detection and quantitative analysis methods for damage defect areas in infrared reconstructed images.


This book aims to combine theory and practice based on the needs of engineering applications. However, many technical problems remain to be solved in the practical application of the spacecraft infrared thermography NDT technique, and considering the limited expertise of the authors, the book may have certain shortcomings. The authors welcome and appreciate any criticism and corrections from readers.

Chengdu, China    Chun Yin
Mianyang, China    Xuegang Huang
Chengdu, China    Xutong Tan
Chengdu, China    Junyang Liu

Acknowledgements

Most of the content in this book is a summary of the authors' and their team's research work in recent years. The related research has been supported by funding from the National Natural Science Foundation of China and the Science and Technology Plan of Sichuan Province. The experimental and testing work has received strong support from the Hypervelocity Aerodynamics Institute at the China Aerodynamics Research and Development Center and the Institute of Testing Technology and Instruments at the School of Automation Engineering, University of Electronic Science and Technology of China. The authors would like to express sincere gratitude for their support.


Acronyms

BCTWI  Barker Coded Thermal Wave Imaging
BRIEF  Binary Robust Independent Elementary Features
CARDC  China Aerodynamics Research and Development Center
CC  Cascade Clustering
CIE  Commission Internationale de L'Eclairage
CV  Coefficient of Variation
DBSCAN  Density-Based Spatial Clustering of Applications with Noise
DOG  Difference of Gaussian
DTW  Dynamic Time Warping
EM  Expectation Maximization
EP  External Population
FAST  Features from Accelerated Segment Test
FCM  Fuzzy C-Means
FCM-S1  Fuzzy C-Means Algorithms with Spatial constraint 1
FCM-S2  Fuzzy C-Means Algorithms with Spatial constraint 2
FGFCM  Fast Generalized Fuzzy C-Means
FMTWI  Frequency Modulated Thermal Wave Imaging
FPA  Focal Plane Array
GFF  Guided Filter Fusion
GMM  Gaussian Mixture Model
HVI  Hypervelocity Impact
InGaAs  Indium Gallium Arsenide
InSb  Indium Antimonide
IR  Infrared Radiation
KG-EM  K-means Gaussian Mixture Model-Expectation Maximization
Landsat-MSS  Landsat MultiSpectral Scanner
LBP  Local Binary Pattern
LDEF  Long Duration Exposure Facility
LEO  Low-Earth Orbit
L-Info  Local Information
LoG  Laplacian of Gaussian
LT  Lock-in Thermography
M&D SIG  Meteoroid and Debris Special Interest Group
MCT  Mercury Cadmium Telluride
MMOD  Micro-meteoroids and Orbital Debris
MOEA/D  Multi-Objective Evolutionary Algorithm based on Decomposition
MOGF  Multi-Objective Guided Filtering
MSAC  M-Estimate Sample Consensus
NASA  National Aeronautics and Space Administration
NDSort  Non-Dominated Sort
NDT  Nondestructive Testing
NL-Info  Non-Local Information
NSGA-III  Non-dominated Sorting Genetic Algorithm-III
ORB  Oriented Fast and Rotated Brief
PBI  Penalty-based Boundary Intersection
PCC  Pearson Correlation Coefficient
PDM  Proximity and Diversity Metrics
PDVI  Panoramic Defect Visualization Image
PF  Pareto Front
PPT  Pulsed Phase Thermography
PT  Pulsed Thermography
PtSi  Platinum Silicide
QWIP  Quantum Well Infrared Photodetectors
RANSAC  Random Sample Consensus
SIFT  Scale-Invariant Feature Transform
ST  Step Heating Thermography
SURF  Speeded Up Robust Features
SVM  Support Vector Machine
TTR  Transient Thermal Response
UV  Ultraviolet

Chapter 1

Background and Requirements

On October 4, 1957, the first artificial Earth satellite, Sputnik 1, was successfully launched by the Soviet Union. In the more than sixty years since, the pace of human exploration of outer space has accelerated, and the number of spacecraft launches has increased. Throughout their operational lifetimes, spacecraft are subjected to various environmental factors in space, such as vacuum, radiation, high and low temperatures, micrometeoroids and space debris, atomic oxygen, and others. These factors can cause unforeseen damage to spacecraft materials and structures, so spacecraft operating in Earth orbit must undergo detection and evaluation of the damage risks arising from complex space environmental effects. It is therefore necessary to develop advanced non-contact damage detection and evaluation techniques that can accurately identify and analyze spacecraft damage, ensuring safety and the successful completion of the intended missions. This chapter primarily introduces the space environmental effects and fundamental knowledge related to spacecraft damage. It also provides an overview of the basic principles of infrared thermography as an NDT technique and its application in spacecraft damage detection.

1.1 Space Environment Effects on Spacecraft

Spacecraft environmental engineering encompasses the environments encountered throughout the entire lifecycle of a spacecraft, including the ground environment, launch environment, space environment, and re-entry environment [1]. The ground environment refers to the air, temperature, humidity, and experimental conditions during the manufacturing, storage, and transportation of spacecraft materials; for rockets, for example, the corrosive environment of liquid fuels and the salt spray and sea breeze in spacecraft assembly facilities are the main influencing factors. The launch environment mainly refers to the impact, noise, vibration, acceleration, and increasing vacuum experienced by spacecraft materials during launch, separation, and orbital maneuvering.

The space environment includes the artificial and natural environments experienced by spacecraft in orbit, such as micrometeoroids and space debris, electromagnetic radiation, neutral atmosphere, charged particle radiation, vacuum, and the cold, dark environment [1, 2]. It also includes the temperature, humidity, and artificial radiation environments inside crewed spacecraft. For deep space exploration, spacecraft must also consider the effects of Mars dust, lunar regolith, and the acidic gases of Venus. The re-entry environment primarily covers the mechanical and atmospheric changes occurring during the spacecraft's return to Earth due to braking, attitude adjustment, and landing, including intense impact and high-temperature shocks.

The space environment represents a unique environmental regime for spacecraft, distinct from land, sea, and atmosphere, and is the fourth environment where human beings can exist. In this context, "space" typically refers to the vast cosmic region beginning several tens of kilometers above the Earth's surface. It is filled with various forms of matter, particles, and fields, which can exist naturally or be artificially created [3]. Particles include neutral gases, ionized gases, plasmas, charged particles of various energies, and meteoroids and space debris of various scales. Fields include gravitational fields, electric fields, magnetic fields, and electromagnetic radiation of various wavelengths. Deep space also contains large-scale "particles" such as asteroids, planets, and comets. These diverse forms of matter constitute the space environment, an understanding of which has deepened continuously through human space activities.

In general, the space environment for spacecraft primarily includes the vacuum environment, solar radiation environment, micrometeoroid and space debris environment, atomic oxygen environment, plasma environment, geomagnetic field environment, and gravity environment. Spacecraft materials and structures experience various effects from these environmental factors, including the combined effects of multiple factors. Examples include vacuum-induced outgassing of materials, cold welding of mechanisms, thermal-vibration fatigue caused by repeated transitions between Earth's shadow and sunlight, high-velocity impact effects resulting from micrometeoroid and space debris collisions, erosion and detachment of surface materials due to atomic oxygen, thermal ablation of spacecraft thermal protection materials and structures during Earth re-entry, and more [4, 5]. This section primarily introduces several space environmental effects that cause significant physical damage to spacecraft.

1.1.1 Vacuum Environment

Outer space vacuum is an ideal clean vacuum in which thermal conduction of gas molecules can be ignored and only radiative heat transfer exists. During orbital operations, spacecraft experience the alternating effects of low-vacuum, high-vacuum, and ultra-high-vacuum environments.

In a vacuum environment, the internal structures and materials of spacecraft undergo a series of concurrent effects, including outgassing, pressure differential effects, and cold welding and adhesion effects. Vacuum environment effects can lead to a decline in the mechanical, electrical, and optical properties of spacecraft materials and, in severe cases, to damage or destruction of spacecraft materials and structures. At the same time, the vacuum environment can also trigger other environmental effects that cause damage to spacecraft materials, including electromagnetic radiation effects, space particle radiation effects, space debris impact effects, and gravity effects.

1.1.2 Space Debris Environment

Increasing human space activity has raised the amount of space debris in Earth orbit year by year. Broadly speaking, space debris comprises two components: micrometeoroids, which exist in nature, and debris produced by human activities (together, micro-meteoroids and orbital debris, MMOD). Micrometeoroids are natural particles distributed in the space environment, almost all originating from asteroids and comets; the average relative velocity of spacecraft encounters with micrometeoroids is 19–20 km/s [6]. Because man-made space debris outnumbers micrometeoroids and poses a more significant threat to spacecraft of all kinds, space debris in the narrow sense is usually equated with man-made debris: man-made objects distributed in orbit that have lost their function, generally in the form of tiny fragments and particles, which is why it is also called orbital debris. Spacecraft encounter space debris at an average relative speed of 10 km/s. Because of its potential threat to spacecraft in orbit or about to be launched, it is also called "space junk". Currently there are hundreds of millions of pieces of space debris above the millimeter scale, with a total mass of several thousand tons, as shown in Fig. 1.1. Debris above the centimeter scale can destroy a spacecraft outright, while the cumulative effect of millimeter- or micron-scale impacts degrades performance or causes functional failure [7]. Collisions with orbital debris can pit or damage spacecraft in the best case and cause catastrophic failures in the worst. Given the speed and volume of debris in Earth orbit, it poses a safety risk to current and future space-based services, exploration, and operations, and to people and property in space and on Earth.

Fig. 1.1 Space debris in Earth's orbit

The Kessler Syndrome caused by space debris is even more worrisome. Debris accumulating in orbit increases the likelihood of collisions with other debris; unfortunately, each collision creates more debris, producing a runaway chain reaction of collisions, known as the Kessler Syndrome after Donald Kessler, who first described the issue. It is also known as collisional cascading [8]. This cascade first came to NASA's attention in the 1970s, when derelict Delta rocket stages left in orbit began to explode, creating shrapnel clouds. Kessler demonstrated that once the amount of debris in a particular orbit reaches critical mass, collisional cascading begins even if no more objects are launched into that orbit. Once collisional cascading begins, the risk to satellites and spacecraft increases until the orbit is no longer usable.

The damage caused by the space debris environment to spacecraft mainly includes altering the surface performance of the spacecraft, creating impact craters on the spacecraft surface, plasma cloud effects, momentum transfer, surface penetration, container explosion and rupture, structural fragmentation, and more. The destruction of a spacecraft by space debris is an extremely serious, high-risk event, and this threat grows increasingly severe as the frequency of space activities increases.

1.1.3 Solar Irradiation Environment

The sun is a massive source of radiation, constantly emitting a significant amount of energy into space. The visible, near-infrared, and far-infrared components of solar electromagnetic radiation constitute the heating source, also known as the external thermal flux, that spacecraft experience during space flight [9]. Ultraviolet (UV) radiation affects metal surfaces by generating free electrons through the photoelectric effect, causing spacecraft metal surfaces to become statically charged; increased surface static charge can affect the internal electronic systems and magnetic components of the spacecraft. UV radiation can also create color centers in optical materials such as crystals and glass, resulting in coloration and loss of transparency. Under UV radiation, the molecular weight of polymer materials decreases, leading to decomposition, cracking, discoloration, and reduced elasticity and tensile strength, deteriorating their mechanical properties and reducing the conversion efficiency of photovoltaic cells. UV irradiation can furthermore darken thermal control coatings, increasing their absorptivity and thus raising the surface temperature of the spacecraft.

1.1.4 Atomic Oxygen Environment

At orbital altitudes of several hundred kilometers, atomic oxygen is the most abundant component of the atmosphere, accounting for approximately 80% of it; in the 300–500 km altitude range in particular, atomic oxygen is absolutely dominant. Atomic oxygen is formed when the UV component of solar radiation dissociates oxygen molecules. In low Earth orbit, the density of atomic oxygen is approximately 10⁹ /cm³, and its temperature in orbit is generally between 1000 K and 1500 K. However, because its velocity relative to the spacecraft is about 8 km/s, atomic oxygen collides with the surface with an energy of roughly 5 eV, similar to the interaction of atomic oxygen at approximately 5 × 10⁴ K with the surface. Additionally, atomic oxygen possesses a strong oxidizing capability, surpassing molecular oxygen and second only to fluorine. The environmental effects of atomic oxygen primarily involve reactions with the surfaces of spacecraft materials, leading to oxidation and the formation of oxides [10]. This also accelerates gas release, increases the mass loss rate, reduces mechanical strength, and alters the optical and electrical properties of the materials. The space atomic oxygen environment erodes and ages satellite structural materials, and the eroded material becomes a source of contamination in the space environment. Thermal control coatings and optical coatings are particularly sensitive to atomic oxygen, which changes their optical properties, reduces their specular reflectance, and decreases the surface conductivity of some materials. Furthermore, the combination of atomic oxygen with ultraviolet irradiation accelerates the erosion of materials. The main effects of atomic oxygen on spacecraft include erosion and aging of structural materials, degradation of thermal control materials, degradation of solar cell interconnects, and contamination and erosion of remote sensing detectors and other optical materials.
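These figures are easy to check. The short calculation below is a minimal sketch using standard physical constants; the oxygen atom mass of 16 u and the E = (3/2)kT convention for the temperature equivalence are assumptions of this example, not formulas taken from the book:

```python
# Rough check of the atomic oxygen impact figures quoted above.
# Assumptions: oxygen atom mass 16 u, relative velocity 8 km/s,
# and E = (3/2) k T for the temperature equivalence
# (order-of-magnitude only).

U = 1.660539e-27     # atomic mass unit, kg
K_B = 1.380649e-23   # Boltzmann constant, J/K
EV = 1.602177e-19    # one electron volt, J

m_o = 16 * U         # mass of one oxygen atom, kg
v_rel = 8.0e3        # spacecraft-relative velocity, m/s

e_j = 0.5 * m_o * v_rel**2                           # kinetic energy per atom, J
print(f"impact energy ~ {e_j / EV:.1f} eV")          # ~5.3 eV
print(f"equivalent T  ~ {e_j / (1.5 * K_B):.1e} K")  # ~4e4 K
```

At 8 km/s the kinetic energy per oxygen atom comes out at about 5.3 eV, the same order as the 5 × 10⁴ K equivalent-temperature figure quoted above.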

1.2 Spacecraft Materials and Damage

Spacecraft materials technology is the fundamental, pioneering, and critical technology for the development of spacecraft products. It is an important factor that determines the performance, quality, reliability, and cost of spacecraft products. It encompasses the entire lifecycle of spacecraft products, including design, development, processing, production, testing, and maintenance. The performance and level of space materials technology are important indicators of the level of spacecraft technological development. The damage and in-orbit failures of spacecraft are the results of damage to spacecraft material devices and structures. Compared to ordinary materials on the ground, space materials experience various types of damage with unique characteristics due to the constantly changing and complex space environment.

1.2.1 Introduction to Spacecraft Materials

Spacecraft materials refer to the materials used in the manufacture of spacecraft vehicles. Spacecraft typically require all four major categories of materials: metallic materials, inorganic non-metallic materials, organic polymer materials, and composite materials. Metallic materials were the earliest extensively used traditional spacecraft materials. Aluminum alloys, magnesium alloys, titanium alloys, nickel-molybdenum alloys, and other metallic materials are widely applied in the spacecraft field due to their low density, long service life, excellent corrosion resistance, and high-temperature resistance. Inorganic non-metallic materials include glass, ceramics, oxides, carbon materials, etc. They offer high hardness, heat resistance, and resistance to chemical, acid-base, and high-temperature corrosion, and can be used to manufacture spacecraft shells, insulation boards, solar panels, cables, motors, and other electronic components. Organic polymer materials are important supporting materials for the spacecraft industry. They mainly include rubber, engineering plastics, coatings, synthetic resins, adhesives, and sealants, and can be used for spacecraft surface coatings, foam fillers, elastic vibration-absorbing structures, etc. Advanced composite materials consist of high-performance reinforcing materials and matrix materials. They have high specific strength, high specific stiffness, and good designability; they can effectively reduce the structural weight of spacecraft and increase payload, and some also serve special functional uses. They are the focus and direction of research on new spacecraft materials, and mainly include resin-based composites, metal-based composites, ceramic-based composites, and carbon-carbon composites.

According to their usage, spacecraft materials can be divided into structural materials and functional materials. Structural materials are mainly used to manufacture the various structural components of a vehicle, such as vehicle bodies, satellite load-bearing cylinders, and engine casings. Their main function is to withstand various loads, including static loads caused by self-weight and dynamic loads generated during flight. Functional materials mainly refer to materials with special optical, acoustic, electrical, magnetic, and thermal functions, for example, the electronic information materials involved in spacecraft measurement and control systems (including functional materials for microelectronic, optoelectronic, and sensor devices), as well as the thermal protection materials on the surface of spacecraft vehicles. The overall development trend of structural materials is toward lightweight, high strength, high modulus, high-temperature resistance, and low cost, while functional materials are developing towards high performance, multi-functionality, and multiple varieties and specifications.


1.2.2 Typical Spacecraft Material Damage

Spacecraft materials constitute the basic units of spacecraft vehicles, and the effect of the space environment on a vehicle ultimately stems from the interaction between the space environment and its materials. Under the influence of one or more space environments, spacecraft materials experience performance degradation or failure, leading to loss of functionality or in-orbit malfunctions of the vehicle and resulting in mission termination or a shortened lifespan. The main types of damage defects in spacecraft materials caused by space environment effects include cracks, delamination, perforation, impact craters, ablation, and others; a given instance of damage may involve a single type or multiple types of defects simultaneously. For spacecraft vehicles, coupled damage resulting from special space environmental effects (such as hypervelocity impacts from space debris or high-temperature thermal ablation) can lead to catastrophic accidents.

In February 2003, the Space Shuttle Columbia disintegrated during re-entry, resulting in the loss of all seven crew members' lives. According to the investigation conducted by the National Aeronautics and Space Administration (NASA), 82 s after ignition the leading edge of the left wing, made of C/C composite material, was impacted by foam debris, producing visible cracks, as shown in Fig. 1.2. During re-entry into the Earth's atmosphere, aerodynamic heating at temperatures as high as 1400 °C entered the wing through the cracks in the C/C composite material, directly melting the internal aluminum structures, causing loss of control, destruction of the left wing, and the explosive disintegration of the entire vehicle [11]. The Columbia accident highlighted that although C/C composite materials have excellent thermal properties in high-temperature environments, impact damage can lead to their rapid oxidation and severe performance degradation at high temperature, even resulting in structural failure of the spacecraft.

Fig. 1.2 The left wing of the Columbia Shuttle was impacted by foam debris

To study the effects of the space environment on long-term in-orbit spacecraft, NASA designed the Long Duration Exposure Facility (LDEF), as shown in Fig. 1.3. The LDEF was placed in low-Earth orbit (LEO) by the space shuttle Challenger in April 1984 and retrieved by the space shuttle Columbia in January 1990. LDEF was a 14-faced (i.e., a 12-sided cylinder and two ends), gravity-stabilized spacecraft that hosted 57 individual scientific experiments. Several of these experiments were designed to characterize various aspects of the meteoroid and orbital-debris environment during the nominal nine-month mission. However, as a result of LDEF's unexpectedly long exposure time (5.7 years) and the heightened awareness of the man-made debris collisional threat, it was decided to utilize the entire spacecraft as a meteoroid and orbital-debris detector, and the Meteoroid and Debris Special Interest Group (M&D SIG) was organized to this end. Because of LDEF's gravity-gradient-stabilized orbital attitude (the same general surface pointed into the velocity vector during the entire mission), its large exposed surface area (130 m²) provided a unique source of information concerning the LEO particulate environment and the associated directionality effects for both natural and man-made particles [12, 13], as shown in Fig. 1.4.

Based on extensive in-orbit and ground test research, it has been found that the damage resulting from high-velocity impacts of small space debris on spacecraft materials takes diverse forms, as shown in Figs. 1.5, 1.6, 1.7, 1.8, 1.9 and 1.10. Such impacts can cause not only surface damage such as impact craters, perforations, and foreign-object embedments but also invisible internal damage such as cracks, delamination, debonding, and peeling [14]. Compared to metallic materials, composite materials are composed of a matrix and a reinforcing phase, as shown in Fig. 1.11. Because the internal structure of composite materials makes them anisotropic, their damage characteristics and pit morphology under hypervelocity impact (HVI) differ significantly from those of metallic materials: in addition to impact craters, other types of damage such as cracks, delamination, and debonding coexist [15].


Fig. 1.3 LDEF designed by NASA was left in low Earth orbit for 5.7 years before being retrieved by space shuttle Columbia in January 1990. Credit NASA (Langley, Image No. EL-1994-00111)

In some special structures, the situation can be even more complex, with multiple damage defects densely distributed. For example, in the commonly used Whipple shielding configuration of spacecraft, which consists of two or more layers of thin plates, the numerous secondary micro-fragments generated by the hypervelocity impact of space debris on the front plate produce a dense cluster of small impact craters on the rear plate [16], as shown in Fig. 1.12. The characteristics of damage in spacecraft materials are primarily manifested in the following aspects:

(1) The damage morphology and defect features caused by complex space environments differ from traditional damage. They are characterized chiefly by the diversity of damage types, where different types of damage can exist separately or simultaneously.
(2) The depth of damage also varies. Damage can occur on the material surface or within the material, and in some special cases damage at different depths can coexist.
(3) The use of composite materials in spacecraft is increasingly widespread. Owing to the inherent heterogeneity and pronounced anisotropy of composite materials, their damage modes and forms are more complex.
(4) Spacecraft materials and components have large dimensions. Damage can be either dispersed over a large area or concentrated in specific regions.


Fig. 1.4 A close-up view of a panel from the LDEF spacecraft. Credit NASA JSC

Fig. 1.5 HVI damage modes in aluminum: craters in semi-infinite targets

It is evident that spacecraft materials face demanding requirements in extreme space environments. It is therefore necessary not only to strictly control product quality during production but also to carry out damage detection and evaluation throughout use and maintenance. Visual inspection, classification, identification, and quantification of internal quality and of complex damage defects in spacecraft materials are important directions for the development of today's NDT technology.

Fig. 1.6 HVI damage modes in aluminum: attached spall

Fig. 1.7 HVI damage modes in aluminum: detached spall

Fig. 1.8 HVI damage modes in aluminum: complete penetration or perforation of the target


Fig. 1.9 The HVI perforation damage on aluminum alloy material

Fig. 1.10 The HVI perforation damage on ceramic material


Fig. 1.11 Picture of HVI damage discovered on STS-129 wing leading edge left-hand panel

Fig. 1.12 Dense impact damage formed on the rear wall of the Whipple shielding structure due to the hypervelocity impact of the secondary fragment cloud

1.3 Overview of Infrared Thermographic NDT Technology

Infrared thermographic NDT technology is widely used in the aerospace field and has played an important role in the production, service, maintenance, and support of spacecraft. This section introduces the basic concepts of infrared thermographic NDT technology, the current state of research in China and abroad, and its applications in the aerospace field.


1.3.1 Basic Concepts of Infrared Thermographic NDT Technology

Infrared thermographic NDT (also known as infrared nondestructive testing or infrared thermal wave detection) is a digital nondestructive testing technology under active development internationally, based on the characteristics of infrared radiation. It exploits the different thermal radiation properties of materials with different structures or compositions to detect non-uniformity or anomalies on the surface and inside of materials, combining optoelectronic imaging, computing, and image processing. It can detect cracks, delamination, and other defects in metals, non-metals, and composite materials, and offers the advantages of non-contact operation, a large detection area, fast speed, and online detection [17].

The thermal image sequence obtained by the IR camera fully records the temperature and energy response characteristics of material defects in the spatial and temporal dimensions, enabling the inspector to accurately judge the temperature distribution and its change on the object surface. The resulting defect detection data is information-rich: it can reveal subtle thermal state changes in equipment, accurately reflect heat generation inside and outside the measured object, and is very effective in detecting early defects and hidden dangers [18]. More importantly, through data acquisition and feature extraction algorithms, visual detection, classification, identification, and quantification of the internal quality and complex damage defects of composite materials can be achieved, while avoiding interference from changes in ambient temperature and from uneven emissivity of the specimen surface. For in-situ visual NDT of large-scale spacecraft specimens, it has enormous application potential. Current research hotspots in infrared thermography focus on the thermal conduction mechanism and numerical simulation of defective materials, processing and analysis of infrared image sequences, quantitative identification of defects, and extensions of its applications.

Infrared thermographic NDT has played a significant role in the aerospace field. Since 1992, it has been used to nondestructively test for delamination in Atlas launch vehicle composite materials and in the heat-resistant protective layer of space shuttles. U.S. government agencies and large companies attach great importance to promoting and applying this technology: NASA, for example, has applied it to detect surface defects of space shuttles. TWI developed the first commercial flash-lamp pulsed thermography product, and the portable infrared thermography equipment it designed and manufactured offers small size, light weight, and high sensitivity. After the Columbia disaster in 2003, NASA paid special attention to NDT methods for the composite materials used in space shuttles and, after practical comparisons, ultimately chose the infrared thermography NDT method. Meanwhile, the future direction of NDT technology proposed by Boeing clearly indicates that fast, accurate, large-scale, multi-technology-integrated, intelligent, automated, and visualized detection devices will be the development trend of NDT technology.


Compact infrared thermography detectors can quickly detect major defects such as delamination over wide areas of composite materials, in line with the trend of future field-testing technology.

1.3.2 Basic Principles of Infrared Thermographic NDT Technology

Infrared radiation (IR) is electromagnetic radiation with wavelengths ranging from 0.75 μm to 1 mm, lying between microwave and visible light in the electromagnetic spectrum. It exists in every corner of the natural world: theoretically, all objects with a temperature above absolute zero emit infrared radiation, and the amount of radiation increases with the object's temperature. Depending on the wavelength, IR is usually divided into four bands: near-infrared (0.76–3 μm), mid-infrared (3–6 μm), far-infrared (6–15 μm), and extreme infrared (15–1000 μm). Infrared thermography is a technology that converts the infrared radiation pattern reflected or emitted by an object into a visible image [19]. According to the Stefan-Boltzmann law, the total power radiated per unit area $j^*$ by a body (the radiant exitance, or energy flux density) is proportional to the fourth power of its thermodynamic (absolute) temperature $T$:

$$j^* = \varepsilon \sigma T^4 \quad (1.1)$$

In the equation, $\sigma$ is the Stefan-Boltzmann constant ($5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}$) and $\varepsilon$ is the emissivity of the object, which depends on the material, its surface characteristics, and temperature. Under the same temperature conditions, even slight differences in material and surface characteristics lead to differences in emissivity $\varepsilon$, and hence to differences in radiation intensity across the measured object. Although the human eye cannot observe infrared radiation directly, this difference can be amplified by a signal processing system and displayed on a monitor, producing a thermal image of the measured object.
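To make Eq. (1.1) concrete, the short sketch below evaluates the radiant exitance for two surface patches at the same temperature; the 5% emissivity difference is an illustrative assumption, chosen only to show how emissivity variation alone produces thermographic contrast.

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(temperature_k, emissivity=1.0):
    """Total power radiated per unit area, j* = eps * sigma * T^4 (Eq. 1.1)."""
    return emissivity * SIGMA * np.asarray(temperature_k) ** 4

# Two patches at the same temperature (300 K) whose emissivities differ by 5%
# radiate different power, which the IR camera registers as image contrast.
print(radiant_exitance(300.0, emissivity=0.95))  # ~436.3 W/m^2
print(radiant_exitance(300.0, emissivity=0.90))  # ~413.4 W/m^2
```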


The principle of defect detection with infrared thermography is that when the physical structure, material composition, or continuity of the tested component changes (i.e., where damage defects exist), the corresponding thermophysical properties become non-uniform, so the thermal wave is enhanced or attenuated in the affected area during propagation, altering the thermal distribution of the entire test sample. Infrared cameras capture these temperature changes and convert them into a sequence of infrared thermal images for defect detection.

Infrared thermography detection can be divided into two categories: passive and active. Passive infrared thermography refers to infrared temperature measurement during the thermal exchange between the tested target and the environment when the target's temperature differs from the ambient temperature. Internal defects change the thermal characteristics of the target, and this change appears as a temperature change on its surface, enabling passive detection. Passive infrared temperature measurement is mainly used for online inspection of equipment and components. Active detection involves applying thermal stimulation, steady-state or transient, to the tested target; infrared temperature measurement can be carried out during or after the stimulation, depending on the detection situation. Active infrared thermography can be further divided into one-sided and two-sided methods according to the relative positions of the excitation source and the camera. In the one-sided method, heating and infrared temperature measurement take place on the same side of the tested object, with an IR camera recording the surface temperature of the workpiece after stimulation. In the two-sided method, heating is applied to one surface of the tested target while the temperature distribution is observed and recorded on the opposite surface.

Taking the active infrared thermographic NDT method as an example, when thermal excitation is applied to the tested sample, an incident heat wave forms on its surface. Part of the incident heat wave is reflected at the surface, while another part passes through the surface and propagates as a transmitted heat wave inside the specimen.

(1) When the surface and interior of the tested specimen have uniform thermophysical properties and contain no defects, the reflected wave forms a uniform temperature field on the specimen surface; and once enough energy has accumulated for the heat wave to penetrate the specimen, the propagating wave likewise produces a uniform surface temperature field.
(2) When the specimen is otherwise thermally homogeneous but contains an insulating defect, the transmitted heat wave is partially reflected by the defect. The reflected and transmitted waves superpose or interfere, raising the temperature of the defect area and creating a local gradient in the otherwise uniform surface temperature field.
(3) When the specimen contains a thermally conductive defect, more heat is conducted into the interior through the defect, so the heat accumulated in the defect area is lower than in the surrounding background, again changing the local surface temperature distribution.

As can be seen, regardless of the type of internal defect, infrared NDT is theoretically possible as long as there are differences in heat transfer and accumulation, and the greater the difference, the better the infrared thermographic NDT effect.
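The effect described in cases (2) and (3) can be illustrated with the standard one-dimensional model of pulsed thermography: after a Dirac heat pulse on a semi-infinite body, the sound-area surface temperature decays as $Q/(e\sqrt{\pi t})$, where $e$ is the thermal effusivity, and a defect at depth $L$ with thermal reflection coefficient $R$ adds image-source terms. The sketch below is a minimal illustration under these assumptions; all parameter values ($Q$, $e$, $R$, $L$, $\alpha$) are hypothetical.

```python
import numpy as np

def surface_temperature(t, Q=1.0, e=1600.0, R=0.5, L=1e-3, alpha=1e-7, n_terms=20):
    """Surface temperature rise after a Dirac heat pulse (1D model).

    Sound area: dT = Q / (e * sqrt(pi * t)).
    Above a defect at depth L, thermal waves reflected with coefficient R
    are superposed by the method of images.
    """
    t = np.asarray(t, dtype=float)
    sound = Q / (e * np.sqrt(np.pi * t))
    images = sum((R ** n) * np.exp(-(n * L) ** 2 / (alpha * t))
                 for n in range(1, n_terms + 1))
    return sound, sound * (1.0 + 2.0 * images)

t = np.linspace(0.01, 10.0, 500)          # seconds after the pulse
T_sound, T_defect = surface_temperature(t)
contrast = T_defect - T_sound             # peaks at a time governed by L**2/alpha
print("max contrast at t =", t[np.argmax(contrast)], "s")
```

Under this model, a shallower defect (smaller $L$) produces an earlier, stronger contrast peak, which is why observation time carries depth information.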


Fig. 1.13 Schematic diagram of the composition of an active infrared thermographic NDT system

1.3.3 Infrared Thermographic NDT System Composition

A complete infrared thermographic NDT system usually consists of a thermal excitation device, an IR camera, and a control system, as shown in Fig. 1.13.

1.3.3.1 Thermal Stimulation Device

Surface or internal defects may cause differences in the distribution of thermal radiation on the surface of an object, so collecting and analyzing the infrared thermal radiation from the surface makes it possible to judge the object's internal state. However, when the temperature difference between the object and its surroundings is small or essentially zero, or when the object has reached stable thermal equilibrium, detecting defects in this way is nearly impossible. To diagnose the internal state of an object from its surface thermal radiation, the surface temperature must therefore first be raised artificially so that the influence of the surrounding medium can be effectively overcome. Either a periodically varying heat source or a pulsed light source can serve this purpose. Common thermal excitation methods include heat lamp, ultrasound, electromagnetic, microwave, laser, and hot air flow excitation. Heat lamp excitation uses halogen lamps, infrared lamps, and other light sources to irradiate the surface of the test piece directly. It offers fast response, a large single-shot detection area, and a long working distance. However, it


also has disadvantages such as shallow detection depth and poor performance on deeper internal defects; it is suitable for fast, wide-range detection of surface and near-surface defects. Ultrasound excitation selectively heats closed crack defects and is not constrained by the shape of the test piece, but it can be affected by acoustic coupling conditions, and the vibration it generates may damage sensitive materials. Electromagnetic excitation is suitable for detecting surface and sub-surface defects of highly conductive materials and is likewise not limited by specimen shape, but the skin effect limits its detection depth. Microwave excitation has high detection efficiency and can stimulate deeper defects, but it is only suitable for non-metallic materials. Laser excitation offers high energy density and high sensitivity for small specimens or tiny defects, but the single-shot detection area is small, and high energy may induce local thermal stress. Heating the object under test with a pulsed light source is generally regarded as simple and effective, and it is currently the most widely used and researched method. In active infrared thermography, besides choosing the correct detection method, it is important to bring out the defects of the tested specimen; the choice of heat stimulation source has a significant impact on the experiment, and a poor choice can cause the experiment to fail and defective specimens to go undetected. What is certain is that a stimulation source with uniform heat distribution and sufficient energy benefits active infrared thermography detection.

1.3.3.2 IR Camera

An IR camera uses an optical imaging lens and an infrared detector (together, in older designs, with an opto-mechanical scanning system, which advanced focal-plane technology renders unnecessary) to receive the distribution of infrared radiation energy emitted by the measured object and project it onto the photosensitive elements of the detector. In a scanning design, an opto-mechanical scanning mechanism between the optical system and the detector (absent in a focal-plane thermal imager) scans the infrared thermal image of the object and focuses it onto a single-element detector. The detector converts the infrared radiation energy into an electrical signal, which is amplified, processed, and converted or transmitted as a standard video signal, then displayed on a screen or monitor as an infrared thermal image.

The main parameters of an IR camera include temperature resolution, spatial resolution, temperature measurement range, and field of view. Temperature resolution characterizes the temperature measurement accuracy of an IR camera, and the infrared detector is the core component that determines it. The detector used in an IR camera is not the charge-coupled device common in visible-light cameras but a focal plane array (FPA) with micron-sized pixels made of materials sensitive to infrared wavelengths. The resolution of an FPA ranges from about


160 × 120 pixels to as high as 1024 × 1024 pixels. The resulting thermal image corresponds to the thermal distribution field on the object's surface and essentially maps the infrared radiation emitted by different parts of the object under test. Because the signal is very weak, an infrared thermal image lacks the depth and sense of hierarchy of a visible-light image.

FPA detector technology falls into two categories: thermal detectors and quantum detectors. A common thermal detector is the uncooled microbolometer, made of metal or semiconductor materials. These detectors usually cost less than quantum detectors and have a broader spectral response, but because they respond to incident radiation energy thermally, their speed and sensitivity are lower. Quantum detectors are made of materials such as layered gallium arsenide/aluminum gallium arsenide, indium antimonide (InSb), indium gallium arsenide (InGaAs), platinum silicide (PtSi), mercury cadmium telluride (MCT), and quantum well infrared photodetectors (QWIP). Their operating principle is based on state changes of electrons in the crystal structure in response to incident photons. Quantum detectors are generally faster and more sensitive than thermal detectors, but they require cooling, sometimes with liquid nitrogen or small Stirling-cycle coolers. With the continuous advance of technology, IR cameras are moving toward high resolution, high sensitivity, low cost, and miniaturization, providing technical support for the rapid development of infrared thermographic NDT technology.


1.3.3.3 Control and Processing System

For a given excitation source, such as optical excitation, thermography can be classified by the excitation signal into pulsed thermography (PT), lock-in thermography (LT), step heating thermography (ST), pulsed phase thermography (PPT), frequency-modulated thermal wave imaging (FMTWI), and Barker-coded thermal wave imaging (BCTWI). Generating these excitation signals and acquiring the infrared thermal imaging data requires suitable control and processing modules. The control module synchronizes the operation of the thermal imager and the thermal excitation device, ensuring that the IR camera records the entire heating/cooling process stably. The data processing module consists of a computer and signal processing software for collecting, storing, and displaying the data acquired by the thermal imager, together with data processing algorithms for analyzing the infrared thermal imaging data. Generally speaking, the raw infrared thermal imaging data acquired by the IR camera is far inferior to a visible-light image. Infrared images have


characteristics such as poor contrast, blurred edges, indistinct temperature-difference ranges, and strong background noise. Processing and analysis are therefore necessary to meet the requirements of defect quantification and damage classification.
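As one concrete example of such processing, pulsed phase thermography (PPT) applies an FFT along the time axis of each pixel's response and works with the phase images, which are comparatively insensitive to non-uniform heating and emissivity variation. A minimal sketch (the sequence shape and the choice of frequency bin are illustrative assumptions):

```python
import numpy as np

def ppt_phase_images(seq):
    """Pulsed phase thermography: FFT each pixel's TTR along the time axis.

    seq: thermal image sequence of shape (Z, R, C) -- Z frames of R x C pixels.
    Returns phase maps of shape (Z, R, C); lower frequency bins probe deeper.
    """
    spectrum = np.fft.fft(seq, axis=0)
    return np.angle(spectrum)

# Example with a synthetic 200-frame, 64 x 64 sequence
seq = np.random.rand(200, 64, 64)
phase = ppt_phase_images(seq)
phase_map = phase[1]  # first non-DC bin, often the most revealing in practice
```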

References

1. Tribble, A. C.: The Space Environment: Implications for Spacecraft Design, revised and expanded edition. Princeton University Press (2003)
2. Silverman, E. M.: Space Environmental Effects on Spacecraft: LEO Materials Selection Guide, Part 1 (1995)
3. Norberg, C.: The Space Environment. In: Human Spaceflight and Exploration. Springer, Berlin, Heidelberg (2013)
4. McKnight, D.: Examination of spacecraft anomalies provides insight into complex space environment. Acta Astronautica 158, 172–177 (2019)
5. Thirsk, R., Kuipers, A., Mukai, C., Williams, D.: The space-flight environment: the International Space Station and beyond. CMAJ 180(12), 1216–1220 (2009)
6. Klinkrad, H.: Space Debris: Models and Risk Analysis. Springer Science & Business Media (2006)
7. Bao, W., Yin, C., Huang, X., Yi, W., Dadras, S.: Artificial intelligence in impact damage evaluation of space debris for spacecraft. Frontiers of Information Technology & Electronic Engineering 23(4), 511–514 (2022)
8. Pelton, J. N.: The space debris threat and the Kessler syndrome. In: Space Debris and Other Threats from Outer Space, pp. 17–23 (2013)
9. Novikov, L. S., Mileev, V. N., Voronina, E. N., Galanina, L. I., Makletsov, A. A., Sinolits, V. V.: Radiation effects on spacecraft materials. Journal of Surface Investigation: X-ray, Synchrotron and Neutron Techniques 3, 199–214 (2009)
10. Banks, B. A., DeGroh, K. K., Miller, S. K.: Low Earth orbital atomic oxygen interactions with spacecraft materials. MRS Online Proceedings Library (OPL) 851, NN8.1 (2004)
11. Cabbage, M., Harwood, W.: Comm Check...: The Final Flight of Shuttle Columbia. Simon and Schuster (2009)
12. Whitaker, A., Gregory, J.: LDEF Materials Results for Spacecraft Applications. National Aeronautics and Space Administration, Office of Management, Scientific and Technical Information Program (1994)
13. Stein, B. A.: LDEF materials: an overview of the interim findings. In: LDEF Materials Workshop 1991, Part 1 (1992)
14. Christiansen, E. L.: Handbook for Designing MMOD Protection. NASA/TM-2009-214785, 35 (2009)
15. Huang, X., Yin, C., Huang, J., Wen, X., Zhao, Z., Wu, J., Liu, S.: Hypervelocity impact of TiB2-based composites as front bumpers for space shield applications. Materials & Design 97, 473–482 (2016)
16. Huang, X., Yin, C., Ru, H., Zhao, S., Deng, Y., Guo, Y., Liu, S.: Hypervelocity impact damage behavior of B4C/Al composite for MMOD shielding application. Materials & Design 186, 108323 (2020)
17. Vavilov, V., Burleigh, D.: Infrared Thermography and Thermal Nondestructive Testing. Springer International Publishing, New York (2020)
18. Yin, C., Huang, X., Cao, J., Dadras, S., Shi, A.: Infrared feature extraction and prediction method based on dynamic multi-objective optimization for space debris impact damages inspection. Journal of the Franklin Institute 358(18), 10165–10192 (2021)
19. Maldague, X. P. V.: Infrared Methodology and Technology. CRC Press (2023)

Chapter 2

Infrared Feature Extraction and Damage Reconstruction

2.1 Introduction

After a period of thermal excitation of a specimen, changes in the temperature field of its surface are recorded in successive infrared thermal images, forming an infrared thermal image sequence of multiple frames. The presence of a defect affects the thermal response of a local area, and the physical information specific to the damage is embedded in the infrared thermal image sequence. It is therefore essential to extract the features corresponding to the defect from the sequence. Image reconstruction based on the thermal response of the defect provides an intuitive image bearing the defect's features and is the basis for a series of subsequent processes [1, 2].

With the data acquisition platform, an infrared thermal image sequence can be obtained, represented as a three-dimensional matrix block $ICIS = \{T_1^{R \times C}, \ldots, T_Z^{R \times C}\}$, where $R \times C$ denotes the size of a single infrared thermal image in the sequence and $Z$ denotes the number of image frames it contains. The infrared thermal image sequence records both the heat absorption process while the surface is excited by external thermal excitation and the heat release process after the excitation stops. For a particular pixel, the change of temperature over time is recorded at that location, and the curve of temperature versus time for each pixel is defined as the transient thermal response (TTR) curve [3, 4]. The infrared thermal image of each frame is a continuously varying two-dimensional temperature field, in which defects and boundaries are usually blurred and the signal-to-noise ratio is low. The TTR captures the heat transfer properties of each region over time and can serve as a feature carrier for identifying subtle structural defects in different regions.

As shown in Fig. 2.1, the infrared thermal image sequence consists of multiple frames; on each frame, the shaded area framed by the dashed line indicates the damage area and the rest indicates the background area. Three pixel points are taken



Fig. 2.1 Infrared thermal image sequence and TTR

at different locations in the background region, and two pixel points are taken in the damage region. The change of temperature versus time is examined at each of these points to obtain five TTR curves: $TTR_1$, $TTR_2$ and $TTR_3$ are taken from the non-damaged background region, while $TTR_4$ and $TTR_5$ are taken from the damaged region. Plotting the five curves on the same axes makes clear that TTR curves taken from the same region are similar, while TTR curves taken from different regions differ. This results from fundamental differences in the physical structure of the different regions, which are inherent to the inspected workpiece and do not vary with external conditions during infrared thermal wave inspection. Because the TTR curve reflects this fundamental difference, it can be used to distinguish between different areas of damage.

The TTR curve records the temperature change over time at one pixel of the infrared thermal image sequence throughout the inspection. From the transient thermal radiation values within the overall temperature history, physical properties of the thermal response curve at the corresponding location can be extracted, for example the energy of the curve or its rates of rise and fall. These physical properties can then serve as quantitative characteristics to differentiate curves. The main physical properties of transient thermal response curves are as follows:

(1) Energy ($E_i$). The energy of a curve is usually expressed as the area under the curve, and curves at different positions contain different energies. The energy is calculated as the square of the L2-norm:

$$E_i = \|J_i\|_2^2 = j_{i,1}^2 + j_{i,2}^2 + \cdots + j_{i,N_t}^2 \quad (2.1)$$

where $J_i = (j_{i,1}, j_{i,2}, \ldots, j_{i,N_t})$ denotes the $i$-th TTR and $N_t$ denotes the number of frames in the infrared thermal image sequence.


(2) Rising rate of temperature ($S_{\mathrm{rise}}^i$). The temperature change per unit time is defined as the temperature change rate. It is an important tool for distinguishing TTR curves from different damage areas and reflects, to a certain extent, the structural properties of the workpiece: the change rates of TTR curves in damaged and non-damaged areas differ markedly during both heat absorption and heat release. The endothermic process starts at frame $0$ and ends at frame $t$. Its temperature change rate can be expressed by the slope of the TTR curve over the endothermic process, i.e. by $\tan\theta_i^{\mathrm{rise}}$, where $\theta_i^{\mathrm{rise}}$ is the angle between the horizontal direction and the line connecting the initial and final endothermic moments. Because the angle itself cannot be extracted from the data, the rate is calculated as

$$S_{\mathrm{rise}}^i = \tan\theta_i^{\mathrm{rise}} = \left|\frac{j_{i,t} - j_{i,0}}{t}\right| \quad (2.2)$$

where $j_{i,t}$ is the temperature at frame $t$ and $j_{i,0}$ is the initial frame temperature.

(3) Temperature change rate in the exothermic process ($S_{\mathrm{fall}}^i$). The exothermic rate likewise measures the temperature change, over frames $t$ to $N_t$. It can be calculated from $\tan\theta_i^{\mathrm{fall}}$, where $\theta_i^{\mathrm{fall}}$ is the angle between the horizontal axis and the line connecting the initial and final exothermic amplitudes. Since the angle is inconvenient to measure, it is calculated as

$$S_{\mathrm{fall}}^i = \tan\theta_i^{\mathrm{fall}} = \left|\frac{j_{i,N_t} - j_{i,t}}{N_t - t}\right| \quad (2.3)$$

(4) Average temperature ($T_{\mathrm{mean}}^i$). The average temperature measures the temperature over the whole time period; TTRs located in different areas have different averages:

$$T_{\mathrm{mean}}^i = \frac{\sum_{t=1}^{N_t} j_{i,t}}{N_t} \quad (2.4)$$

(5) Peak temperature ($T_{\max}^i$). The peak temperature measures the maximum temperature contained in a transient thermal response:

$$T_{\max}^i = \max_{t=1,\ldots,N_t} j_{i,t} \quad (2.5)$$
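A minimal sketch computing the five properties of Eqs. (2.1)–(2.5) for a single TTR vector; the frame index separating heat absorption from heat release is assumed to be known:

```python
import numpy as np

def ttr_features(j, t_split):
    """Physical properties of one TTR j (length N_t), Eqs. (2.1)-(2.5).

    t_split: frame index separating the endothermic and exothermic phases.
    """
    j = np.asarray(j, dtype=float)
    n_t = len(j)
    return {
        "energy": np.sum(j ** 2),                      # (2.1) squared L2-norm
        "s_rise": abs((j[t_split] - j[0]) / t_split),  # (2.2) heating slope
        "s_fall": abs((j[-1] - j[t_split]) / (n_t - 1 - t_split)),  # (2.3)
        "t_mean": j.mean(),                            # (2.4)
        "t_max":  j.max(),                             # (2.5)
    }

# Example: a synthetic TTR that heats for 50 frames and then cools for 150
j = np.concatenate([np.linspace(20, 80, 50), np.linspace(80, 25, 150)])
print(ttr_features(j, t_split=49))
```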


The infrared thermal image sequence records the heat transfer process under excitation, and a given sequence contains a large number of transient thermal response curves with similar variations. A direct feature search over the entire thermal response process would be inefficient, so screening and removal of redundant information from the infrared data is a necessary step. Because of the enormous kinetic energy involved in hypervelocity flight, the damage situation can be complex, and multiple types of defects may occur simultaneously. Classifying and extracting IR data for different damage types and different damage areas helps us learn the damage patterns of specific impact scenarios in a more targeted manner, and thus supports the design of protective structures and the assessment of damage severity. Thermal image reconstruction based on representative TTRs of each damage type yields reconstructed images of surface and sub-surface damage, facilitating visual assessment by the human eye, and the reconstructed thermal images enhance the imaging of the corresponding defects.

2.2 Variable-Step-Based Pre-processing of Infrared Thermographic Image Sequence

Because the infrared thermal image sequence contains a large amount of information about both damaged and non-damaged areas, direct processing of the sequence data is inefficient. Redundant information removal and core defect feature extraction are therefore necessary.

2.2.1 Data Block Division Based on Thermal Extremes

From the physical properties of the TTR curves presented above, it is clear that TTR curves are similar within the same type of region and vary greatly between different damage regions. If the most representative TTR curve in the infrared thermal image sequence can be found on the basis of one of these physical properties, it will aid defect analysis. A feature point is then located on this curve, and its location information is used to compute correlation indicators with nearby points so as to classify the rows and columns in which they lie into different intervals. Viewed on the two-dimensional plane, this means the frame containing the feature point is divided into blocks by row and column intervals. The block division groups information from similar regions into the same block as far as possible, and information from regions with different damage into different blocks.


Fig. 2.2 Block division based on temperature peaks

In the subsequent processing operations, typical TTR curves will be searched for within these blocks. By extracting these TTR curves and storing them in a data set, the data set can be considered to represent the overall thermal change characteristics effectively while removing the redundant data of the original infrared thermal image sequence. The global temperature peak is the global temperature maximum of the infrared thermal image sequence and is the most general expression of typical defects during the temperature change; block division can therefore be based on the global temperature peak as the feature point. The block division method based on temperature peaks is shown in Fig. 2.2.

In an infrared thermal image sequence, the three-dimensional matrix forms a ternary function with respect to temperature, so the global temperature peak is determined by the position of the frame together with the position of the pixel within the frame. Once the global temperature peak position has been determined, the data block division is carried out. Row intervals can be divided based on the correlation metric of each TTR curve. In this section, the correlation metric is the Pearson correlation coefficient ($PCC$) [5]:

$$PCC_{A,B} = \frac{\mathrm{Cov}(A, B)}{\sqrt{\mathrm{var}(A)\,\mathrm{var}(B)}} \quad (2.6)$$

herein, $\mathrm{Cov}(A, B)$ is the covariance of vectors $A$ and $B$, and $\mathrm{var}(A)$ is the variance of vector $A$.
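Eq. (2.6) is straightforward to implement; NumPy's built-in np.corrcoef returns the same value and serves as a cross-check:

```python
import numpy as np

def pcc(a, b):
    """Pearson correlation coefficient between two TTR vectors, Eq. (2.6)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cov = np.mean((a - a.mean()) * (b - b.mean()))
    return cov / np.sqrt(a.var() * b.var())

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([2.0, 4.1, 5.9, 8.2])
print(pcc(a, b))                 # close to 1: strongly positively correlated
print(np.corrcoef(a, b)[0, 1])   # NumPy cross-check gives the same value
```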


The Pearson correlation coefficient expresses the degree of correlation between data and lies in the range $PCC_{A,B} \in [-1, 1]$. When $PCC_{A,B} > 0$, data $A$ and $B$ are positively correlated; when $PCC_{A,B} < 0$, they are negatively correlated; when $PCC_{A,B} = 0$, they are uncorrelated; and when $PCC_{A,B} = \pm 1$, they are perfectly linearly (positively or negatively) correlated.

In the frame containing the global temperature peak, pixels are searched along the peak's row to the left and right of the peak point, and their correlation with the peak is calculated. A correlation threshold is set, and each pixel's correlation is compared against it. When the correlation falls below the threshold, the thermal response characteristics of that pixel can be assumed to differ greatly from those of the global temperature peak, and the distance between that point and the peak is noted as a column step. Searching in both directions gives two column steps; to retain as much defect information as possible and preserve the maximum defect area, the operation $\max(a, b)$ is applied to keep the larger of the two. Similarly, pixels are searched along the peak's column in the directions above and below the peak point; a threshold yields the row steps in the two directions, and the maximum of these is taken as the row step. On this basis, the row and column steps are used to divide the rows and columns around the global temperature peak, after which the frame is divided into pixel blocks. The specific algorithm steps are as follows:

Step 1: Obtain an initial infrared thermal image sequence $ICIS(i, j, t)$, where $i, j$ denote the $i$-th row and $j$-th column of the 3D matrix and $t$ denotes the frame number of the infrared thermal image.

Step 2: Pick the largest temperature value from the matrix $ICIS$, i.e. the global temperature peak:

$$ICIS(I_o, J_o, T_o) = \max_{\substack{i=1,2,\ldots,R \\ j=1,2,\ldots,C \\ t=1,2,\ldots,Z}} ICIS(i, j, t) \quad (2.7)$$

As shown in Fig. 2.3, $I_o$, $J_o$ and $T_o$ are the row, column and frame positions of the maximum value, respectively.

Step 3: Set the threshold $SS_C$ for the column interval step and the threshold $SS_R$ for the row interval step.

Step 4: In row $I_o$, where the global temperature maximum $ICIS(I_o, J_o, T_o)$ is located, move to the next pixel to the left, $ICIS(I_o, J_o - 1, T_o)$, and calculate the $PCC$ between the TTR curve at this point and the TTR curve at the global temperature peak. If $PCC \ge SS_C$, continue to the next pixel, $ICIS(I_o, J_o - 2, T_o)$, and so on, until a point is found to the left whose TTR curve satisfies $PCC < SS_C$ with respect to the TTR curve of the global temperature peak.


Fig. 2.3 Acquisition of peak temperature point locations

The TTR curve at that pixel is then considered uncorrelated with the TTR curve of the global temperature peak; that is, the TTR curves reflect different physical properties in the row direction, typical of two different damage regions. The number of pixels satisfying $PCC \ge SS_C$ during this leftward traversal is counted, as shown in Fig. 2.4, and noted as the first interval column step $S_{C1}$.


Fig. 2.4 Find the column step in the row where the peak point is located


Step 5: Search the next position to the right, $ICIS(I_o, J_o + 1, T_o)$, in row $I_o$, and calculate the $PCC$ between the TTR curve at this point and the TTR curve at the global temperature peak. Repeat the above process to the right, as shown in Fig. 2.4, to obtain the second interval column step $S_{C2}$. The column step is taken as $L_C = \max(S_{C1}, S_{C2})$.

Step 6: Similarly, set the threshold $SS_R$ for the row interval step and move upward to the next pixel, $ICIS(I_o - 1, J_o, T_o)$, in column $J_o$, where the global temperature maximum $ICIS(I_o, J_o, T_o)$ is located. Calculate the $PCC$ between this point and the global temperature peak. If $PCC \ge SS_R$, continue to the next pixel, $ICIS(I_o - 2, J_o, T_o)$, until an $n$-th point $ICIS(I_o - n, J_o, T_o)$ is found upward whose TTR satisfies $PCC < SS_R$ with respect to the peak's TTR curve; the TTR curve at that pixel is then considered uncorrelated with the peak's TTR curve. That is, the TTR curves reflect different physical properties in the column direction, typical of two different damage regions. The number of pixels satisfying $PCC \ge SS_R$ during this upward traversal is counted and noted as the first interval row step $S_{R1}$.

Step 7: Search the next position downward, $ICIS(I_o + 1, J_o, T_o)$, in the column, calculate the $PCC$ at this point, and repeat the above process, traversing downward to get the second interval row step $S_{R2}$. The row step is then $L_R = \max(S_{R1}, S_{R2})$.

Having obtained the row and column step sizes, the infrared thermal image sequence can be divided into different temperature blocks according to these step values, completing the segmentation of the sequence. The TTR curves within a given temperature block can be considered similar, while TTR curves in different blocks differ greatly. This operation effectively removes the redundant information inside the infrared thermogram, improves the signal-to-noise ratio, and aids subsequent processing.
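A compact sketch of Steps 1–7, reusing the pcc() helper above; ICIS is assumed to be stored as a NumPy array of shape (R, C, Z), and the thresholds are user-chosen:

```python
import numpy as np

def directed_step(icis, io, jo, d_row, d_col, threshold):
    """Count pixels whose TTR stays correlated (PCC >= threshold) with the
    peak TTR while walking from (io, jo) in direction (d_row, d_col)."""
    r, c, _ = icis.shape
    peak_ttr = icis[io, jo, :]
    step, i, j = 0, io + d_row, jo + d_col
    while 0 <= i < r and 0 <= j < c and pcc(icis[i, j, :], peak_ttr) >= threshold:
        step += 1
        i, j = i + d_row, j + d_col
    return step

def global_steps(icis, ssr, ssc):
    """Steps 1-7: locate the global temperature peak (Eq. 2.7), then take the
    maximum of the two directed searches along its row and its column."""
    io, jo, _ = np.unravel_index(np.argmax(icis), icis.shape)
    l_c = max(directed_step(icis, io, jo, 0, -1, ssc),   # left  -> S_C1
              directed_step(icis, io, jo, 0, +1, ssc))   # right -> S_C2
    l_r = max(directed_step(icis, io, jo, -1, 0, ssr),   # up    -> S_R1
              directed_step(icis, io, jo, +1, 0, ssr))   # down  -> S_R2
    return io, jo, l_r, l_c
```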

2.2.2 Various Regional Step Sizes Establishment

After the attribute-feature-based block segmentation of the thermal image sequence is complete, temperature blocks are obtained that are well segmented according to the different segmentation rules. It can be assumed that, after the processing described above, the TTR curves within a block are similar while the TTR curves of different blocks differ greatly; this removes redundant information within the IR thermogram and improves the signal-to-noise ratio. In this subsection, TTR curves that typically reflect the transient thermal information of the different regions within a block are extracted from each block.

For each pixel block $C_k$, the TTR curves of the individual pixels have a certain degree of similarity, so similar TTR curves can be grouped into one class for the purpose of de-redundancy.

Fig. 2.5 Establishment of the step size

The extraction of TTR curves within block $C_k$ uses row steps to stride across pixels, reducing computation time and increasing the sampling speed of TTR curves. When the TTR curve search using the row step finishes in the first column of a block, the block's column step is used to jump to the next column and continue the correlation search. As shown in Fig. 2.5, on the row interval corresponding to a block $k$, find the temperature maximum of that interval and calculate the Pearson correlation between the TTR curve of the temperature-maximum point and that of each neighbouring pixel in the same row within the interval. A correlation threshold $Q_{CL}^k$ is set to judge the correlation between the TTR curves of the pixels. The specific steps are:

Step 1: Find the maximum $S^k(R_o, C_o^k, :)$ in the row interval corresponding to block $C_k$.

Step 2: Compute the $PCC$ between $S^k(R_o, C_o^k, :)$ and $S^k(R_o, c, :)$, i.e. between the temperature maximum in the block and the other pixels in the same row, searching $c = 1, 2, \ldots, C_o^k - 1$ to the left and $c = C_o^k + 1, C_o^k + 2, \ldots, ZRC^k$ to the right, for a total of $ZRC^k - 1$ $PCC$ values.

Step 3: Traverse to the left until $PCC \le Q_{CL}^k$; count the number of pixels with $PCC > Q_{CL}^k$ and record it as step size $CL_{K1}$.


Fig. 2.6 Data blocks and steps

Step 4: Traverse to the right until $PCC \le Q_{CL}^k$; count the number of pixels with $PCC > Q_{CL}^k$ and record it as step size $CL_{K2}$.

Step 5: To retain the maximum defect feature, the column step size in the $k$-th block is $CL_K = \max(CL_{K1}, CL_{K2})$.

Through this operation, the algorithm obtains the column step $CL_K$ inside block $C^k$. A similar operation establishes the row steps, yielding the row step values for the different regions. The operation of sampling curves in pixel blocks thus uses variable row steps as well as variable column steps [3, 4]. In a certain block $k$ with row and column intervals of lengths $m$ and $n$ respectively, the similarity between the TTR curves of the pixels can be used to calculate the row and column steps for extraction. As shown in Fig. 2.6, in the row interval $C^k(I_o, j, :) \sim C^k(I_o, j + n, :)$ of block $k$, the point holding the maximum temperature value of the data block can be found, together with the frame containing it. In this frame, a pixel-by-pixel search is performed on each side, calculating the correlation between the TTR curve at each pixel and the TTR curve at the local temperature maximum of the row interval. With the correlation threshold set, the maximum of the column steps obtained in the two directions gives the column step $CL_k$ of this block. Similarly, in the column interval $C^k(i, J_o, :) \sim C^k(i + m, J_o, :)$ of block $k$, the corresponding row step $RL_k$ can be found.
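Continuing the previous sketch (it reuses directed_step() and NumPy), the same directed search can be run from each block's local temperature maximum; describing a block by row/column slices is an illustrative assumption:

```python
def block_steps(icis, block_rows, block_cols, q_threshold):
    """Within one pixel block (given as row/column slices), find the local
    temperature maximum and derive that block's row and column steps."""
    sub = icis[block_rows, :, :][:, block_cols, :]
    ro, co, _ = np.unravel_index(np.argmax(sub), sub.shape)
    io, jo = block_rows.start + ro, block_cols.start + co  # back to global indices
    cl_k = max(directed_step(icis, io, jo, 0, -1, q_threshold),
               directed_step(icis, io, jo, 0, +1, q_threshold))
    rl_k = max(directed_step(icis, io, jo, -1, 0, q_threshold),
               directed_step(icis, io, jo, +1, 0, q_threshold))
    return rl_k, cl_k

# Example: the block spanning rows 0..L_R and columns 0..L_C
# rl, cl = block_steps(icis, slice(0, l_r + 1), slice(0, l_c + 1), 0.9)
```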


2.2.3 Variable Step Redundant Information Removal

After the row and column step sizes of each block have been obtained, TTR curves are sampled across pixels within each block using those steps, as shown in Fig. 2.7. This method of obtaining variable-step sampling curves, which considers the correlation of the global temperature change curve within local blocks, allows the typical TTR information to be extracted effectively.

After obtaining the row step $RL_K$ and column step $CL_K$, the TTR curves corresponding to the pixels of the infrared thermal image sequence are filtered to remove redundant data according to a similarity threshold. In each block, the TTR curves are extracted at the resulting row and column steps to serve as the initial sample for the correlation test. A data set $R(:, g)$ is set up, and the TTR curve corresponding to the global maximum point is stored as $R(:, 1)$. The correlation of each pixel's TTR curve is then calculated, with the stored curves serving as the basis for filtering. The specific steps are:

Step 1: Compute the $PCC$ between a curve $C^k(i, j, :)$ in block $C^k$ and the stored curve $R(:, 1)$.

Step 2: Set the correlation threshold $Q_T^k$. If $PCC \le Q_T^k$, the correlation between the two TTR curves is considered small, the curve is not regarded as redundant information, and the TTR curve at this point is stored in $R(:, g)$. If $PCC > Q_T^k$, the correlation is considered high and the curve is treated as redundant information.

Step 3: Let $i = i + RL_K$ and compute the $PCC$ of the next TTR curve in this column against the curves stored in $R(:, g)$.


Fig. 2.7 Search for pixel points using step size


Step 4: If $i > M$, let $i = i - M$ and $j = j + CL_K$; that is, once a column has been processed, advance by the column step to the next column.

Step 5: If $j > N$, all calculations are complete. Thereby, all typical TTR curves in the infrared thermal image sequence have been extracted into the sampled dataset $R(:, g)$.
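A minimal sketch of Steps 1–5, reusing pcc() from above. Where the text leaves open exactly which stored curves a candidate is compared against, the sketch assumes a new TTR is kept only if it is weakly correlated with every curve already stored:

```python
import numpy as np

def remove_redundancy(icis, rl_k, cl_k, q_t):
    """Sample TTRs on the (row, column) step grid and keep only curves weakly
    correlated (PCC <= q_t) with all curves already stored (Sect. 2.2.3)."""
    r, c, _ = icis.shape
    io, jo, _ = np.unravel_index(np.argmax(icis), icis.shape)
    kept = [icis[io, jo, :]]                  # R(:,1): the global-peak TTR
    for j in range(0, c, cl_k):               # jump columns by the column step
        for i in range(0, r, rl_k):           # jump rows by the row step
            ttr = icis[i, j, :]
            if all(pcc(ttr, ref) <= q_t for ref in kept):
                kept.append(ttr)              # not redundant: store in R(:,g)
    return np.column_stack(kept)              # dataset R, one TTR per column
```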

2.3 Transient Thermal Responses Separation Based on Clustering

Clustering is a method of partitioning a dataset into classes or clusters according to a particular criterion, with the aim of making the data objects as similar as possible within the same class and as different as possible between classes. Applied to the object of this chapter, the redundancy-removed infrared thermal image sequence, the clustering algorithm separates TTR curve samples into two types, damaged and non-damaged regions. It can further perform multi-class classification on top of this binary classification, i.e. discriminate and cluster the different damage types [6–9].

2.3.1 GMM-Based Clustering of Transient Thermal Responses

To classify transient thermal response curves into different categories, the commonly used Euclidean distance can serve as the metric of the clustering algorithm. It works well when all features of the data have the same range of variation, but it has two drawbacks: the feature with the largest scale dominates the metric, and linear correlation between elements distorts the distance, so that the Euclidean distance always yields hyperspherical clusters. These drawbacks make it extremely easy to misclassify points at class boundaries when samples from different classes overlap and clusters within classes also overlap. In this case an effective clustering algorithm is required to separate the data in the overlapping regions, and probabilities are more informative and comparable than distances. The probabilities used in the Gaussian mixture model (GMM) are therefore expected to overcome the above limitations of hyperellipsoidal clusters and to normalise continuous features to a common range of variance. In this context, GMM is well suited to represent data drawn from different models [10].

GMM is a probabilistic model that represents the feature temperature change process by a probability density function formed as a weighted sum of multiple local Gaussian components. The aim is to distinguish the class of each data point when there is insufficient information to build an explicit model of the data set. We use GMM to model the temperature change


process in each class of feature region, and then use the resulting model as the basis for classification. GMM is mostly used for data clustering, and its component-based generation allows the classification to be automated. Our goal is to use GMM to classify the curves in the TTR dataset $R(:, g)$ that have different temperature variation characteristics; similar curves share similar rates of change, reflecting the temperature of the same characteristic region. Once the distribution of each class is known, it is only necessary to calculate the probability that each sample belongs to each cluster and assign it by probability magnitude. To obtain the probability of each TTR under each class distribution, we use Bayesian estimation, i.e. estimating the posterior probability from the predicted state. Bayes' rule is written as [11]

$$p_\mu\!\left(M^{(a)} = b \mid j^{(a)}\right) = \frac{P(M^{(a)} = b) \cdot p_\mu\!\left(j^{(a)} \mid M^{(a)} = b\right)}{p_\mu\!\left(j^{(a)}\right)} \quad (2.8)$$

where $p_\mu(j^{(a)} \mid M^{(a)} = b)$ is the class-conditional likelihood, the random variable $M^{(a)} \in \{1, 2, \ldots, K\}$ denotes the (unknown) Gaussian mixture component of sample $j^{(a)}$, and the marginal distribution of each $M$ is specified by the mixing coefficients, so that the prior probability is $P(M^{(a)} = b) = \omega_b$, $b = 1, 2, \ldots, K$. Using the previous posterior distribution and the state transition model, we obtain the marginal probability distribution of $j^{(a)}$, i.e.

$$p_\mu\!\left(j^{(a)}\right) = \sum_{b=1}^{K} \omega_b\, p\!\left(j^{(a)} \mid \mu_b, \Sigma_b\right) \quad (2.9)$$

The posterior is thus expressed as

$$p_\mu\!\left(M^{(a)} = b \mid j^{(a)}\right) = \frac{\omega_b \cdot p\!\left(j^{(a)} \mid \mu_b, \Sigma_b\right)}{\sum_{l=1}^{K} \omega_l \cdot p\!\left(j^{(a)} \mid \mu_l, \Sigma_l\right)} \quad (2.10)$$

The form of the Gaussian mixture is controlled by the parameters $\theta = \{\omega_b, \mu_b, \Sigma_b\}$, and our objective is to maximize the log-likelihood with respect to these parameters:

$$\ln p_\mu(D \mid \theta) = \sum_{a=1}^{G} \ln\left\{\sum_{b=1}^{K} \omega_b \cdot p\!\left(j^{(a)} \mid \mu_b, \Sigma_b\right)\right\} \quad (2.11)$$

Maximizing this likelihood directly is complicated because the internal summation over $K$ prevents the logarithm from acting directly on the Gaussian functions. It is not possible to obtain a closed-form


solution by setting the derivative of the log-likelihood function to zero. To solve this problem, a set of implicit variables is first introduced:

$$v_{ab} = \begin{cases} 1, & j^{(a)} \in c_b, \\ 0, & \text{otherwise.} \end{cases} \quad (2.12)$$

That is, $v_{ab}$ is the indicator variable of $j^{(a)}$: sample $j^{(a)}$ is assigned to $c_b$ if its probability is largest under the $b$-th Gaussian distribution. Our goal is thus to find the maximum likelihood solution of a model with implicit variables, for which an effective method is Expectation Maximization (EM) [11]. EM allows the flexible application of the corresponding theorems and formulas to calculate the relevant parameters; moreover, it is a powerful tool for dealing with uncertain data, in particular for working with data when samples are missing.

Directly applying the GMM algorithm to TTR classification incurs a large computational volume at each iteration. To improve detection speed, this section proposes a TTR classification method based on a k-means Gaussian mixture model with expectation maximization (KG-EM). Because the GMM must repeatedly solve for the iterative parameters, its algorithmic complexity is high, whereas K-means can find the cluster centres by simple distance calculations. The KG-EM algorithm therefore uses K-means to find the sample data centres, which serve as the initialized means for building the GMM and so speed up the algorithm; the EM algorithm then determines the model parameters, building a complete classification model over the iterations. At the same time, the probability of each TTR belonging to each type is calculated, and classification is achieved by comparing probability magnitudes, which ultimately helps us obtain a reconstructed thermal image that highlights the defect features. A sketch of the K-means initialization is given after the steps below.

A K-means-based adaptive clustering algorithm is used to classify the TTRs $R(:, g)$, $g = 1, 2, \ldots, G$, where $R(:, g)$ holds the values of the TTRs and $G$ is the total number of TTRs in $R$. The temperature change vectors corresponding to the sampled pixel points are clustered into $K$ classes with cluster centres $R(:, g_1), R(:, g_2), \ldots, R(:, g_K)$:

(1) Randomly select $K$ temperature change vectors from $R(:, g)$, denoted $R(:, g_1'), R(:, g_2'), \ldots, R(:, g_K')$, as the initial mean vectors, whose corresponding clusters are denoted $D_1', D_2', \ldots, D_K'$.
(2) Compute the distance between each $R(:, g)$, $g = 1, 2, \ldots, G$, and the cluster centres: $\Delta_{gl} = \|R(:, g) - R(:, g_l')\|_2$. Determine the cluster label $\lambda_g = \arg\min_{l \in \{1, 2, \ldots, K\}} \|R(:, g) - R(:, g_l')\|_2$ based on the nearest mean vector, and assign the sample to the corresponding cluster: $D_{\lambda_g} = D_{\lambda_g} \cup \{R(:, g)\}$.
(3) Update the mean vectors: $R^{new}(:, g_l^{new}) = \frac{1}{|D_l|} \sum_{R(:, g) \in D_l} R(:, g)$, $l = 1, 2, \ldots, K$, where $|D_l|$ is the number of samples in cluster $D_l$.
(4) Check whether $R^{new} = R^{old}$; if not, repeat (2) and (3). If yes, output $R^{new}(:, g_l^{new})$, $l = 1, 2, \ldots, K$.
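A minimal sketch of steps (1)–(4), substituting scikit-learn's KMeans for the hand-rolled loop; the data layout (one TTR per column of R) follows the text, while the synthetic data is an illustrative assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_init_means(R, K, seed=0):
    """Steps (1)-(4): cluster the G sampled TTRs (columns of R) with K-means
    and return the K cluster centres as initial GMM mean vectors."""
    X = R.T                                   # one row per TTR, shape (G, N_t)
    km = KMeans(n_clusters=K, n_init=10, random_state=seed).fit(X)
    return km.cluster_centers_                # shape (K, N_t): mu_1..mu_K

# Example with a synthetic dataset of 500 TTRs, 100 frames each
R = np.random.rand(100, 500)
mu_init = kmeans_init_means(R, K=3)
```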


The final $K$ classes of sample data are thus obtained. The resulting cluster centres $R(:, g_1^{new}), R(:, g_2^{new}), \ldots, R(:, g_K^{new})$ serve as the initial values of the mean vectors in the GMM-reinforced clustering: $\mu_l = R(:, g_l^{new})$, $l = 1, 2, \ldots, K$. Denote the $T$ TTR curves in $R(:, g_t)$ as $(j_1, j_2, \ldots, j_T)$, where $j_1 = R(:, g_1), j_2 = R(:, g_2), \ldots, j_T = R(:, g_T)$. Create a Gaussian mixture probability density function

$$p(j_t \mid \theta^{(v)}) = \sum_{k=1}^{K} a_{tk}^{v} f_k(j_t \mid \mu_k^{v}, \Sigma_k^{v}), \quad \sum_{k=1}^{K} a_{tk}^{v} = 1, \; t = 1, 2, \ldots, T \tag{2.13}$$

to approximate the complex distribution of TTRs in the sampled dataset, where

$$p_k(j_t \mid \mu_k^{v}, \Sigma_k^{v}) = \frac{1}{(2\pi)^{d/2} |\Sigma_k^{v}|^{1/2}} \exp\!\left( -\frac{(j_t - \mu_k^{v})^{T} (\Sigma_k^{v})^{-1} (j_t - \mu_k^{v})}{2} \right) \tag{2.14}$$

is the probability density function of the Gaussian distribution and $d$ is the number of frames sampled from the infrared thermal image sequences. The initial value of $\mu_k$ is the prototype vector, i.e. $\mu_k = q_k$, $k = 1, 2, \ldots, K$. The posterior probability that the $t$-th sample $R(:, g_t)$ is generated by the $k$-th Gaussian distribution is

$$\gamma_{tk}^{(v)} = p(k \mid j_t, \theta^{(v)}) = \frac{\left[ a_{tk}^{v} \, p_k(j_t \mid \mu_k^{v}, \Sigma_k^{v}) \right]^{\rho}}{\sum_{i=1}^{K} \left[ a_{ti} \, p_i(j_t \mid \mu_i, \Sigma_i) \right]^{\rho}}, \quad t = 1, \ldots, T, \; k = 1, \ldots, K. \tag{2.15}$$

By increasing $\rho$, the effect of the posterior probability is reduced during the early iterations of the original EM algorithm and then augmented as the algorithm proceeds. First let $\rho = \rho_{min}$, $0 < \rho_{min} \le 1$, $v = 0$. Using the method of maximizing the log-likelihood function, we obtain

$$Q(\theta, \theta^{old}) = \sum_{k=1}^{K} \sum_{t=1}^{T} p(k \mid j_t, \theta^{old}) \log a_{tk} + \sum_{k=1}^{K} \sum_{t=1}^{T} p(k \mid j_t, \theta^{old}) \log f_k(j_t \mid \mu_k, \Sigma_k). \tag{2.16}$$

Compute the new parameters $\theta^{v+1} = \arg\max_{\theta} Q(\theta, \theta^{v})$: the new mixing coefficients $a_k^{v+1} = \frac{1}{T} \sum_{t=1}^{T} \gamma_{tk}$, the new mean vectors $\mu_k^{v+1} = \frac{\sum_{t=1}^{T} \gamma_{tk} \, j_t}{\sum_{t=1}^{T} \gamma_{tk}}$, and the new covariance matrices $\Sigma_k^{v+1} = \frac{\sum_{t=1}^{T} \gamma_{tk} (j_t - \mu_k)(j_t - \mu_k)^{T}}{\sum_{t=1}^{T} \gamma_{tk}}$.


Increase $\rho$; if the stopping condition $\rho \ge 1$ is satisfied, the iteration stops and the GMM model parameters are output, otherwise update $\theta^{v} = \theta^{v+1}$ and continue the above steps. Using the final model parameters $\theta$, the $T$ TTRs in the sample set $R$ are divided into $K$ clusters; the cluster label of each sample $j_t$ is determined by $k = \arg\max_{k \in \{1, \ldots, K\}} \gamma_{tk}$. Classify $j_t$ into the corresponding cluster $D_k = D_k \cup \{j_t\}$, giving the cluster set $D = \{D_1, D_2, \ldots, D_K\}$, i.e. the TTRs in the sample set are divided into $K$ clusters. A minimal sketch of this annealed EM loop is given below.
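As an illustration only, the following Python sketch implements one version of the annealed EM iteration of Eqs. (2.13)–(2.16), reusing `kmeans_init` from the earlier sketch and using diagonal covariances for numerical simplicity; the function name `kgem_cluster` and the annealing schedule `rho *= rho_step` are assumptions, not the book's exact implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def kgem_cluster(R, K, rho_min=0.5, rho_step=1.1, n_em=50, eps=1e-6):
    """KG-EM: K-means initialization followed by EM iterations whose
    posteriors (Eq. 2.15) are sharpened by the exponent rho."""
    d, T = R.shape
    mu, _ = kmeans_init(R, K)                   # K-means centres -> initial means
    a = np.full(K, 1.0 / K)                     # mixing coefficients
    var = np.full((K, d), R.var(axis=1) + eps)  # diagonal covariances
    rho = rho_min
    for _ in range(n_em):
        # E-step, Eq. (2.15): annealed posterior responsibilities.
        log_p = np.stack([
            multivariate_normal.logpdf(R.T, mean=mu[:, k], cov=np.diag(var[k]))
            for k in range(K)], axis=1)         # shape (T, K)
        g = (np.log(a) + log_p) * rho
        g = np.exp(g - g.max(axis=1, keepdims=True))
        gamma = g / g.sum(axis=1, keepdims=True)
        # M-step, Eq. (2.16): update mixing weights, means, covariances.
        Nk = gamma.sum(axis=0) + eps
        a = Nk / T
        mu = (R @ gamma) / Nk                   # (d, K)
        for k in range(K):
            diff = R - mu[:, [k]]
            var[k] = (gamma[:, k] * diff**2).sum(axis=1) / Nk[k] + eps
        if rho >= 1.0:                          # stopping condition on rho
            break
        rho = min(1.0, rho * rho_step)
    return np.argmax(gamma, axis=1), mu
```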

2.3.2 DBSCAN-Based Clustering of Transient Thermal Responses

The density-based spatial clustering of applications with noise (DBSCAN) method [12, 13] is a density-based spatial clustering algorithm and is, in essence, an unsupervised clustering algorithm. The algorithm looks for regions in the sample population that reach a certain density and groups them into clusters. Furthermore, the algorithm is insensitive to noisy information, so clustering can be performed in noisy spatial databases. Unlike classical clustering algorithms such as K-means, DBSCAN does not require the centre and shape of the clusters to be fixed in advance; instead it calculates the density of the samples, and clusters are defined as maximal sets of density-connected points. DBSCAN can therefore cluster dense data sets of arbitrary shape and can identify noise points while clustering. The clustering results are unbiased because the cluster centres are not pre-defined.

The core parameters of the DBSCAN algorithm are the density radius $\phi$ and the minimum number of points $P$ required within that radius. The data points are divided into three categories:

(a) Core point: a sample point $j_t$ is a core point if its $\phi$-neighbourhood contains at least $P$ samples, i.e. $N_\phi(j_t) \ge P$;
(b) Border point: a sample point $j_t$ is a border point if its $\phi$-neighbourhood contains fewer than $P$ samples but $j_t$ lies in the neighbourhood of some core point;
(c) Noise: any sample point that is neither a core point nor a border point.

The specific steps of the DBSCAN algorithm are as follows (a minimal sketch follows the list):

(1) Pick an arbitrary unvisited point in the sample space as the starting point. According to the parameter $\phi$, find all other sample points whose distance from this point is at most $\phi$, and denote their number by $d$.
(2) Determine the type of this point: (a) if $d < P$, mark it as a noise point and return to (1) to process another point; (b) if $d \ge P$, treat it as a core point and assign it a new cluster label.
(3) Visit all other sample points within distance $\phi$ of the core point: (a) if these neighbour points have not been assigned a cluster label, assign them the new cluster label of the core point; (b) if these neighbour points are themselves core samples of the same cluster, visit their neighbour points in turn, so that the cluster carrying the label gradually expands until there are no more core points within distance $\phi$ of the cluster.
(4) Select the remaining points in the sample space that have not been traversed and repeat the above steps.
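For illustration, a minimal way to run these steps on the de-redundant TTR set is scikit-learn's `DBSCAN` implementation; the parameter values shown (`eps`, `min_samples`) and the input file name are placeholders, not values from the book's experiments.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# R holds one TTR curve per column, shape (frames, n_ttrs).
R = np.load("ttr_dataset.npy")          # hypothetical input file

# eps plays the role of the density radius phi, min_samples of P.
clustering = DBSCAN(eps=0.5, min_samples=5).fit(R.T)

labels = clustering.labels_             # cluster index per TTR, -1 = noise
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{n_clusters} clusters, {np.sum(labels == -1)} noise points")
```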

2.4 Representative Transient Thermal Response Extraction and Image Reconstruction

Feature extraction of the representative transient thermal response of spacecraft impact damage is the process of fully exploiting the information in the infrared thermal image sequence. Specific feature information is extracted that fully reflects each type of impact damage, and this information is then used to obtain a reconstructed thermal image of the impact damage features and the material background features. The transient thermal responses contain a wealth of thermal information about the object under test. After clustering separates the transient thermal response sets of the different damage types, the representative transient thermal response of each type must be extracted to reconstruct the thermal image and visualise the defective features of the specimen. The reconstructed thermal images corresponding to the various defect types are acquired for better defect quantification, identification and subsequent reconstructed thermal image processing. The extraction of typical characteristic transient thermal responses is therefore crucial [1].

2.4.1 Representative TTR Extraction Based on Local Average Performance

A cluster is considered to be a local set possessing thermal responses of similar morphology; it represents the transient thermal responses corresponding to a series of pixel locations in a local region of a particular damage class. Feature extraction based on local average performance averages the thermal radiation values over all moments at all pixel locations in that local region:

$${}_{i'}RT(i) = \frac{1}{N(i')} \sum_{j=1}^{N(i')} {}_{i'}J^{j}(i) \tag{2.17}$$

where ${}_{i'}RT(i)$ is the $i$-th dimension of the typical characteristic thermal response of the $i'$-th category, $N(i')$ is the number of TTRs included in the $i'$-th category cluster, and ${}_{i'}J^{j}(i)$ is the $i$-th dimension of the $j$-th transient thermal response in the $i'$-th category cluster.

The representative TTRs ${}_{i'}RT$ $(i' = 1, \ldots, L)$ extracted for the various damage types are assembled into a $t \times L$-dimensional feature matrix $Y$, where $t$ is the number of frames of the original three-dimensional matrix $S$ of infrared thermal image sequences. $S$ is then converted to a two-dimensional matrix $O$ by vectorisation, and a linear transformation is performed on $O$ and $Y$, i.e. $R = \hat{Y} * O$, where $\hat{Y}$ is the pseudo-inverse matrix of $Y$. This yields a reconstructed thermal image $R$ for each type of defect; a minimal sketch of this reconstruction is given below.

The TTR extracted on the basis of local average performance is the average response of all TTRs in the cluster at each moment of the thermal excitation process. The advantage of this strategy is that it is simple and effective and reflects the general damage pattern of the defect class. Its disadvantages are that it is susceptible to noise and that the extracted feature TTRs do not correspond to actual spatial coordinates, so they lack physical interpretability.
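The following sketch shows this pseudo-inverse reconstruction under assumed shapes: `S` is the thermal sequence with `t` frames of `h × w` pixels, and `Y` stacks one representative TTR per column. The variable names are illustrative, not from the original text.

```python
import numpy as np

def reconstruct_images(S, Y):
    """R = pinv(Y) @ O : project the vectorised sequence O onto the
    representative TTRs in Y to get one reconstructed image per TTR."""
    t, h, w = S.shape
    O = S.reshape(t, h * w)            # vectorise: each column is one pixel's TTR
    R = np.linalg.pinv(Y) @ O          # shape (L, h*w)
    return R.reshape(-1, h, w)         # one reconstructed thermal image per class

# Example: t=250 frames of 512x640 pixels, L=2 representative TTRs.
S = np.random.rand(250, 512, 640)
Y = np.random.rand(250, 2)
images = reconstruct_images(S, Y)      # shape (2, 512, 640)
```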

2.4.2 Representative TTR Extraction Based on Distances

2.4.2.1 Representation Priority

Averaging, as discussed above, is valid. However, we expect to extract a transient thermal response at a specific coordinate position that represents this damage type more accurately, because a response measured at an actual location better reflects the specific pattern of that damage type. The problem for representativeness-based feature extraction is how to make the extracted transient thermal response most representative of its class. Here, the similarity between the extracted representative TTR and the transient thermal responses of the same class is used as the objective function, so that the extracted response best characterises the thermal radiation of the defect type to which it belongs. The extraction is formulated in terms of Euclidean distances as

$${}_{i'}RT = \left\{ {}_{i'}J(i) \;\middle|\; \min_{i} \left\| {}_{i'}J(i) - {}_{i'}Center \right\| \right\}, \quad i = 1, \ldots, {}_{i'}N. \tag{2.18}$$

Then ${}_{i'}RT$ is the typical TTR whose thermal response has the greatest similarity to the thermal responses of its own category $i'$. ${}_{i'}J(i)$ is the $i$-th transient thermal response, ${}_{i'}Center$ $(i' = 1, \ldots, L)$ is the transient thermal response of the cluster centre of the class, $L$ is the number of clusters, and ${}_{i'}N$ is the number of thermal responses of the temperature points of the $i'$-th class.

The TTR extraction strategy of maximising intra-class representativeness is a conservative feature extraction algorithm. However, the relationship with the heterogeneous TTRs is not considered in the objective function, so the extracted transient thermal response curves may lack sufficient variability and contrast between the various damage types.

2.4.2.2 Differentiation Priority

Representative TTR extraction for each type can also be abstracted as an optimization problem based on maximising inter-class variability, i.e. a feature extraction algorithm built on the idea of maximising the contrast between the various types of defect damage. Its optimisation objective in the search for a representative TTR is to maximise the inter-class distance between different damage types, thus enhancing the difference and contrast between the thermal responses of the damage types. Representative TTRs are extracted by

$${}_{i'}RT = \left\{ {}_{i'}J(i) \;\middle|\; \max_{i} \sum_{\substack{j'=1 \\ j' \ne i'}}^{L} \left\| {}_{i'}J(i) - {}_{j'}Center \right\| \right\}, \quad i = 1, \ldots, {}_{i'}N, \tag{2.19}$$

wherein ${}_{i'}RT$ is the representative TTR whose thermal response differs most from the thermal responses of the other categories, ${}_{i'}J(i)$ is the $i$-th TTR, ${}_{j'}Center$ $(j' = 1, \ldots, L, \; j' \ne i')$ is the cluster-centre transient thermal response of the $j'$-th class, $L$ is the number of clusters, and ${}_{i'}N$ is the number of thermal responses of the temperature points of the $i'$-th class. Considering the differences between classes when selecting the representative TTRs is a feasible approach: the representative TTRs extracted in this way possess the most obvious inter-class differences, and the TTR curves are more distinguishable from each other, so the differences in thermal response between damage types can be discerned visually. However, the search based on the maximum inter-class difference tends to converge on isolated or noisy points at the edge of the clusters. A sketch comparing the two selection criteria is given below.
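The two criteria of Eqs. (2.18) and (2.19) can be written compactly; a minimal sketch assuming `clusters` is a list of arrays (one TTR per column) and `centers` their cluster-centre curves. Names are illustrative.

```python
import numpy as np

def representative_ttr(clusters, centers, i, mode="similarity"):
    """Pick the representative TTR of cluster i.
    mode="similarity":  Eq. (2.18), closest to its own centre.
    mode="difference":  Eq. (2.19), farthest from all other centres."""
    J = clusters[i]                                  # shape (frames, N_i)
    if mode == "similarity":
        d = np.linalg.norm(J - centers[i][:, None], axis=0)
        return J[:, np.argmin(d)]
    others = [c for k, c in enumerate(centers) if k != i]
    d = sum(np.linalg.norm(J - c[:, None], axis=0) for c in others)
    return J[:, np.argmax(d)]
```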

2.4.3 Representative TTR Extraction Based on Distance Weighting

Representative TTRs are crucial to the processing of infrared thermal image sequences, yet if, when selecting them, the differences between classes and the similarities within classes are not taken into account simultaneously, the selected representative TTRs characterise their classes poorly. To this end, the concept of multiple distance weighting is proposed. Its main idea is to weight each distance function $f_1({}_{i'}J), \ldots, f_L({}_{i'}J)$ according to the importance of that distance function in the overall design: a set of weighting factors $w_1, \ldots, w_L$ is given, and the sub-distance functions and weighting factors are combined as a linear sum.

Assume that, in the feature extraction algorithm for spacecraft damage, the clustering algorithm divides the matrix $R(:, g)$, from which the representative TTRs are to be selected, into $L$ classes, so that a multi-distance weighted optimization problem must be solved when selecting the representative TTR of each class. The representative TTR selection problem for the thermal responses of the temperature points in class $i'$ $(i' = 1, \ldots, L)$ based on multi-distance weighting is defined as

$$\min F({}_{i'}J) = \sum_{i=1}^{L} w_i f_i({}_{i'}J), \tag{2.20}$$

where $f_1({}_{i'}J)$ is the similarity measure between TTRs within class $i'$ and $f_i({}_{i'}J)$, $i = 2, \ldots, L$ are the inter-class variability measures between the class $i'$ and class $j'$ TTRs, with $j' \ne i'$. When measuring similarity and dissimilarity, since the TTR classes are all grouped around their respective cluster centres, the Euclidean distance of the TTRs from the cluster centres is used to calculate the metric values:

$$f_1({}_{i'}J) = \min \sqrt{\sum_{h=1}^{Z} \left( {}_{i'}J_h - {}_{i'}Center_h \right)^2}, \qquad f_i({}_{i'}J) \Big|_{i=2,\ldots,L} = \min \left( -\sqrt{\sum_{h=1}^{Z} \left( {}_{i'}J_h - {}_{j'}Center_h \right)^2} \right), \tag{2.21}$$

where ${}_{i'}Center_h$ is the $h$-th dimension of the cluster centre of the $i'$-th class TTRs and $Z$ is the total number of frames of the infrared thermal image sequence.

In numerous weight allocation problems of practical significance [14–17], the deviation ranking method is used to determine the weights of the objective functions, providing an effective weight allocation scheme. Therefore, in the weight allocation problem of Eq. 2.20, the deviation ranking method is used to assign the weights $w_1, \ldots, w_L$. The deviation, also called the difference, is the gap between the true value and the estimated value. In our multi-distance weighting problem, when a representative TTR is chosen for a certain class, the deviation can be understood as the gap between the value of each objective function and its optimal value when different transient responses are substituted. Suppose the optimal solution of the $i$-th $(i = 1, \ldots, L)$ objective function is ${}_{i'}J^{i*}$ and the optimal solution of the $j$-th $(j = 1, \ldots, L)$ objective function is ${}_{i'}J^{j*}$; then the deviation of the $i$-th objective function evaluated at the solution ${}_{i'}J^{j*}$ is

$$\delta_i^{j} = \left| f_i^{j} - f_i^{i} \right|, \quad i, j = 1, 2, \ldots, L, \tag{2.22}$$

where $L$ is the number of objectives, $f_i^{j}$ denotes the function value $f_i({}_{i'}J^{j*})$ and $f_i^{i}$ denotes the function value $f_i({}_{i'}J^{i*})$. The steps of the deviation-based weight determination are then as follows:

Step 1: With $L$ objective functions and the class-$i'$ typical TTR selection, find the optimal solution of each objective, $\min f_j({}_{i'}J)$, $j = 1, 2, \ldots, L$, and denote the corresponding decision variables by ${}_{i'}J^{j*}$.
Step 2: Substitute the decision variable ${}_{i'}J^{j*}$ of each optimal solution into the other objective functions $f_i$ to obtain $f_i^{j}$.
Step 3: Calculate the deviation $\delta_i^{j}$ of each objective function according to Eq. 2.22.
Step 4: Calculate the mean deviation of the $i$-th objective:

$$u_i = \frac{1}{L-1} \sum_{j=1}^{L} \delta_i^{j}, \quad i = 1, 2, \ldots, L. \tag{2.23}$$

Since $\delta_i^{i} = 0$, the mean is taken over the remaining $L-1$ deviations.

Step 5: Calculate the weighting factors:

$$w_i = \frac{u_i}{\sum_{j=1}^{L} u_j}, \quad i = 1, 2, \ldots, L. \tag{2.24}$$

As the deviations are all non-negative, the weight coefficients calculated by the above process are all positive and satisfy $\sum_{i=1}^{L} w_i = 1$. After the $L$ weight coefficients have been calculated, the relationship between the mean deviations and the weight coefficients must be considered when assigning weights to the objective functions. The calculated mean deviations $u_1, \ldots, u_L$ and the weight coefficients $w_1, \ldots, w_L$ are sorted separately; the larger weight coefficients are assigned to the sub-objective functions with smaller mean deviation and, conversely, the smaller weight coefficients are assigned to the sub-objective functions with larger mean deviation.

The deviation-based approach to weight assignment is driven by the need to minimize the multi-distance weighting problem through the deviations between $f_i^{j}$ and $f_i^{i}$. A smaller deviation $\delta_i^{j}$ indicates that the optimal solutions of the other functions deviate less from the optimal solution of the current objective function. The mean deviation $u_i$ therefore integrally reflects how far the optimal solutions ${}_{i'}J^{j*}$ $(j = 1, \ldots, L, \; j \ne i)$ of the other objective functions lie from the optimal solution ${}_{i'}J^{i*}$ of the current objective function; assigning larger weights to smaller mean deviations is more helpful for minimising the multi-objective optimisation problem. After the weight of each sub-objective function has been determined by the deviation method, the total optimized objective function based on multi-distance deviation weighting is obtained:

$$\min F({}_{i'}J) = \sum_{i=1}^{L} w_i f_i({}_{i'}J). \tag{2.25}$$

Afterwards, the representative transient thermal responses corresponding to each type of defect damage are extracted on the basis of this objective function; a sketch of the deviation-based weighting is given below.
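A compact illustration of Steps 1–5 (Eqs. 2.22–2.24): given a matrix `F` with `F[i, j]` equal to the value of objective $i$ at the optimum of objective $j$, the weights follow in a few lines. This is a sketch of the arithmetic only; the matrix name and the inverse-ranking step are assumptions consistent with the text.

```python
import numpy as np

def deviation_weights(F):
    """F[i, j] = value of objective i at the optimum of objective j.
    Steps 1-5: deviations (Eq. 2.22), mean deviations (Eq. 2.23),
    normalised weights (Eq. 2.24), then inverse-rank assignment."""
    L = F.shape[0]
    delta = np.abs(F - np.diag(F)[:, None])   # Eq. (2.22): |f_i^j - f_i^i|
    u = delta.sum(axis=1) / (L - 1)           # Eq. (2.23): mean deviation
    w = u / u.sum()                           # Eq. (2.24): normalised weights
    # Re-assign so the smallest mean deviation gets the largest weight.
    out = np.empty(L)
    out[np.argsort(u)] = np.sort(w)[::-1]
    return out
```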

2.5 Experimental Results and Analysis

The experimental materials used in this book were provided by the China Aerodynamics Research and Development Center (CARDC). The experimental material in this section is a double-layered Whipple wall after a hypervelocity impact. A 250-frame infrared thermal image sequence with a frame resolution of 512 × 640 is collected by the detection experimental platform. An observation matrix is then created from the collected sequence, with each row of the matrix recording the TTR curve data of one pixel point, and the redundancy removal algorithm based on variable-step search is applied to this TTR dataset as follows (a sketch of the correlation step appears after this list):

(1) First find the pixel point with the global temperature maximum, then calculate the PCC between this pixel point and the other pixel points by column and by row, and compare with the column step threshold $SS_C$ and the row step threshold $SS_R$ to adjust the column and row steps.
(2) Set the temperature threshold and partition the observation matrix into data blocks. Extract the temperature maximum point of each data block, calculate the PCC between each block's maximum point and the other pixel points by row (in the same way as the column step calculation), and adjust the row and column steps of each data block based on the row step threshold $Q_{RL}$ and the column step threshold $Q_{CL}$.
(3) Finally, based on the row and column steps, calculate the PCC between the global-temperature-maximum pixel point and the TTR data of the other pixel points; the data below the set threshold are retained, and 667 TTR curves that preserve the integrity of the defective features are extracted from the tested material.

The specific parameters involved in the experiment, and the number of TTR curves before and after de-redundancy, are shown in Table 2.1.
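The correlation used throughout these steps is the Pearson correlation coefficient (PCC) between two TTR curves; a minimal sketch, assuming `x` and `y` are temperature-versus-frame vectors and `curves` walks away from the peak pixel (names illustrative):

```python
import numpy as np

def pcc(x, y):
    """Pearson correlation coefficient between two TTR curves."""
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

def step_length(ref_curve, curves, threshold=0.99):
    """Walk away from the peak pixel until PCC drops below the threshold."""
    for k, c in enumerate(curves):
        if pcc(ref_curve, c) < threshold:
            return k                     # first pixel below the threshold
    return len(curves)
```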


Table 2.1 De-redundant parameters

Number of original TTRs: 327680
Block division threshold ($PCC$): row $SS_R$ = 0.99, column $SS_C$ = 0.99
Number of data blocks: 127, 128, 129, 130, 131, 132, 133, · · ·
Step length for each column: 77, 1, 73, 38, 1, 41, 34, · · ·
Step length for each row: 1, 1, 1, 1, 1, 1, 7, · · ·
Number of TTRs after redundancy removal: 667

Table 2.2 Correlation of row pixels

Row pixel index: 209, 210, · · ·, 366, 367, 368, · · ·, 487, 488
$PCC$: 0.9897, 0.9904, · · ·, 0.9998, 1, 0.9998, · · ·, 0.9903, 0.9897

Table 2.3 Correlation of column pixels

Column pixel index: 209, 210, · · ·, 217, 218, 219, · · ·, 222, 223
$PCC$: 0.9897, 0.9905, · · ·, 0.9993, 1, 0.9994, · · ·, 0.9914, 0.9867

The experimental implementation is as follows:

(1) First obtain the infrared thermal image sequence and find the global temperature peak point, recording its location; in this experiment it is located at row 218, column 367, frame 136. This TTR curve is stored in $R(:, 1)$.
(2) Determine the correlation threshold $SS_C = 0.99$ in the column direction. Taking the peak point as reference, compute the PCC between the TTR curves of adjacent pixels and the peak curve within the row containing this point, moving left and right respectively, until the first pixel position whose correlation falls below the threshold, as shown in Table 2.2.
(3) Take the maximum of the two column steps $S_{C1} = 158$ and $S_{C2} = 121$, noted as step $L_C = 158$.
(4) Determine the correlation threshold $SS_R = 0.99$ in the row direction. Using the peak point as reference, search up and down within the column containing this point for the PCC between the TTR curves of adjacent pixels and the peak curve, until the first pixel position below the threshold, as shown in Table 2.3.
(5) Take the maximum of the two row steps $S_{R1} = 9$ and $S_{R2} = 5$, noted as step $L_R = 9$.


Table 2.4 Mean vectors

$U_1$: 26.50226, 26.50228, 26.50504, 26.51331, 26.57394, · · ·, 31.31492, 31.30495, 31.2933, 31.27265, 31.24615, 31.22363
$U_2$: 26.6738, 26.67671, 26.67521, 26.6798, 26.69474, · · ·, 28.86707, 28.86637, 28.8572, 28.85245, 28.84124, 28.83403

(6) The blocking of the frame is performed with the obtained step sizes; in the first block, for example, a block of 100 × 20 is obtained from the row and column steps.
(7) Continue to search for the temperature peak within each block and set the row and column step thresholds; use the variable-step operation to traverse the TTR curves within the block, calculate the correlation between the curve of each pixel in the block and the block's temperature maximum, and retain the characteristic TTR curves.
(8) Calculate the PCC of the TTR curves of each block with the block's temperature peak, filter according to the threshold value, and store the filtered TTR curves in $R(:, g)$.

A total of 667 TTR curves were saved, filtering the valid information out of the original dataset and discarding 327013 redundant curves. After pre-processing the IR data, the remaining valid TTR dataset is clustered so that the different types of defective TTRs can be separated from each other. The de-redundant TTR dataset was processed using the GMM-based clustering method, with GMM weight values $w_1 = 0.1544$ and $w_2 = 0.8456$; the extracted mean vectors are shown in Table 2.4. The clustering result is shown in Fig. 2.8: the TTR set was successfully divided into two classes, each possessing high intra-class similarity, clear inter-class variability and a compact structure within the same category.

Defect image reconstruction is performed using the TTRs extracted for each type of damage. A $t \times 2$-dimensional feature matrix $Y$ is formed, and a linear transformation is performed based on $Y$, i.e. $R = \hat{Y} * O$, which yields a reconstructed thermal image $R$ for each type of defect, as shown in Figs. 2.10 and 2.11.

Afterwards, a representative TTR was extracted from each category for specific analysis. The representative TTRs of the two categories were extracted using the distance-based weighting method, weighting the two distances so as to minimise the distance to similar damage and maximise the distance to dissimilar damage; the results are shown in Fig. 2.9. The maximum temperature of representative $TTR_1$ is $T_{max} = 32.8787$ and that of representative $TTR_2$ is $T_{max} = 29.5605$; the peak temperature difference between the two TTR types is $\Delta T = 3.3182$. The temperature rise rate during the warming period is 0.0356 for $TTR_1$ and 0.0159 for $TTR_2$.


Fig. 2.8 Clustering result

Fig. 2.9 The extraction result of representative TTR (peaks at frame 180 for $TTR_1$ and frame 182 for $TTR_2$)


Fig. 2.10 Reconstructed thermal image of two types of defects

Fig. 2.11 Reconstructed thermal image of two types of defects


As can be seen in the figures, the two types of damage from the HVI have been reconstructed with high contrast and high definition. Figure 2.10 shows details of the impact perforations and impact craters on the impact surface, while Fig. 2.11 shows damage features such as peeling of the protective material and back-side bulges on the subsurface.

2.6 Summary

This chapter has explored the feature extraction and damage reconstruction methods applied after acquiring the raw infrared data. The infrared thermal image sequence is first refined with the variable-step search method, extracting the valid information and eliminating the redundant information. The pre-processed TTR data are then clustered to separate the sets of response curves of each damage type. Finally, representative TTRs reflecting the overall response characteristics of each damage type are extracted from the TTR dataset using the strategies of Sect. 2.4, and image reconstruction is performed based on the representative TTR of each damage type to obtain high-quality reconstructed thermal images. This is the prerequisite for the subsequent image processing, defect segmentation and damage fusion.

References

1. Yin, C., Huang, X., Cao, J., Dadras, S., Shi, A.: Infrared feature extraction and prediction method based on dynamic multi-objective optimization for space debris impact damages inspection. Journal of the Franklin Institute 358(18), 10165–10192 (2021)
2. Zhang, H., Huang, X., Yin, C., Cheng, Y.H., Shi, A., Dadras, S., Luo, J.: Design of hypervelocity-impact damage evaluation technique based on Bayesian classifier of transient temperature attributes. IEEE Access 8, 18703–18715 (2020)
3. Yin, C., Xue, T., Huang, X., Cheng, Y.H., Dadras, S., Dadras, S.: Research on damages evaluation method with multi-objective feature extraction optimization scheme for M/OD impact risk assessment. IEEE Access 7, 98530–98545 (2019)
4. Huang, X., Yin, C., Dadras, S., Cheng, Y., Bai, L.: Adaptive rapid defect identification in ECPT based on K-means and automatic segmentation algorithm. Journal of Ambient Intelligence and Humanized Computing, 1–18 (2018)
5. Edelmann, D., Móri, T.F., Székely, G.J.: On relationships between the Pearson and the distance correlation coefficients. Statistics & Probability Letters 169, 108960 (2021)
6. Tan, X., Yin, C., Huang, X., Dadras, S.: Design of defect diagnosis algorithm with multi-objective feature extraction optimization to assess the M/OD impact damages. In: 2021 American Control Conference (ACC), pp. 1712–1717. IEEE (2021)
7. Huang, X., Shi, A., Luo, Q., Luo, J.: Variational Bayesian multi-sparse component extraction for damage reconstruction of space debris hypervelocity impact. Frontiers of Information Technology & Electronic Engineering 23(4), 530–541 (2022)
8. Zhu, P., Yin, C., Cheng, Y., Huang, X., Cao, J., Vong, C.M., Wong, P.K.: An improved feature extraction algorithm for automatic defect identification based on eddy current pulsed thermography. Mechanical Systems and Signal Processing 113, 5–21 (2018)
9. Yin, C., Cheng, Y., Xue, T., Huang, X., Zhang, H., Chen, K., Shi, A.: Method for separating out a defect image from a thermogram sequence based on weighted naive Bayesian classifier and dynamic multi-objective optimization. U.S. Patent 11,036,978, June 15, 2021
10. Choi, S.W., Park, J.H., Lee, I.B.: Process monitoring using a Gaussian mixture model via principal component analysis and discriminant analysis. Computers & Chemical Engineering 28(8), 1377–1387 (2004)
11. Yang, X., Huang, X., Yin, C., Cheng, Y.H., Dadras, S.: GMM-based automatic defect recognition algorithm for pressure vessels defect detection through ECPT. IFAC-PapersOnLine 53(2), 820–825 (2020)
12. Li, S.: An improved DBSCAN algorithm based on the neighbor similarity and fast nearest neighbor query. IEEE Access 8, 47468–47476 (2020)
13. Luchi, D., Rodrigues, A.L., Varejão, F.M.: Sampling approaches for applying DBSCAN to large datasets. Pattern Recognition Letters 117, 90–96 (2019)
14. Blot, A., Kessaci, M.E., Jourdan, L., Hoos, H.H.: Automatic configuration of multi-objective local search algorithms for permutation problems. Evolutionary Computation 27(1), 147–171 (2019)
15. Wei, W., Fan, W., Li, Z.: Multi-objective optimization and evaluation method of modular product configuration design scheme. The International Journal of Advanced Manufacturing Technology 75, 1527–1536 (2014)
16. Khodadadi, N., Abualigah, L., El-Kenawy, E.S.M., Snasel, V., Mirjalili, S.: An archive-based multi-objective arithmetic optimization algorithm for solving industrial engineering problems. IEEE Access 10, 106673–106698 (2022)
17. Jozefowiez, N., Semet, F., Talbi, E.G.: Multi-objective vehicle routing problems. European Journal of Operational Research 189(2), 293–309 (2008)

Chapter 3

Reconstructed Thermal Image Fusion Based on Multi-objective Guided Filtering

Image fusion techniques can fuse reconstructed thermal images that characterize different damage morphologies, effectively improving the capability of a single image to characterize all defects. This chapter considers the fusion needs of different defect types during image fusion by modelling multiple fusion objective functions jointly, and proposes a reconstructed thermal image fusion algorithm based on two-layer multi-objective guided filtering. The algorithm integrates the different types of reconstructed thermal images by adopting multiple guided-filtering cost functions, carrying out multi-objective optimization, and passing the obtained optimal weighting sets to the full pixel layer for weighted fusion. Experiments show that this fusion algorithm, which considers the fusion needs of multiple defect types simultaneously, outperforms common single-objective fusion methods.

3.1 Introduction

Based on TTRs, reconstructed thermal images provide high-quality feature reconstruction of specific damage types, overcoming the low contrast, low signal-to-noise ratio and blurred visual effect of the original infrared thermal images and improving the detectability of specific defects. However, because each reconstructed image highlights only one damage type, the information remains dispersed across several images, which can reduce inspection efficiency. Image fusion can integrate and enhance multi-channel image information. Especially for the complex damage caused by impacts, fusing the various defect images into one complete defect image is of great significance: scientifically fused detection images improve both the accuracy and the completeness of detection.


3.2 Complex Damage Fusion Requirement

Image fusion refers to the process of combining multiple images into a single image according to certain fusion rules. The fused image combines information from multiple sources and provides a clearer and more accurate description of the target. Research on multi-source image fusion can be traced back to 1979, when Daily et al. [1] applied composite images of radar images and Landsat-MSS images to geological interpretation. By the late 1980s, image fusion techniques were gradually applied to a wider range of images (e.g. visible light images, infrared images). Nowadays, multi-source image fusion techniques are of great value in medicine, remote sensing, digital photography, social security and other fields, and it is important to research and develop effective image fusion methods [2–6].

In the infrared thermographic NDT of hypervelocity impact damage, the reconstructed thermal image contains a large amount of information, among which the defect information, including contour, position and size, is extremely important [7–9]. The high-frequency detail information of a defect has a significant impact on the damage detection effect, and during the processing of the reconstructed thermal images the edge contour information of the defect is one of the most important features for characterizing it. Image fusion therefore not only needs to meet the basic requirements but must also preserve the refined defect edges. The guided filter is an edge-preserving filter that can maintain edge clarity while smoothing the image; it effectively preserves the edge detail information and thus meets the requirement of preserving and fusing the defect feature information in the reconstructed thermal images [10].

However, the guided filter considers only a single target, the gradient change of the thermal radiation values, and cannot deal with more complex damage. As shown in Fig. 3.1, an impact may produce complex damage comprising multiple damage types or quantities: not only perforation defects at the impact centre and large-sized impact pit defects, but also many subtle damages surrounding the impact pit. Subsurface damage has a slower radiation gradient and is easily submerged in noise under ordinary single-target fusion. Owing to the universality of thermal radiation, noise is also interference for evaluating damage and affects the overall accuracy of damage detection. It is therefore necessary to consider the fusion requirements of multiple damage types to improve the fusion effect.

To solve the above problem, this chapter combines a multi-objective evolutionary optimisation algorithm with reconstructed thermal image fusion: three fusion objective functions are considered and optimised simultaneously, further enhancing the fusion performance and the information integration capability of the fused thermal images. The multi-objective evolutionary algorithm based on decomposition (MOEA/D) [14] can optimise multiple, potentially conflicting objective functions simultaneously in vector form, enabling them to reach Pareto optimality together. MOEA/D decomposes the multi-objective problem into a series of sub-problems and optimises them simultaneously.

Fig. 3.1 Shortcomings of single-target fusion: excessive image noise, lost defects and over-smoothed thermal images versus the multiple needs of large-size defect fusion, small-defect fusion and fusion-image noise minimisation

The sub-problems are related to each other by weight vectors and co-evolve in the form of neighbourhoods.

3.3 Multiple Fusion Objectives Jointly Moulding

The guided filter is an edge-preserving filter based on a local linear model whose computation time is independent of the filter size [10, 11]. The guided filtering algorithm improves on the bilateral filtering algorithm: it assumes that the output image is obtained by transforming the input image and the guidance image with different linear relationships. The output image has a structural organization similar to the input image while preserving the edge information and texture features of the guidance image. The output image and the guidance image are linearly correlated in their gradient changes, so the guidance image steers the gradient direction of the output image, which produces the edge-preserving property. Moreover, since the computational complexity is independent of the filter kernel size, the filtered image achieves good edge quality at low time complexity.

Note that the filter output $C$ of the guided filter is a linear transformation of the guidance image $Y$ in a local window $W_x$ centred on pixel $x$:

$$C_i = k_x Y_i + d_x, \quad \forall i \in W_x, \tag{3.1}$$


where $W_x$ is a rectangular window of size $(2r+1) \times (2r+1)$, and $k_x$ and $d_x$ are linear transformation coefficients. $Y_i$ and $C_i$ are the values of pixel $i$ in the guidance image and the output image respectively. $k_x$ and $d_x$ are constants within the rectangular window $W_x$; they are obtained by minimising the following measure of the discrepancy between the input image $Z$ and the output image $C$:

$$G(k_x, d_x) = \sum_{i \in W_x} \left( (k_x Y_i + d_x - Z_i)^2 + \delta k_x^2 \right), \tag{3.2}$$

where $\delta$ is a preset regularization parameter. Using a linear regression model, the following expressions are obtained for $k_x$ and $d_x$:

$$k_x = \frac{\frac{1}{|W|} \sum_{i \in W_x} Y_i Z_i - \mu_x \bar{Z}_x}{\sigma_x^2 + \delta}, \qquad d_x = \bar{Z}_x - k_x \mu_x, \tag{3.3}$$

therein, $\sigma_x^2$ is the variance of the guidance image $Y$ in the window $W_x$, $\mu_x$ is the mean of the guidance image $Y$ in the window $W_x$, $|W|$ is the number of pixels in the window $W_x$, and $\bar{Z}_x$ is the mean value of the input reconstructed thermal image $Z$ in the window $W_x$. Since a pixel can be contained in more than one window, the coefficients $k_x$ and $d_x$ are averaged:

$$\bar{C}_i = \bar{k}_i Y_i + \bar{d}_i, \tag{3.4}$$

where $\bar{k}_i = \frac{1}{|W|} \sum_{x \in W_i} k_x$ and $\bar{d}_i = \frac{1}{|W|} \sum_{x \in W_i} d_x$. The guided filter cost function $G(k_x, d_x)$


considers only the minimisation of the difference between the input image and the output image. Such a single cost function cannot meet the fusion needs of complex damage. We therefore build multiple cost functions in the following subsections, each targeted at the thermal properties of a different damage type. A sketch of the basic guided filter of Eqs. (3.1)–(3.4) is given below.
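As a reference point for the variants that follow, here is a minimal sketch of the plain guided filter of Eqs. (3.1)–(3.4), using box filters for the window means; the window radius `r` and regularizer `delta` values are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(Y, Z, r=4, delta=1e-3):
    """Plain guided filter: Y is the guidance image, Z the input image.
    Implements Eqs. (3.2)-(3.4) with box-filter window means."""
    size = 2 * r + 1
    mean = lambda img: uniform_filter(img, size=size)
    mu = mean(Y)                       # window mean of guidance image
    z_bar = mean(Z)                    # window mean of input image
    var = mean(Y * Y) - mu * mu        # window variance of guidance image
    cov = mean(Y * Z) - mu * z_bar     # window covariance of Y and Z
    k = cov / (var + delta)            # Eq. (3.3)
    d = z_bar - k * mu
    # Eq. (3.4): average the coefficients over all windows containing a pixel.
    return mean(k) * Y + mean(d)
```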

3.3.1 Thermal Radiation Variance-Aware Objective Function

Significant defects with distinct boundaries tend to produce large thermal radiation variations and variance, and these advantages specific to this defect type should be maintained as far as possible during fusion. The guided filter cost function $G_1(k_x, d_x)$, which incorporates a perceptual weight factor for the defect edge, can detect defect edge information in reconstructed thermal images with drastic variations of infrared thermal radiation. The edge perception weight factor is defined from the variance of the thermal radiation values: regions with large thermal radiation variance receive a larger weight factor [12]. The guided filter cost function $G_1(k_x, d_x)$ weighted with defect edge feature perception is defined at each coordinate position as

$$G_1(k_x, d_x) = \sum_{i \in W_x} \left[ (k_x Y_i + d_x - Z_i)^2 + \frac{\delta}{\vartheta(Y_x)} k_x^2 \right], \tag{3.5}$$

The linear transformation coefficients $k_x$ and $d_x$ are determined by the thermal radiation variance perception cost function $G_1(k_x, d_x)$. In Eq. (3.5), $Z_i$ is the thermal radiation value at the $i$-th coordinate point of the input image $Z$ and $Y_i$ is the thermal radiation value at the $i$-th coordinate point of the reconstructed thermal image $Y$. $\delta$ is the regularization factor, and $\vartheta(Y_x)$ is the edge perception weight factor, defined as

$$\vartheta(Y_x) = \frac{1}{I \times J} \sum_{n=1}^{I \times J} \frac{\sigma_{Y_x,1}^2 + \gamma}{\sigma_{Y_n,1}^2 + \gamma}, \tag{3.6}$$

where $I$ and $J$ are the numbers of rows and columns of the reconstructed thermal image, $\sigma_{Y_x,1}^2$ denotes the variance of the thermal radiation values in the $3 \times 3$ window centred on the pixel $Y_x$ in the reconstructed thermal image $Y$, and $\gamma$ is a small constant equal to $(0.001 \times DR(Z))^2$, $DR(\cdot)$ being the dynamic range of $Z$. Minimizing the cost function $G_1(k_x, d_x)$ yields

$$\frac{\partial G_1}{\partial k_x} = \sum_{i \in W_x} \left[ 2(k_x Y_i + d_x - Z_i) Y_i + \frac{2\delta}{\vartheta(Y_x)} k_x \right] = 0, \tag{3.7}$$

$$\frac{\partial G_1}{\partial d_x} = \sum_{i \in W_x} 2(k_x Y_i + d_x - Z_i) = 0. \tag{3.8}$$

Rearranging (3.8) gives the expression for $d_x$:

$$k_x \sum_{i \in W_x} Y_i + |W_x| d_x - \sum_{i \in W_x} Z_i = 0, \qquad d_x = \frac{1}{|W_x|} \left( \sum_{i \in W_x} Z_i - k_x \sum_{i \in W_x} Y_i \right), \tag{3.9}$$

where $|W_x|$ denotes the number of pixel points within the rectangular window $W_x$. Substituting the expression for $d_x$ into (3.7), expanding, collecting the terms in $k_x$ and dividing both sides by $|W_x|$ gives

$$k_x \left( \frac{1}{|W_x|} \sum_{i \in W_x} Y_i^2 - \Big( \frac{1}{|W_x|} \sum_{i \in W_x} Y_i \Big)^2 + \frac{\delta}{\vartheta(Y_x)} \right) = \frac{1}{|W_x|} \sum_{i \in W_x} Z_i Y_i - \frac{1}{|W_x|} \sum_{i \in W_x} Y_i \cdot \frac{1}{|W_x|} \sum_{i \in W_x} Z_i. \tag{3.10}$$

Here $\frac{1}{|W_x|} \sum_{i \in W_x} Y_i^2$ is the mean of the squared pixels of the guidance image $Y$ within the window $W_x$, denoted $\overline{Y_x^2}$, and $\big( \frac{1}{|W_x|} \sum_{i \in W_x} Y_i \big)^2$ is the square of the pixel mean within $W_x$, denoted $(\bar{Y}_x)^2$. The formula for $k_x$ can therefore be transformed into

$$k_x = \frac{\frac{1}{|W_x|} \sum_{i \in W_x} Z_i Y_i - \bar{Y}_x \bar{Z}_x}{\sigma_{x,Y}^2 + \frac{\delta}{\vartheta(Y_x)}}, \tag{3.11}$$

where $\sigma_{x,Y}^2$ is the pixel value variance of the guidance image $Y$ within the window $W_x$. Moreover, $\frac{1}{|W_x|} \sum_{i \in W_x} Z_i Y_i$ is the mean of the Hadamard product of the input image $Z$ and the guidance image $Y$ within $W_x$, denoted $h_{x, Y \otimes Z}$, where $\otimes$ is the Hadamard product of matrices, while $\mu_{x,Y}$ and $\mu_{x,Z}$ denote the means of the guidance image $Y$ and the input image $Z$ within $W_x$. Substituting these values, together with the edge perception weight $\vartheta(Y_x)$, gives the final expression for $k_x$:

$$k_x = \frac{h_{x, Y \otimes Z} - \mu_{x,Y} \mu_{x,Z}}{\sigma_{x,Y}^2 + \frac{\delta}{\vartheta(Y_x)}} = \frac{\left( h_{x, Y \otimes Z} - \mu_{x,Y} \mu_{x,Z} \right) \cdot \sum_{n=1}^{I \times J} \frac{\sigma_{Y_x,1}^2 + \gamma}{\sigma_{Y_n,1}^2 + \gamma}}{\sigma_{x,Y}^2 \cdot \sum_{n=1}^{I \times J} \frac{\sigma_{Y_x,1}^2 + \gamma}{\sigma_{Y_n,1}^2 + \gamma} + \delta (I \times J)}. \tag{3.12}$$

With $\mu_{x,Z}$ the pixel average of the input image $Z$ within $W_x$ and $\mu_{x,Y}$ the pixel average of the guidance image $Y$ within $W_x$, the final expression for $d_x$ is

$$d_x = \frac{1}{|W_x|} \left( \sum_{i \in W_x} Z_i - k_x \sum_{i \in W_x} Y_i \right) = \mu_{x,Z} - k_x \mu_{x,Y}. \tag{3.13}$$

A minimal sketch of these coefficient computations is given below.
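Computationally, Eqs. (3.6) and (3.12)–(3.13) only add a per-pixel weight to the plain guided filter sketched earlier; a minimal sketch, reusing `uniform_filter` window means, with `gamma` set per the text's $(0.001 \times DR(Z))^2$:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def variance_aware_coeffs(Y, Z, r=4, delta=1e-3):
    """Eqs. (3.12)-(3.13): guided-filter coefficients with the edge
    perception weight factor of Eq. (3.6)."""
    size = 2 * r + 1
    mean = lambda img: uniform_filter(img, size=size)
    gamma = (0.001 * (Z.max() - Z.min())) ** 2
    # sigma_{Yx,1}^2: local variance in a 3x3 window (Eq. 3.6).
    var3 = np.maximum(uniform_filter(Y * Y, size=3)
                      - uniform_filter(Y, size=3) ** 2, 0.0)
    theta = (var3 + gamma) * np.mean(1.0 / (var3 + gamma))  # Eq. (3.6)
    mu_Y, mu_Z = mean(Y), mean(Z)
    h = mean(Y * Z)                         # Hadamard-product mean
    var_Y = mean(Y * Y) - mu_Y ** 2
    k = (h - mu_Y * mu_Z) / (var_Y + delta / theta)   # Eq. (3.12)
    d = mu_Z - k * mu_Y                               # Eq. (3.13)
    return k, d
```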

3.3.2 Multi-window Edge-Aware Objective Function

Small defects and sub-surface damage with fuzzy contours or low thermal radiation values deserve particular attention in spacecraft damage detection. Edge-aware guided filtering incorporating multiple windows [13] can detect small impact damage defects in reconstructed thermal images; with its weight factor defined in the gradient domain, it retains the texture of fine-sized defects well. Therefore, to enhance the characterisation of small defects in the fused thermal images of complex impact damage, the following cost function $G_2(k_x, d_x)$ is considered:

$$G_2(k_x, d_x) = \sum_{i \in W_x} \left[ (k_x Y_i + d_x - Z_i)^2 + \frac{\rho}{\Lambda(Y_x)} (k_x - \nu_x)^2 \right], \tag{3.14}$$

where $\rho$ is the regularization parameter, $\Lambda(Y_x)$ is the multi-window edge awareness weight, and $\nu_x$ is a factor that adjusts $k_x$. They are defined respectively as follows:

$$\Lambda(Y_x) = \frac{1}{I \times J} \sum_{n=1}^{I \times J} \frac{\sigma_{Y_x,1} \sigma_{Y_x,W_x} + \zeta}{\sigma_{Y_n,1} \sigma_{Y_n,W_n} + \zeta}. \tag{3.15}$$

Here $\sigma_{Y_x,1}$ is the standard deviation of pixel values in a $3 \times 3$ window centred at pixel $Y_x$ in the guidance image $Y$; $\sigma_{Y_x,W_x}$ is the standard deviation of pixel values in the rectangular window $W_x$ centred at pixel $Y_x$; $I \times J$ is the total number of pixels in the guidance image; and $\zeta$ is a very small constant of size $(0.001 \times L)^2$, where $L$ is the dynamic range of the input image. The definition of $\nu_x$ is

$$\nu_x = 1 - \frac{1}{1 + e^{\eta \left( \sigma_{Y_x,1} \sigma_{Y_x,W_x} - \frac{1}{I \times J} \sum_{n=1}^{I \times J} \sigma_{Y_n,1} \sigma_{Y_n,W_n} \right)}}, \tag{3.16}$$

wherein $\eta = 4 \Big/ \left( \frac{1}{I \times J} \sum_{n=1}^{I \times J} \sigma_{Y_n,1} \sigma_{Y_n,W_n} - \min_{n \in I \times J} \{ \sigma_{Y_n,1} \sigma_{Y_n,W_n} \} \right)$; $\sigma_{Y_n,1}$ $(n \in I \times J)$ denotes the standard deviation of pixels in a $3 \times 3$ window centred at $Y_n$, and $\sigma_{Y_n,W_n}$ denotes the standard deviation of pixels in $W_n$ centred at $Y_n$. $k_x$ and $d_x$ are derived as follows:

$$\frac{\partial G_2}{\partial k_x} = \sum_{i \in W_x} \left[ 2(k_x Y_i + d_x - Z_i) Y_i + \frac{2\rho}{\Lambda(Y_x)} (k_x - \nu_x) \right] = 0, \tag{3.17}$$

$$\frac{\partial G_2}{\partial d_x} = \sum_{i \in W_x} 2(k_x Y_i + d_x - Z_i) = 0. \tag{3.18}$$

Transforming (3.18) yields the expression for $d_x$:

$$k_x \sum_{i \in W_x} Y_i + |W_x| d_x - \sum_{i \in W_x} Z_i = 0, \qquad d_x = \frac{1}{|W_x|} \left( \sum_{i \in W_x} Z_i - k_x \sum_{i \in W_x} Y_i \right),$$

where $|W_x|$ has the same meaning as above. Substituting the expression for $d_x$ into Eq. (3.17), expanding, collecting the terms in $k_x$ and dividing both sides by $|W_x|$ gives

$$k_x \left( \frac{1}{|W_x|} \sum_{i \in W_x} Y_i^2 - \Big( \frac{1}{|W_x|} \sum_{i \in W_x} Y_i \Big)^2 + \frac{\rho}{\Lambda(Y_x)} \right) = \frac{1}{|W_x|} \sum_{i \in W_x} Z_i Y_i + \frac{\rho}{\Lambda(Y_x)} \nu_x - \frac{1}{|W_x|} \sum_{i \in W_x} Y_i \cdot \frac{1}{|W_x|} \sum_{i \in W_x} Z_i. \tag{3.19}$$

Using the notation introduced above, the formula for $k_x$ becomes

$$k_x = \frac{\frac{1}{|W_x|} \sum_{i \in W_x} Z_i Y_i - \bar{Y}_x \bar{Z}_x + \frac{\rho}{\Lambda(Y_x)} \nu_x}{\sigma_{x,Y}^2 + \frac{\rho}{\Lambda(Y_x)}}. \tag{3.20}$$

Substituting $\Lambda(Y_x)$ and $\nu_x$ gives the final expression for $k_x$:

$$k_x = \frac{\Lambda(Y_x) \left( h_{x, Y \otimes Z} - \mu_{x,Y} \mu_{x,Z} \right) + \rho \nu_x}{\Lambda(Y_x) \sigma_{x,Y}^2 + \rho} = \frac{\left( h_{x, Y \otimes Z} - \mu_{x,Y} \mu_{x,Z} \right) \cdot \sum_{n=1}^{I \times J} \frac{\sigma_{Y_x,1} \sigma_{Y_x,W_x} + \zeta}{\sigma_{Y_n,1} \sigma_{Y_n,W_n} + \zeta} + \rho \nu_x I J}{\sigma_{x,Y}^2 \cdot \sum_{n=1}^{I \times J} \frac{\sigma_{Y_x,1} \sigma_{Y_x,W_x} + \zeta}{\sigma_{Y_n,1} \sigma_{Y_n,W_n} + \zeta} + I J \rho}. \tag{3.21}$$

The final expression for $d_x$ is

$$d_x = \frac{1}{|W_x|} \left( \sum_{i \in W_x} Z_i - k_x \sum_{i \in W_x} Y_i \right) = \mu_{x,Z} - k_x \mu_{x,Y}. \tag{3.22}$$

x

3.3.3 Local Detail Extraction Objective Function The noise information in the fused thermal image not only interferes with the judgment of subtle defects visually, but also affects the accuracy of defect quantification recognition. Based on the local Laplacian of Gaussian (LoG) operator [11], the noise reduction guided filter uses the Gaussian Laplacian edge detection operator to detect the edge information in the reconstructed thermal image, and combines guided filtering to extract and preserve the areas with drastic gradient changes in the original reconstructed thermal image, thus achieving better smoothing of the background information in other areas of the image and removing noise. The cost function of the local LoG operator space noise reduction guided filter is defined as .G 3 (k x , dx ), that is, ∑ τ . G 3 (k x , d x ) = [(k x Yi + dx − Z i )2 + k x 2 ], (3.23) Ψ (Y ) x i∈W x

where .τ is the regularization parameter and .Ψ(Yx ) is the local LoG ( Laplacian-ofGaussian) edge weight factor, which is defined as follows: Ψ(Yx )

.

I ×J 1 ∑ |LoG(Yx )| + δ LoG = , I × J n=1 |LoG(Yn )| + δ LoG

(3.24)


therein, $LoG(\cdot)$ is the Laplacian-of-Gaussian edge detection operator, $I \times J$ is the total number of pixel points in the guidance image, $|\cdot|$ is the absolute value operation, and $\delta_{LoG}$ is 0.1 times the maximum value of the LoG image. With a derivation similar to the previous two subsections, we obtain

$$k_x = \frac{\frac{1}{|W_x|} \sum_{i \in W_x} Z_i Y_i - \bar{Y}_x \bar{Z}_x}{\sigma_{x,Y}^2 + \frac{\tau}{\Psi(Y_x)}}. \tag{3.25}$$

Substituting the edge perception weight $\Psi(Y_x)$ gives the final expressions:

$$k_x = \frac{h_{x, Y \otimes Z} - \mu_{x,Y} \mu_{x,Z}}{\sigma_{x,Y}^2 + \frac{\tau}{\Psi(Y_x)}} = \frac{\left( h_{x, Y \otimes Z} - \mu_{x,Y} \mu_{x,Z} \right) \cdot \sum_{n=1}^{I \times J} \frac{|LoG(Y_x)| + \delta_{LoG}}{|LoG(Y_n)| + \delta_{LoG}}}{\sigma_{x,Y}^2 \cdot \sum_{n=1}^{I \times J} \frac{|LoG(Y_x)| + \delta_{LoG}}{|LoG(Y_n)| + \delta_{LoG}} + \tau I J}, \tag{3.26}$$

$$d_x = \mu_{x,Z} - k_x \mu_{x,Y}. \tag{3.27}$$

Each of the above three cost functions, on its own, determines the desired effect of the corresponding fused thermal image. To meet the fusion requirements of complex defect types, however, the three cost functions must be optimized simultaneously so that the fused infrared thermal image is optimal for complex defect types. Since the three functions conflict with one another, the fusion requirements of the individual defect types are not identical, and an algorithm is needed that can drive all three conflicting objective functions toward their optima at the same time. We therefore introduce the following multi-objective evolutionary algorithm.


3.4 Multi-objective Guided Filtering Based Weight Acquisition Layer

3.4.1 Two-Layer Multi-objective Fusion Framework

Single-objective guided filtering can hardly balance, for complex defect types, the simultaneous requirements of preserving large defects, preserving small-defect details, and eliminating noise. As shown in Fig. 3.2, we combine a multi-objective evolutionary optimization algorithm with the guided filtering cost functions to exploit the advantages of each guided filter jointly, aiming to improve the fusion of multiple thermal images of complex defect types.

As shown in Fig. 3.2, the overall fusion framework is divided into two layers, because iterating directly on the original reconstructed thermal images would lead to excessive computational consumption and low efficiency. First, the original reconstructed thermal images are downsampled and fed into the multi-objective evolutionary optimization algorithm to find the optimal weight vector; based on this optimal weight vector, multi-guided weighted filtering is then applied to the original-size reconstructed thermal images to complete the pixel-level image fusion.

Fig. 3.2 Proposed algorithm framework: a hypervelocity particle impacting the bumper and rear wall produces a rear wall with complex defects (outer and inner impact areas, penetration); the reconstructed images are passed to multi-objective guided filtering with the three objectives (edge retention for large defects, detail retention for tiny defects, noise cancellation), yielding a Pareto-optimal weight set used in the double-layer multi-objective fusion that produces the final fusion image


Here we introduce a multi-objective evolutionary optimization algorithm [14–16] in order to optimize the three fusion objective functions described above simultaneously. For the downsampled reconstructed thermal image, the following multi-objective problem is defined in MOEA/D:

$$\operatorname{Min} F(X_{k_x}) = \left[ G_1(X_{k_x}), G_2(X_{k_x}), G_3(X_{k_x}) \right]^{T}, \quad \text{s.t. } X_{k_x} \in \Omega, \tag{3.28}$$

where $X_{k_x}$ is the linear transformation coefficient $k_x$ in the $x$-th guided filter window $W_x$; since $d_x$ can be derived from $k_x$, $k_x$ is taken as the independent variable. $\Omega$ is the feasible domain. $G_1(X_{k_x})$ is the value of the thermal radiation variance-aware objective function under $X_{k_x}$, $G_2(X_{k_x})$ is the value of the multi-window edge-aware objective function, and $G_3(X_{k_x})$ is the value of the local detail extraction objective function.

3.4.2 Multi-objective Decomposition Based on Penalty Term

Because spacecraft defect detection is complex, the required set of Pareto-optimal solutions varies from object to object, and the penalty-based boundary intersection (PBI) method is well suited to its needs. The penalty term determines the angle of the pressure surface during population evolution and thus drives the population to approach the optimal solution set in different ways, so the diversity and the convergence of the population are controllable and can be adapted dynamically to different multi-objective optimisation problems. Even when optimizing two or more objectives, a uniformly distributed solution set can still be obtained. The decomposition equation is

$$\min g^{pbi}\!\left( X_{k_x} \mid \vec{\omega}, \vec{f}^{*} \right) = \Gamma_1 + a \cdot \Gamma_2, \quad \Gamma_1 = \frac{\left\| \left( \vec{f}^{*} - F(X_{k_x}) \right)^{T} \vec{\omega} \right\|}{\left\| \vec{\omega} \right\|}, \quad \Gamma_2 = \left\| F(X_{k_x}) - \left( \vec{f}^{*} + \Gamma_1 \cdot \vec{\omega} \right) \right\|, \quad \text{s.t. } X_{k_x} \in \Omega, \tag{3.29}$$

where $\vec{f}^{*}$ is the reference point vector $\vec{f}^{*} = (g_1^{*}, g_2^{*}, g_3^{*})^{T}$, whose initial value is set to $g_j^{*} = \min \{ G_j(X_{k_x}) \mid X_{k_x} \in \Omega \}$, $j = 1, 2, 3$; $\vec{\omega} = \{\omega_1, \omega_2, \omega_3\}$ is a series of weight vectors uniformly distributed over the feasible domain; and $a$ is a pre-determined penalty factor. A minimal sketch of this PBI scalarization is given below.
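For illustration, the PBI aggregation of Eq. (3.29) can be written in a few lines of NumPy; the function evaluates one sub-problem's scalar objective given a candidate's objective vector `F_x`, a weight vector `w`, the reference point `f_star` and the penalty `a` (all names illustrative).

```python
import numpy as np

def pbi(F_x, w, f_star, a=5.0):
    """Penalty-based boundary intersection value of Eq. (3.29)."""
    diff = f_star - F_x                      # vector from F(X) to the reference point
    g1 = np.abs(diff @ w) / np.linalg.norm(w)        # projection length Gamma_1
    g2 = np.linalg.norm(F_x - (f_star + g1 * w))     # distance to the weight line
    return g1 + a * g2

# Example with three objectives (G1, G2, G3):
F_x = np.array([0.4, 0.7, 0.2])
w = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
f_star = np.array([0.1, 0.2, 0.05])
print(pbi(F_x, w, f_star))
```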

As shown in Fig. 3.3, for a weight vector $\vec{\omega}$, the above decomposition can be viewed geometrically according to the decomposition formula.

Fig. 3.3 Analysis of the PBI formula

Take $\vec{f}^*$ as the reference point and $F(X_{k_x})$ as a point in the objective function space. Then $\vec{f}^* - F(X_{k_x})$ represents the vector from the origin to the reference point minus the vector from the origin to $F(X_{k_x})$, tentatively named $\mu$. Since $(\vec{f}^* - F(X_{k_x}))^T \vec{\omega} = \|\vec{f}^* - F(X_{k_x})\| \, \|\vec{\omega}\| \cos\theta$, it follows that $d_1 = \|\vec{f}^* - F(X_{k_x})\| \, |\cos\theta|$. So $d_1$ can be understood as the projection length of the vector $\mu$ in the direction of the weight vector, as shown in Fig. 3.3.

According to the formula, $d_2 = \|F(X_{k_x}) - (\vec{f}^* + d_1 \cdot \vec{\omega})\|$. Because $d_1$ is a positive scalar, $d_1 \cdot \vec{\omega}$ is a segment of the vector in the direction of the weight vector, that is, a vector in the same direction as the weight vector. So $\vec{f}^* + d_1 \cdot \vec{\omega}$ denotes the reference point plus the sub-vector along the weight vector, while $F(X_{k_x}) - (\vec{f}^* + d_1 \cdot \vec{\omega})$ is the point in objective function space minus the vector $\vec{f}^* + d_1 \cdot \vec{\omega}$, of which the norm is then taken. So $d_2$ can be expressed geometrically as shown in Fig. 3.3.

According to the above analysis, under the constraint of the decomposition function, an individual solution experiences evolutionary pressure in two directions. One is the vertical evolutionary pressure determined by $d_1$: the solution evolves along the weight vector towards the Pareto Front, as shown in Fig. 3.4. The other is the lateral evolutionary pressure determined by $d_2$: the solution approaches the weight vector in a direction perpendicular to it, as shown in Fig. 3.5. The evolutionary pressures in both directions are analyzed as follows:


Fig. 3.4 Analysis of the PBI formula: Situation-(a)

Fig. 3.5 Analysis of the PBI formula: Situation-(b)


As shown in Figs. 3.4 and 3.5, for the weight vector $\vec{\omega}$, the sub-objective function of point $X_A$ is $\min g^{pbi}(X_A|\vec{\omega}, \vec{f}^*) = {}_A d_1(\vec{\omega}) + a \times {}_A d_2(\vec{\omega})$. From $X_A$, take the point $X_B$ along the direction perpendicular to the weight vector and towards the weight vector; its sub-objective function is $\min g^{pbi}(X_B|\vec{\omega}, \vec{f}^*) = {}_B d_1(\vec{\omega}) + a \times {}_B d_2(\vec{\omega})$. From $X_A$, take the point $X_C$ along the opposite direction of the weight vector; its sub-objective function is $\min g^{pbi}(X_C|\vec{\omega}, \vec{f}^*) = {}_C d_1(\vec{\omega}) + a \times {}_C d_2(\vec{\omega})$.

(1) For the case shown in Fig. 3.4, the projection lengths of points $X_A$ and $X_B$ along the direction of the weight vector are equal, i.e., ${}_A d_1(\vec{\omega}) = {}_B d_1(\vec{\omega})$, indicating that the points $X_A$ and $X_B$ are equally far from the true Pareto Front. And there is

$${}_B d_2(\vec{\omega}) < {}_A d_2(\vec{\omega}), \tag{3.30}$$

showing that the vertical distance from point $X_B$ to the weight vector is less than that of $X_A$. Then we have

$$g^{pbi}(X_B|\vec{\omega}, \vec{f}^*) = {}_B d_1(\vec{\omega}) + a \times {}_B d_2(\vec{\omega}) < {}_A d_1(\vec{\omega}) + a \times {}_A d_2(\vec{\omega}) = g^{pbi}(X_A|\vec{\omega}, \vec{f}^*). \tag{3.31}$$

So the sub-objective function of $X_B$ is less than that of $X_A$: $g^{pbi}(X_B|\vec{\omega}, \vec{f}^*) < g^{pbi}(X_A|\vec{\omega}, \vec{f}^*)$. Therefore, $X_A$ will evolve in the direction towards point $X_B$, that is, the point gets closer to the weight vector $\vec{\omega}$. The individual solution tends to approach the weight vector in the direction perpendicular to the weight vector under lateral evolutionary pressure.

(2) For the case shown in Fig. 3.5, the vertical distances between points $X_A$, $X_C$ and the weight vector $\vec{\omega}$ are equal, i.e., ${}_A d_2(\vec{\omega}) = {}_C d_2(\vec{\omega})$. For a certain $a$ there is

$${}_C d_1(\vec{\omega}) < {}_A d_1(\vec{\omega}). \tag{3.32}$$

The above equation shows that the distance of point $X_C$ to the true Pareto Front is less than that of $X_A$, i.e., the projection length of point $X_C$ in the direction of the weight vector is smaller. Then we have

$$g^{pbi}(X_C|\vec{\omega}, \vec{f}^*) = {}_C d_1(\vec{\omega}) + a \times {}_C d_2(\vec{\omega}) < {}_A d_1(\vec{\omega}) + a \times {}_A d_2(\vec{\omega}) = g^{pbi}(X_A|\vec{\omega}, \vec{f}^*). \tag{3.33}$$

Therefore, the sub-objective function value of $X_C$ is less than that of point $X_A$: $g^{pbi}(X_C|\vec{\omega}, \vec{f}^*) < g^{pbi}(X_A|\vec{\omega}, \vec{f}^*)$. The population will therefore evolve in the direction towards point $X_C$, that is, the point gets closer to the intersection of the weight vector and the true Pareto Front. Individuals tend to evolve towards the Pareto Front along the direction of the weight vector under vertical evolutionary pressure.

As shown in Fig. 3.6, based on the analysis of points $X_A$, $X_B$ and $X_C$ above, we can draw the conical contour of the evolution of the population. The exact shape of the contour surface depends on the value of $a$.

Fig. 3.6 The case where the solutions are not on the same contour plane

When the sub-objective function of point $X_B$ is better than that of point $X_C$, there is $a > \frac{{}_C d_1(\vec{\omega}) - {}_B d_1(\vec{\omega})}{{}_B d_2(\vec{\omega}) - {}_C d_2(\vec{\omega})}$; when the sub-objective function of point $X_C$ is better than that of point $X_B$, we have $a < \frac{{}_C d_1(\vec{\omega}) - {}_B d_1(\vec{\omega})}{{}_B d_2(\vec{\omega}) - {}_C d_2(\vec{\omega})}$.
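To make the decomposition concrete, the following is a minimal sketch of the PBI scalarization in Eq. (3.29), assuming NumPy; the function name `pbi` and the normalization of the weight vector inside the function are our own assumptions, not the book's implementation.

```python
import numpy as np

def pbi(F_x, f_star, omega, a=5.0):
    """Penalty-based boundary intersection value g_pbi = d1 + a * d2 (Eq. 3.29)."""
    F_x, f_star, omega = map(np.asarray, (F_x, f_star, omega))
    w = omega / np.linalg.norm(omega)             # unit vector along the weight direction
    d1 = abs((f_star - F_x) @ w)                  # projection length onto the weight vector
    d2 = np.linalg.norm(F_x - (f_star + d1 * w))  # distance to the weight-vector line
    return d1 + a * d2
```

A larger penalty factor `a` widens the contour cone, trading vertical convergence pressure for lateral diversity pressure, which matches the threshold on $a$ derived above.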

3.4.3 Implementation of Multi-objective Guided Filtering Based Weight Acquisition Layer

Detailed steps of the multi-objective evolutionary algorithm-based fusion weight acquisition layer are as follows:

Step 1: Initialize a set of uniformly distributed weight vectors $\vec{\omega}^1, \ldots, \vec{\omega}^{N_p}$; the subproblems can then be obtained using the decomposition of the boundary intersection method based on penalty terms:

$$\min g^{pbi}\left(X_{k_x} \mid \vec{\omega}^j, \vec{f}^*\right) = d_1 + a \cdot d_2, \tag{3.34}$$

herein, $\vec{\omega}^j = (\omega_1^j, \omega_2^j, \omega_3^j)^T$, $1 \geq \omega_i^j \geq 0$, $\sum_{i=1}^{3} \omega_i^j = 1$; $\vec{f}^* = (g_1^*, g_2^*, g_3^*)^T$, $g_i^* \in \mathbb{R}$, is the reference point, and $g_i^*$ is the reference point component corresponding to $G_i(X_{k_x})$.

Step 2: Randomly generate an initial population of transform coefficients, generating $N_P$ random values $X_{k_x}^1(0), \ldots, X_{k_x}^{N_p}(0) \in \Omega$ at iteration $t = 0$.


Step 3: Initialize the non-dominated solution set $NDS(0) = \phi$, which stores the candidate points for optimal transform coefficients; initialize the reference point $\vec{f}^*(0) = (g_1(0), g_2(0), g_3(0))^T$, where $g_i(0) = \min\{G_i(X_{k_x}^1(0)), \ldots, G_i(X_{k_x}^{N_p}(0))\}$, $i = 1, 2, 3$.

Step 4: For each weight vector $\vec{\omega}^j$, find the $TT$ weight vectors closest to it according to the following formula:

$$dis_{jt} = \sqrt{\sum_{k=1}^{3} \left( \omega_k^t - \omega_k^j \right)^2}, \quad t = 1, \ldots, N_P, \ t \neq j, \tag{3.35}$$

here, $dis_{jt}$ denotes the Euclidean distance between $\vec{\omega}^j = (\omega_1^j, \omega_2^j, \omega_3^j)^T$ and $\vec{\omega}^t = (\omega_1^t, \omega_2^t, \omega_3^t)^T$. Let $B[j] = \{j_1, \ldots, j_{TT}\}$ denote the index set of the $TT$ nearest weight vectors to $\vec{\omega}^j$. Then $\vec{\omega}^{j_1}, \vec{\omega}^{j_2}, \ldots, \vec{\omega}^{j_{TT}}$ are the $TT$ nearest weight vectors to $\vec{\omega}^j$.

Step 5: At the $t$-th evolution, perform the following update operations for each weight vector $\vec{\omega}^j$, $j = 1, \ldots, N_p$:

(a) Update individuals: two indices $m$ and $n$ are randomly selected from $B[j]$, and a threshold $GP \in (0, 1)$ is set. Generate a new solution $Y(t) = GP \times X_{k_x}^m(t) + (1 - GP) \times X_{k_x}^n(t)$ from $X_{k_x}^m(t)$ and $X_{k_x}^n(t)$.

(b) Update the reference point $\vec{f}^*(t)$: for each $i = 1, \ldots, 3$, if $g_i(t) > G_i(Y(t))$, let $g_i(t) = G_i(Y(t))$.

(c) Update the neighbourhood solutions: for $t \in B[j]$, if $g^{pbi}(Y(t)|\vec{\omega}^j, \vec{f}^*) \leq g^{pbi}(X_{k_x}^t|\vec{\omega}^j, \vec{f}^*)$, let $X_{k_x}^t(t+1) = Y(t)$.

(d) Update $NDS(t)$: keep the solution vectors in $NDS(t)$ that dominate $Y(t)$ and remove all solution vectors that are dominated by $Y(t)$. If none of the vectors in $NDS(t)$ dominate $Y(t)$, add $Y(t)$ to $NDS(t)$.

Step 6: If the termination condition is not satisfied, set $t = t + 1$ and repeat Step 5. Otherwise, output the final solution set $NDS(t)$.

Step 7: Select a compromise solution $X_{k_x}^q$ from $NDS(t)$, use its weight vector value as the optimal weight set $\vec{\gamma}^{(q)} = (\gamma_1^{(q)}, \gamma_2^{(q)}, \gamma_3^{(q)})^T = (\omega_1^{(q)}, \omega_2^{(q)}, \omega_3^{(q)})^T$, and pass it to the full pixel fusion layer.
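The following compact sketch ties Steps 1–7 together, assuming NumPy. Here `G` stands for the mapping from a coefficient vector to its three guided-filtering cost values; the Dirichlet initialization of the weight vectors and the use of each neighbour's own weight vector in the update (the standard MOEA/D convention [14]) are our assumptions, not details fixed by the book.

```python
import numpy as np

rng = np.random.default_rng(0)

def pbi(F_x, f_star, w, a=5.0):
    # g_pbi = d1 + a*d2 with a normalized weight vector (Eq. 3.34)
    w = w / np.linalg.norm(w)
    d1 = abs((f_star - F_x) @ w)
    return d1 + a * np.linalg.norm(F_x - (f_star + d1 * w))

def moead(G, dim, N_p=20, TT=6, T=100, GP=0.5):
    """Steps 1-7: decomposition-based search for the optimal fusion weights."""
    W = rng.dirichlet(np.ones(3), size=N_p)              # Step 1: weight vectors
    D = np.linalg.norm(W[:, None] - W[None, :], axis=2)  # Step 4: neighbourhoods
    B = np.argsort(D, axis=1)[:, 1:TT + 1]
    X = rng.random((N_p, dim))                           # Step 2: initial population
    F = np.array([G(x) for x in X])
    f_star = F.min(axis=0)                               # Step 3: reference point
    NDS = []                                             # non-dominated solutions
    for _ in range(T):                                   # Steps 5-6
        for j in range(N_p):
            m, n = rng.choice(B[j], 2, replace=False)
            Y = GP * X[m] + (1 - GP) * X[n]              # (a) new solution
            FY = np.asarray(G(Y))
            f_star = np.minimum(f_star, FY)              # (b) update reference point
            for t in B[j]:                               # (c) neighbourhood update
                if pbi(FY, f_star, W[t]) <= pbi(F[t], f_star, W[t]):
                    X[t], F[t] = Y, FY
            # (d) drop NDS members dominated by Y; add Y if itself non-dominated
            NDS = [(x, f) for x, f in NDS
                   if not (np.all(FY <= f) and np.any(FY < f))]
            if not any(np.all(f <= FY) and np.any(f < FY) for _, f in NDS):
                NDS.append((Y.copy(), FY))
    return NDS, W                                        # Step 7: pick a compromise
```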

3.5 Multi-scale Fusion of Full Pixel Layers Based on Optimal Weight

3.5.1 Dual Scale Decomposition of Reconstructed Thermal Images on Full Pixel Layers

Considering the complexity of the algorithm, the base layer image and the detail layer image of the reconstructed thermal image are obtained by means of a spatial mean filter. The source image to be fused is decomposed into a base layer and a detail layer by mean filtering. The base layer image $J_n$ is obtained from the following equation:

$$J_n = Z_n * M, \tag{3.36}$$

where $Z_n$ is the $n$-th reconstructed thermal image to be fused, $M$ is the mean filter, and the filter size is usually set to $31 \times 31$. After acquiring the base image, the detail layer image $T_n$ is obtained by subtracting the base image from the source image:

$$T_n = Z_n - J_n. \tag{3.37}$$
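As a concrete illustration of Eqs. (3.36)–(3.37), here is a minimal sketch assuming SciPy; the reflective border handling is our assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def dual_scale_decompose(Z, size=31):
    """Base layer J = Z * M (mean filter, Eq. 3.36); detail layer T = Z - J (Eq. 3.37)."""
    J = uniform_filter(Z.astype(np.float64), size=size, mode="reflect")
    return J, Z - J
```

Each source image $Z_n$ is decomposed independently in this way before the weight maps are computed.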

3.5.2 Multi-guided Filtering Based Weight Map Acquisition

3.5.2.1 Initial Fusion Weight Map Acquisition

After the two-scale decomposition of the reconstructed thermal images, we obtain the base layer image and the detail layer image for each reconstructed thermal image. Different fusion strategies now need to be specified for the base layer images and the detail layer images to obtain a better fusion effect. For the base layer images, we perform a simple weighted fusion. For the detail layer images, which are rich in information, we use a weighting strategy based on multi-guided filtering. First, the high-pass image $H_n$ is obtained by Laplace filtering of each source image:

$$H_n = Z_n * Q, \tag{3.38}$$

where $Q$ is a Laplace filter of size $3 \times 3$. The local average of the absolute values of the high-pass image $H_n$ is then used to construct the saliency map $S_n$:

$$S_n = |H_n| * GL_{r_{gf}, \sigma_{gf}}, \tag{3.39}$$

where $GL_{r_{gf}, \sigma_{gf}}$ is a Gaussian low-pass filter of size $(2r_{gf} + 1) \times (2r_{gf} + 1)$ with parameter $\sigma_{gf}$.

The saliency map provides a good description of the level of salience of the image details. The saliency map is then used to determine the weight map by means of the following equation:

$$P_n^x = \begin{cases} 1, & \text{if } S_n^x = \max\left(S_1^x, S_2^x, \ldots, S_N^x\right), \\ 0, & \text{otherwise}, \end{cases} \tag{3.40}$$

where $N$ is the number of reconstructed thermal images to be fused and $S_n^x$ is the salience value of the $x$-th pixel in the $n$-th saliency map.
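A minimal sketch of Eqs. (3.38)–(3.40), assuming SciPy; the particular $3 \times 3$ Laplacian stencil, border mode, and the mapping of the Gaussian radius $r_{gf}$ onto SciPy's `truncate` parameter are our assumptions.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

LAPLACE = np.array([[0.,  1., 0.],
                    [1., -4., 1.],
                    [0.,  1., 0.]])          # 3x3 Laplace filter Q

def initial_weight_maps(images, r_gf=5, sigma_gf=5.0):
    """Saliency maps S_n (Eqs. 3.38-3.39) and winner-take-all weights P_n (Eq. 3.40)."""
    S = []
    for Z in images:
        H = convolve(Z.astype(np.float64), LAPLACE, mode="reflect")  # H_n = Z_n * Q
        # |H_n| smoothed by a Gaussian of radius r_gf approximates Eq. (3.39)
        S.append(gaussian_filter(np.abs(H), sigma=sigma_gf, truncate=r_gf / sigma_gf))
    S = np.stack(S)
    winner = S.argmax(axis=0)                # pixel-wise most salient source image
    P = (winner[None] == np.arange(len(images))[:, None, None]).astype(np.float64)
    return S, P
```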

3.5.2.2 Multi-guided Filtering Based on Optimal Weights

After the multi-objective optimization weight acquisition layer, the Pareto optimal weight set $\vec{\gamma}^{(q)} = (\gamma_1^{(q)}, \gamma_2^{(q)}, \gamma_3^{(q)})^T = (\omega_1^{(q)}, \omega_2^{(q)}, \omega_3^{(q)})^T$ is obtained. Based on this optimal weight configuration, the final multi-guided filter cost function $G_4$ can be obtained:

$$\min G_4(k_x) = \gamma_1^{(q)} \cdot G_1(k_x) + \gamma_2^{(q)} \cdot G_2(k_x) + \gamma_3^{(q)} \cdot G_3(k_x). \tag{3.41}$$

Substituting the specific formula for each cost function, we can obtain the final formula for the transformation coefficients:

$$k_x = \frac{\displaystyle \sum_{r=1}^{3} \gamma_r^{(q)} \cdot \left( \mu_{x,Y}\,\mu_{x,Z} - h_{x,Y \otimes Z} \right)}{\displaystyle \sum_{r=1}^{3} \gamma_r^{(q)} \cdot \sigma_{x,Y}^{2} + \gamma_1^{(q)} \cdot \frac{\delta}{\vartheta(Y_x)} + \gamma_2^{(q)} \cdot \frac{\rho}{\Lambda(Y_x)} + \gamma_3^{(q)} \cdot \frac{\tau}{\Psi(Y_x)}}, \qquad d_x = \mu_{x,Z} - k_x\,\mu_{x,Y}. \tag{3.42}$$

3.5.3 All-Pixel Image Fusion Implementation with Multi-objective Guided Filtering

The overall all-pixel multi-objective fusion process is shown in Algorithm 1. The original reconstructed thermal images to be fused and the corresponding downsampled images are obtained after Line 1 and Line 2, respectively. After Line 8, the optimal weight sets $\gamma_i^J$, $\gamma_i^T$ corresponding to the base and detail layer images are acquired by multi-objective optimization, where $MOGF$ denotes the weight acquisition layer for multi-objective guided filtering, $r_1$, $r_2$ denote filter window sizes, and $\varepsilon_1$, $\varepsilon_2$ denote regularization parameters. In Line 11, $J_i$, $T_i$ and $P_i$ are the base layer, detail layer and initial weight map of the full pixel image, respectively. In Line 14, $W_i^J$ and $W_i^T$ are the weight maps of the base layer and detail layer, respectively.

Algorithm 1: Multi-Guided Fusion
Data: guided filter parameters, $r_1$, $\varepsilon_1$, $r_2$ and $\varepsilon_2$
Result: Final fused image, $S$
1 Acquire reconstructed thermal images $Z = Z_1, Z_2, \ldots, Z_{I-1}$;
2 Perform downsampling on each image $^{down}Z = {^{down}Z_1}, {^{down}Z_2}, \ldots, {^{down}Z_{I-1}}$;
3 for each $^{down}Z_i$, $i = 1, 2, \ldots, I-1$ do
4   Dual scale decomposition to obtain base layer image $^{down}J_i$ and detail layer image $^{down}T_i$;
5 end
6 for $i = 1, 2, \ldots, I-1$ do
7   Perform saliency comparisons and obtain initial weight maps $^{down}P_i$;
8   Get optimal weights: $\gamma_i^J = MOGF(^{down}P_i, {^{down}Z_i}, r_1, \varepsilon_1)$, $\gamma_i^T = MOGF(^{down}P_i, {^{down}Z_i}, r_2, \varepsilon_2)$;
9 end
10 for each $Z_i$, $i = 1, 2, \ldots, I-1$ do
11   Dual scale decomposition and saliency detection at the full pixel level; get $J_i$, $T_i$, $P_i$;
12 end
13 for $i = 1, 2, \ldots, I-1$ do
14   Obtain refined weight maps $W_i^J$ and $W_i^T$ by multi-guided filtering of $P_i$;
15 end
16 Get the final fusion image $S$ by Eq. (3.46);

In Line 14 of Algorithm 1, the original reconstructed thermal image $Z_i$ is used as the guide image, and $P_i$ is used as the input image of the multi-guided filtering. In order to ensure the consistency of the coefficients in different filter windows, the coefficients are corrected:

$$\bar{k}_n^{(p)} = \frac{1}{|W_n|} \sum_{x \in W_n} k_x^{(p)}, \qquad \bar{d}_n^{(p)} = \frac{1}{|W_n|} \sum_{x \in W_n} d_x^{(p)}. \tag{3.43}$$

Output the final refined weight map according to the following formula:

$$P_n' = \bar{k}_n^{(p)} \cdot Z_n + \bar{d}_n^{(p)}, \tag{3.44}$$

where $n$ denotes the $n$-th pixel value in the corresponding image, and $P_n'$ represents the base layer or detail layer weight map refined by multi-guided filtering. After obtaining the refined base layer weight maps $W_1^J, W_2^J, W_3^J, W_4^J$ and the refined detail layer weight maps $W_1^T, W_2^T, W_3^T, W_4^T$, the final fused image is obtained by weighted fusion. That is, the images of the base and detail layers are weighted separately based on the refined weight maps:

$$J = \sum_{n=1}^{N} W_n^J J_n, \qquad T = \sum_{n=1}^{N} W_n^T T_n. \tag{3.45}$$

Then, the final fused image is obtained by combining the fused base layer image and the fused detail layer image:

$$S = J + T. \tag{3.46}$$
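A minimal end-to-end sketch of the refinement and fusion steps in Eqs. (3.43)–(3.46), assuming SciPy. The classical single-objective guided filter of He et al. [11] is used here as the weight-map refiner; the book's multi-objective variant instead derives $k_x$ and $d_x$ from Eq. (3.42).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(Y, P, r, eps):
    """Refine input map P using guide image Y; returns P' per Eqs. (3.43)-(3.44)."""
    mean = lambda A: uniform_filter(A, size=2 * r + 1, mode="reflect")
    mu_Y, mu_P = mean(Y), mean(P)
    cov = mean(Y * P) - mu_Y * mu_P
    var = mean(Y * Y) - mu_Y * mu_Y
    k = cov / (var + eps)                  # linear coefficients per window
    d = mu_P - k * mu_Y
    return mean(k) * Y + mean(d)           # window-averaged k, d (Eq. 3.43); Eq. (3.44)

def fuse(bases, details, WJ, WT):
    """Weighted fusion of the layers (Eq. 3.45) and recombination (Eq. 3.46)."""
    J = sum(w * b for w, b in zip(WJ, bases))
    T = sum(w * t for w, t in zip(WT, details))
    return J + T
```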

3.6 Experimental Results and Analysis

3.6.1 Specimen #1

The experimental material used in this book, developed by CARDC, is a real hypervelocity impact specimen. After the impact of hypervelocity particles, a circular impact crater is formed, with complex topography around it; the impact center is also accompanied by a perforation phenomenon. After a period of thermal excitation of the specimen, 363 frames of an infrared thermal image sequence were taken. After extracting the features, we obtained representative TTRs for two types of damage: impact perforation and impact crater. Two reconstructed thermal images $Z_1$ and $Z_2$ were obtained for the two types of damage, as shown in Figs. 3.7 and 3.8. The size of the reconstructed thermal images is $512 \times 640$, and a total of 327680 pixels of thermal radiation values are recorded in each frame. In particular, $Z_1$ highlights the impact center perforation defect, while $Z_2$ highlights the details inside and around the impact crater. The reconstructed thermal images $Z_1$ and $Z_2$ characterize the two most important types of defect features in the specimen, so they are subjected to image fusion combining multi-objective optimization and guided filtering to obtain fusion images that simultaneously characterize complex types of defects. First, the reconstructed thermal images $Z_1$ and $Z_2$ are mean filtered to obtain the base layer image and the detail layer image, with the mean filter size parameter set to $31 \times 31$.

Fig. 3.7 Damage type 1, $Z_1$

Fig. 3.8 Damage type 2, $Z_2$

The obtained base layer $J_1$, $J_2$ and detail layer $T_1$, $T_2$ images are shown in Figs. 3.9, 3.10, 3.11 and 3.12. Then the salience comparison and the initial weight maps are obtained using a Laplace filter of size $3 \times 3$ and a Gaussian low-pass filter of size $11 \times 11$ with parameter $\sigma_{gf} = 5$. The obtained high-pass thermal images $H_1$, $H_2$, saliency maps $S_1$, $S_2$, and initial weight maps $P_1$, $P_2$ are shown in Figs. 3.13, 3.14, 3.15, 3.16, 3.17 and 3.18. The initial weight map is used as the input image of the multi-objective guided filter, and the reconstructed thermal images are used as the guidance images, carrying out multi-objective guided filtering of the initial weight map to obtain the refined weight map. The multi-objective guided filtering window size parameter for the base layer thermal image is $r_1 = 8$, with regularization parameter $\varepsilon_1 = 0.3^2$.

Fig. 3.9 $J_1$: the base layer of $Z_1$ obtained by mean filtering

Fig. 3.10 $J_2$: the base layer of $Z_2$ obtained by mean filtering

The multi-objective guided filtering window size parameter for the detail layer thermal image is $r_2 = 4$, with regularization parameter $\varepsilon_2 = 0.03^2$. The multi-objective evolutionary optimization population size is set to $N_p = 20$, the neighborhood size to $TT = 6$, the maximum number of iterations to $T = 100$, and the penalty factor to $a = 5$. After the MOEA/D evolutionary optimization algorithm, a series of Pareto optimal solutions of the three cost functions is obtained. The numbers of Pareto optimal solutions obtained for the base and detail layers corresponding to the two reconstructed thermal images are $NDS^1(t) = 20$, $NDS^2(t) = 20$, $NDS^3(t) = 5$ and $NDS^4(t) = 16$, respectively. The PFs are shown in Figs. 3.19, 3.20, 3.21 and 3.22. After that, a Pareto-optimal solution is selected from the series of optimal non-dominated solutions, and its corresponding Pareto Set gives the optimal linear transformation parameters for the multi-objective guided filtering.

Fig. 3.11 $T_1$: detail layer image of $Z_1$

Fig. 3.12 $T_2$: detail layer image of $Z_2$

The extracted multi-objective guided filtering optimal linear transformation parameters and their corresponding values of each cost function are $F^{(1)}(X_{k_x}^{(15)}) = [1.5449 \times 10^4, 1.9375 \times 10^4, 1.8019 \times 10^4]$; $F^{(2)}(X_{k_x}^{(8)}) = [1.6167 \times 10^4, 1.8471 \times 10^4, -2.2377 \times 10^4]$; $F^{(3)}(X_{k_x}^{(3)}) = [1.8972 \times 10^3, 1.9206 \times 10^3, 1.9198 \times 10^3]$; $F^{(4)}(X_{k_x}^{(11)}) = [1.8966 \times 10^3, 1.9176 \times 10^3, 1.5246 \times 10^3]$. The obtained optimal linear transformation parameters and the values of each cost function are specified in Table 3.1.

After that, in order to ensure the consistency of the linear transformation parameters $k_x$ and $d_x$ in different guided filter windows, the formulas $\bar{k}_n^{(p)} = \frac{1}{|W_n|} \sum_{x \in W_n} k_x^{(p)}$ and $\bar{d}_n^{(p)} = \frac{1}{|W_n|} \sum_{x \in W_n} d_x^{(p)}$ are applied to get the final linear transformation parameters, and $P_n' = \bar{k}_n^{(p)} \cdot Z_n + \bar{d}_n^{(p)}$ is used to obtain the final refined fusion weight maps $W_1^J, W_2^J, W_1^T, W_2^T$. For visualization, Figs. 3.23 and 3.24 show the fusion weight map of the base layer ($W^{j_{max}} = W_1^J + W_2^J$) and the fusion weight map of the detail layer ($W^{t_{max}} = W_1^T + W_2^T$) obtained by multi-objective guided filtering.

Fig. 3.13 High-pass image ($H_1$) of $Z_1$

Fig. 3.14 High-pass image ($H_2$) of $Z_2$

Based on the multi-objective guided filtering, the fused base layer image and the fused detail layer image are obtained by combining each base layer and detail layer image with its corresponding fused weight map, as shown in Figs. 3.25 and 3.26. Ultimately, combining the base layer fusion image and the detail layer fusion image, the high quality detected thermal image produced by the multi-objective guided filtering image fusion algorithm is obtained, as shown in Fig. 3.27. From the figure, it can be seen that the fused image can effectively characterize multiple complex defect types — impact perforation and impact crater defects on the defect surface, as well as the impact-area bulge on the back — simultaneously, achieving effective detection of complex types of defects.

Fig. 3.15 Saliency map ($S_1$) of $Z_1$

Fig. 3.16 Saliency map ($S_2$) of $Z_2$

Multi-objective guided filtering takes into account the fusion objectives of large-size contour defects with strong thermal amplitude variation, small-size fine defects requiring detail retention, and noise information elimination, and is therefore more suitable for the detection of complex types of defects. To further illustrate the effectiveness of the proposed algorithm, its results are compared with those of the common guided filter fusion algorithm; the comparison results are shown in Figs. 3.28, 3.29 and 3.30.

Fig. 3.17 Initial weight map ($P_1$) of $Z_1$

Fig. 3.18 Initial weight map ($P_2$) of $Z_2$

As shown in Figs. 3.28, 3.29 and 3.30, after combining the multi-objective evolutionary optimization algorithm, the refined weight map obtained by our algorithm clearly reveals the broken carbon fiber material defects in the center of the impact crater, in both the base layer and the detail layer weight maps, while the broken defects in the impact center are blurred in the weight map produced by ordinary guided filtering. Meanwhile, the raised carbon fiber defects near the impact crater almost disappear in the ordinary guided filtered map, whereas the fine defect features of the raised carbon fiber edges can be clearly seen with our algorithm.

3.6.2 Specimen #2

The experimental data, provided by CARDC, contains a real hypervelocity impact sample. The hypervelocity particle impact on the Whipple plate produced a secondary debris cloud that struck the back wall and formed numerous large and small craters on the surface of the specimen.

Fig. 3.19 PF of base layer weight map of $Z_1$

Fig. 3.20 PF of base layer weight map of $Z_2$

Due to the severity of the impact, the specimen showed damage such as back bulges and cracks on the reverse side, with the cracks coinciding with the locations of the large craters. All of these factors contributed to the complexity of the damage sustained by the specimen. After a period of thermal excitation of the specimen, an infrared thermal image sequence of 251 frames was captured. Feature extraction was then performed, yielding two typical types of TTR, namely damage to the surface shot hole and the backside bulge. Two reconstructed thermal images $R_1$ and $R_2$ were obtained after reconstructing typical TTRs for both types of damage, as shown in Figs. 3.31 and 3.32. The size of the thermal images was $512 \times 640$, and the thermal transients were recorded at 327680 pixels per frame.

Fig. 3.21 PF of detail layer weight map of $Z_1$

Fig. 3.22 PF of detail layer weight map of $Z_2$

Of these, $R_1$ primarily highlights surface defects, including impact craters and surface perforation defects, while $R_2$ reveals sub-surface defects, such as spalling, bumps and cracks in the backing material. A single thermal image cannot portray all types of damage simultaneously; therefore a thermal image fusion algorithm incorporating two layers of multi-objective optimization was used. For efficiency reasons, the original thermal image was downsampled, reducing its size to $256 \times 320$ for a total of 81920 pixel points.

Table 3.1 Multi-objective optimal linear transformation coefficients and cost function values

Image    Index   Optimal $X_{k_x}$                                 $G_1(X_{k_x})$   $G_2(X_{k_x})$   $G_3(X_{k_x})$
$W_1^J$  15      [0.4184, 0.8524, 0.9812, ..., 0.8319, 0.5787]     15449.0078       19374.5711       18018.7894
$W_2^J$  8       [0.1610, 0.7988, 0.5673, ..., 0.9288, 0.9877]     16167.2061       18470.7417       –22376.7393
$W_1^T$  3       [0.5664, 0.4716, 0.3773, ..., 0.5150, 0.6981]     1897.1584        1926.0465        1919.8474
$W_2^T$  11      [0.2838, 0.1577, 0.3511, ..., 0.2102, 0.5506]     1896.6025        1917.6114        1524.6184

Fig. 3.23 $W^{j_{max}}$ (annotations: raised surface of carbon fiber material; carbon fiber material surface impact crater breakage)

Fig. 3.24 $W^{t_{max}}$ (annotations: detail of raised surface texture of carbon fiber material; carbon fiber material surface impact crater breakage texture detail)

Fig. 3.25 Fused base layer image

Fig. 3.26 Fused detail layer image

A multi-objective optimization function is built after a double-scale decomposition and saliency detection of the downsampled thermal image. The following takes the detail layer image corresponding to the thermal image $R_1$ as an example; as shown in Table 3.2, the population size is set to 100, the number of neighbours to 20, and the upper limit of iterations to 50. The Pareto-optimal solution set for the transformation coefficients is finally obtained by population update and iteration. After several iterations and evolutionary processes, the optimal Pareto Front for multi-objective guided filtering was successfully obtained, as shown in Fig. 3.33. From the non-dominated solution set, a compromise solution was chosen, denoted as $X_{k_x}^{(22)}$.
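The book does not spell out the rule used to pick the compromise solution from the non-dominated set; one common choice, sketched below under that assumption, is the member closest to the ideal point after per-objective normalization.

```python
import numpy as np

def compromise_solution(NDS):
    """Pick the NDS member whose normalized objective vector is nearest the ideal point."""
    F = np.array([f for _, f in NDS])
    ideal = F.min(axis=0)
    span = F.max(axis=0) - ideal + 1e-12   # avoid division by zero
    idx = int(np.argmin(np.linalg.norm((F - ideal) / span, axis=1)))
    return NDS[idx]
```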

Fig. 3.27 Final multi-objective guided filtering fusion image (annotations: carbon fiber material texture details; carbon fiber material surface impact crater breakage; raised surface of carbon fiber material)

Fig. 3.28 Comparison of final fusion thermal images: $W^{j_{max}}$ obtained by multi-objective guided filter (raised contours and details of the carbon fiber surface clearly visible; tiny defect features taken into account simultaneously) versus general guided filter (carbon fiber surface raised border blurred)

Fig. 3.29 Comparison of final fusion thermal images: $W^{t_{max}}$ obtained by multi-objective guided filter (clear detail of carbon fiber raised edge and impact center breakage texture; rich details) versus general guided filter (blurred carbon fiber raised edges and impact breakage)

Fig. 3.30 Comparison of final fusion thermal images: fusion thermal image obtained by multi-objective guided filter (clear edges; carbon fiber bumps and impact center breakage defects visible simultaneously, with high detectability) versus general guided filter (carbon fiber raised defects almost lost; impact center broken defects blurred)

Fig. 3.31 Damage Type 1, $R_1$ (annotations: secondary debris cloud perforation; major impact craters)

Fig. 3.32 Damage Type 2, $R_2$ (annotations: back bumps and cracks; spalling of subsurface materials; raised back material; major impact craters)

Table 3.2 Multi-objective optimization parameters

Original thermal image size (Pixels)       $512 \times 640$ (327680 pixels)
Downsampled thermal image size (Pixels)    $256 \times 320$ (81920 pixels)
Maximum number of iterations, $T$          50
Population size, $P$                       100
Neighborhoods, $N$                         20

Fig. 3.33 Pareto Front (annotations: non-dominated solutions; compromise solution $X_{k_x}^{(22)}$ with $G_1(X_{k_x}^{(22)}) = 2270.36$, $G_2(X_{k_x}^{(22)}) = 2272.13$, $G_3(X_{k_x}^{(22)}) = 2267.18$)

Calculating the transformation coefficients $k_x^{(22)}$ and $d_x^{(22)} = \mu_{k,iP} - k_x^{(22)} \mu_{k,iR_n}$, the optimal solution corresponding to the three values of the guided filtering cost function is obtained: $G_1(X_{k_x}^{(22)}) = 2270.36$; $G_2(X_{k_x}^{(22)}) = 2272.13$; $G_3(X_{k_x}^{(22)}) = 2267.18$. The exact values of the result are shown in Table 3.3. Proceeding with the multi-objective optimization, a compromise solution is determined from the set of Pareto non-dominated solutions to obtain the optimal weight vector $\gamma^{(q)} = [0.3616; 0.3249; 0.3135]$, which was obtained by multi-objective guided filtering of the detail layer image corresponding to the downsampled thermal image $R_1$. This optimal weight vector is then transferred to the original full pixel layer for the full pixel fusion process. Multi-objective guided filtering is likewise performed on the $R_1$ downsampled base layer and on the $R_2$ downsampled detail and base layers to produce the respective optimal weight vectors. The specific optimal weight values for each layer are shown in Table 3.4.

Table 3.3 Results of multi-objective guided filtering

Non-dominated solution set            100
Compromise solution                   $X_{k_x}^{(22)}$
$G_1(X_{k_x}^{(22)})$                 2270.36
$G_2(X_{k_x}^{(22)})$                 2272.13
$G_3(X_{k_x}^{(22)})$                 2267.18
$\gamma_1^{(q)}$                      0.3616
$\gamma_2^{(q)}$                      0.3249
$\gamma_3^{(q)}$                      0.3135

Table 3.4 Optimal weight set corresponding to each layer image

Image     $\gamma_1^{(q)}$   $\gamma_2^{(q)}$   $\gamma_3^{(q)}$
$W_1^J$   0.3260             0.4535             0.2204
$W_2^J$   0.3471             0.3095             0.3433
$W_1^T$   0.3616             0.3249             0.3135
$W_2^T$   0.3298             0.3430             0.3271

Subsequently, a dual scale decomposition of the original reconstructed thermal image was performed to obtain a full pixel base layer image and a detail layer image. The multi-objective fusion cost function was then determined based on the optimal weight vector values for each layer, denoted as $\min F(X_{k_x}) = \gamma_1^{(q)} \cdot G_1(X_{k_x}) + \gamma_2^{(q)} \cdot G_2(X_{k_x}) + \gamma_3^{(q)} \cdot G_3(X_{k_x})$. The full pixel versions of the $R_1$ and $R_2$ weight maps are then fed to the fused guided filter. The fused weight maps are shown in Figs. 3.34, 3.35, 3.36 and 3.37. As can be seen from the figures, the refined fusion weight map retains not only the large size edges with sharp gradients but also the fine surface breakage damage. Based on the multi-objective refined weight map, the respective salient features of surface damage $R_1$ and sub-surface damage $R_2$ can be assigned high fusion weights. The fusion results for the full pixel base and detail layers are shown in Figs. 3.38 and 3.39. The final results obtained after the two-layer multi-objective fusion are shown in Fig. 3.40. The fused image covers not only defects such as impact holes and impact craters on the surface, but also defects such as cracks, protrusions and faults on the subsurface and backside. In addition, even minor damage defects are retained in the fused image. These results demonstrate the efficacy of multi-objective fusion in accomplishing thermal image fusion of complex damage with excellent fusion results.

Fig. 3.34 $W_1^J$ (annotation: base layer impact crater)

Fig. 3.35 $W_2^J$ (annotation: spalling damage on the back side)

To further explain the superiority and effectiveness of considering multiple fusion objective functions compared with single-objective fusion, we conducted a comparative experiment using ordinary guided filtering fusion and each of the proposed objective functions individually. The results are shown below. The parameters of all filters are set identically: the Gaussian filter window size in the saliency map acquisition process is $r_{gf} = 5$ with $\sigma_{gf} = 0.5$; the guided filter windows of the base layer images are all set to $r_1 = 64$ with regularization term $\varepsilon_1 = 0.25$; and the guided filter windows of the detail layer images are all set to $r_2 = 4$ with regularization term $\varepsilon_2 = 0.0025$. The classical guided filter fusion (GFF), $G_1(X_{k_x})$ fusion alone, $G_2(X_{k_x})$ fusion alone, $G_3(X_{k_x})$ fusion alone, and the multi-objective optimal fusion proposed in this section were used to perform the reconstructed thermal image fusion. The fusion results are shown in Figs. 3.41, 3.42, 3.43, 3.44 and 3.45.

Fig. 3.36 $W_1^T$ (annotations: surface penetration defects enhanced; higher fusion weights)

Fig. 3.37 $W_2^T$ (annotation: subsurface damage has higher fusion weights)

Figures 3.41, 3.42, 3.43, 3.44 and 3.45 show the comparison of the various single-objective fusion methods and the proposed fusion method considering multiple objectives on the same specimen. As shown in the figures, our method has the best characterization ability for complex damage: all kinds of defects are clearly highlighted, and the contrast of the defects is high. Not only the internal details of the impact crater and the shape of the impact perforation, but also the peeling of the subsurface and back materials can be clearly seen. GFF preserves the edges of the surface impact crater well, but the thermal radiation gradient of subsurface damage changes relatively slowly, so the spalling defects of subsurface materials are blurred to varying degrees. The $G_1$ function, targeted at large-scale damage, performs well on surface impact perforations and impact potholes, but other minor damage is smoothed out as noise. The $G_2$ function can deal with subtle damage.

Fig. 3.38 Fused base layer

Fig. 3.39 Fused detail layer

Fig. 3.40 Final fusion result (annotations: back spalling; minor defects; large meteorite craters)

Fig. 3.41 GFF

Fig. 3.42 $G_1$ alone

When acting alone, the $G_2$ function performs well on tiny surface perforation damage, but paying too much attention to details leads to artifacts of varying degrees in the corners and sharp gradient areas of the image. The $G_3$ function smooths noise too aggressively, which blurs both large-scale damage and subtle defects. On the whole, the multi-objective guided filtering (MOGF) method proposed in this section has outstanding fusion performance for the complex damage caused by hypervelocity impact.

In order to quantitatively demonstrate the effectiveness of our method, several quantitative indicators are adopted for analysis in this study. These indicators include $Q_p^{ab|f}$ [17] for measuring the significant information from the input, $Q_w^{xy|f}$ [18] based on structural similarity, mutual information $Q_{MI}$ [19], nonlinear correlation information entropy $Q_{NCIE}$ [20], and peak signal-to-noise ratio $Q_{PSNR}$ [21]. In order to meet the measurement demand of defect clarity in the hypervelocity impact field, the average gradient index $Q_{AG}$ [22] and the edge intensity index $Q_{EI}$ [23] were also used.
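For reference, here are minimal sketches of two of the cited indicators under common textbook definitions; the exact formulations used in the book follow [21, 22], so these are illustrative assumptions rather than the book's code.

```python
import numpy as np

def q_ag(img):
    """Average gradient Q_AG: mean magnitude of local intensity gradients."""
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def q_psnr(fused, ref):
    """Peak signal-to-noise ratio Q_PSNR of a fused image against a reference."""
    mse = np.mean((fused.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(ref.max() ** 2 / mse))
```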

Fig. 3.43 $G_2$ alone

Fig. 3.44 $G_3$ alone

The quantitative results of each method's indicators are shown in Table 3.5. As shown in Table 3.5, the proposed multi-objective fusion method also has the best comprehensive performance on the quantitative indicators (bold type). Among them, mutual information $Q_{MI}$ and nonlinear correlation information entropy $Q_{NCIE}$ measure the amount of information obtained from the source image. When the function $G_2$ is applied alone, its $Q_{MI}$ and $Q_{NCIE}$ are high because it mainly focuses on retaining more detailed features. However, its $Q_p^{ab|f}$ and $Q_w^{xy|f}$ are extremely low, which indicates that the outstanding features and structural features of the source image have been destroyed. In addition, as can be seen from Fig. 3.43, when $G_2$ is used alone, there are serious artifacts and halos, which seriously affect the accuracy of defect damage assessment. Therefore, we do not consider $G_2$ alone to be a qualified fusion method. Apart from it, our method has the highest performance in $Q_{MI}$ and $Q_{NCIE}$, as shown in italics in the table.

Fig. 3.45 Multi-objective guided filter fusion

Table 3.5 Results of quantitative metrics for each method

Method        $Q_{PSNR}$   $Q_{EI}$       $Q_{AG}$      $Q_p^{ab|f}$   $Q_w^{xy|f}$   $Q_{MI}$   $Q_{NCIE}$
GFF           86.1204      0.0765         0.0077898     0.56108        0.90796        1.3089     0.80421
$G_1$ alone   86.0609      0.078105       0.0079485     0.56367        0.90869        1.3369     0.8043
$G_2$ alone   80.9138      0.00037278     3.8525e-05    0.00039573     6.2861e-05     2.6666     0.81355
$G_3$ alone   86.1088      0.076831       0.007823      0.56184        0.90805        1.3099     0.80421
MOGF          86.3665      0.079931       0.0080492     0.61231        0.94917        1.9107     0.80757

Therefore, compared with the single-objective fusion methods, the multi-objective fusion method considering multiple objectives has the best performance in both qualitative and quantitative aspects.

3.7 Summary

In this chapter, a fusion method for reconstructed thermal images based on multi-objective guided filtering is proposed. At the beginning of the chapter, the image fusion requirements of complex damage are explained: simple single-objective fusion may not meet the requirements of complex impact damage feature extraction. On this basis, the three demand-oriented filtering cost functions involved in this chapter are introduced. Then the weight acquisition layer of multi-objective guided filtering is introduced, and the optimal weight is obtained by a multi-objective optimization algorithm based on a penalty term.


Finally, based on the optimal weight and the fused multi-guided filtering, the original reconstructed thermal images are fused at the full pixel level, thereby completing the fusion work and enabling the fused image to cope with more complicated damage detection. Experiments show that the fusion method considering multiple fusion objectives performs better than the general single-objective fusion method.

References

1. Daily, M. I., Farr, T., Elachi, C., Schaber, G.: Geologic interpretation from composited radar and Landsat imagery. Photogrammetric Engineering and Remote Sensing, 45(8), 1109–1116 (1979)
2. Thackeray, J. T.: Preclinical multimodality imaging and image fusion in cardiovascular disease. In: Image Fusion in Preclinical Applications. Springer, Cham, pp. 161–181 (2019)
3. Makwana, G., Yadav, R. N., Gupta, L.: Comparative analysis of image fusion techniques for medical image enhancement. In: International Conference on Computational Intelligence: ICCI 2020. Springer, Singapore, pp. 241–252 (2022)
4. Yang, Y., Yin, Y., Yang, N., Li, L.: Infrared and visible image fusion algorithm for substation equipment based on NSCT and Siamese network. International Workshop on Pattern Recognition, SPIE, 11913, 16–22 (2021)
5. Kavita, P., Alli, D. R., Rao, A. B.: Study of image fusion optimization techniques for medical applications. International Journal of Cognitive Computing in Engineering, 3, 136–143 (2022)
6. Azam, M. A., Khan, K. B., Salahuddin, S., Rehman, E., Khan, S. A., Khan, M. A., Kadry, S., Gandomi, A. H.: A review on multimodal medical image fusion: Compendious analysis of medical modalities, multimodal databases, fusion techniques and quality metrics. Computers in Biology and Medicine, 144, 105253 (2022)
7. Huang, X., Yin, C., Ru, H., Zhao, S., Deng, Y., Guo, Y., Liu, S.: Hypervelocity impact damage behavior of B4C/Al composite for MMOD shielding application. Materials & Design, 186, 108323 (2020)
8. Yin, C., Xue, T., Huang, X., Cheng, Y. H., Dadras, S., Dadras, S.: Research on damages evaluation method with multi-objective feature extraction optimization scheme for M/OD impact risk assessment. IEEE Access, 7, 98530–98545 (2019)
9. Yin, C., Huang, X., Cao, J., Dadras, S., Shi, A.: Infrared feature extraction and prediction method based on dynamic multi-objective optimization for space debris impact damages inspection. Journal of the Franklin Institute, 358(18), 10165–10192 (2021)
10. Tan, X., Huang, X., Yin, C., Dadras, S., Cheng, Y. H., Shi, A.: Infrared detection method for hypervelocity impact based on thermal image fusion. IEEE Access, 9, 90510–90528 (2021)
11. He, K., Sun, J., Tang, X.: Guided image filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(6), 1397–1409 (2013)
12. Li, Z., Zheng, J., Zhu, Z., Yao, W., Wu, S.: Weighted guided image filtering. IEEE Transactions on Image Processing, 24(1), 120–129 (2014)
13. Kou, F., Chen, W., Wen, C., Li, Z.: Gradient domain guided image filtering. IEEE Transactions on Image Processing, 24(11), 4528–4539 (2015)
14. Zhang, Q., Li, H.: MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Transactions on Evolutionary Computation, 11(6), 712–731 (2007)
15. Liu, H. L., Chen, L., Deb, K., Goodman, E. D.: Investigating the effect of imbalance between convergence and diversity in evolutionary multiobjective algorithms. IEEE Transactions on Evolutionary Computation, 21(3), 408–425 (2016)
16. Li, H., Zhong, Z., Shi, J., Li, H., Zhang, Y.: Multi-objective optimization-based recommendation for massive online learning resources. IEEE Sensors Journal, 21(22), 25274–25281 (2021)
17. Xydeas, C. S., Petrovic, V.: Objective image fusion performance measure. Electronics Letters, 36(4), 308–309 (2000)
18. Li, S., Hong, R., Wu, X.: A novel similarity based quality metric for image fusion. In: International Conference on Audio, Language and Image Processing, pp. 167–172 (2008)
19. Qu, G., Zhang, D., Yan, P.: Information measure for performance of image fusion. Electronics Letters, 38(7), 1 (2002)
20. Wang, Q., Shen, Y., Jin, J.: Performance evaluation of image fusion techniques. In: Image Fusion: Algorithms and Applications, 19, 469–492 (2008)
21. Jagalingam, P., Hegde, A. V.: A review of quality metrics for fused image. Aquatic Procedia, 4, 133–142 (2015)
22. Cui, G., Feng, H., Xu, Z., Li, Q., Chen, Y.: Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition. Optics Communications, 341, 199–209 (2015)
23. Rajalingam, B., Priya, R.: Hybrid multimodality medical image fusion technique for feature enhancement in medical diagnosis. International Journal of Engineering Science Invention, 2, 52–60 (2018)

Chapter 4

Stitching Technique for Reconstructed Thermal Images

For large-size spacecraft damage detection, a panoramic defect visualization image (PDVI) can be obtained by stitching the images of multiple IR detection results, taking into account both the accuracy and the efficiency of the detection. To achieve this, this chapter first introduces the feature extraction technique for reconstructed thermal images; the specific feature extraction and feature point description algorithms are introduced from the two perspectives of rapidity and fineness. After feature extraction, this chapter further introduces the alignment technique for reconstructed thermal image feature points to complete the model for matching and stitching these feature points. Furthermore, this chapter introduces methods for improving the stitching quality of reconstructed thermal images. Finally, the experimental validation of the stitching algorithms in this chapter is completed using reconstructed thermal images of the damaged specimens.

4.1 Introduction

Unlike normal visible images, stitching reconstructed thermal images presents several challenges. The numerical values of the elements in a reconstructed thermal image represent reconstructed temperature information, and the magnitude of the temperature values is visualized through color assignment. Reconstructed thermal images often exhibit image degradation phenomena, such as low contrast, low definition, and blurred image texture, due to the limitations of the acquisition module. These phenomena leave the images with fewer features, which seriously affects the stitching function. Furthermore, the multiple repetitive regular textures present on the surface of the composite material being inspected can cause incorrect matches and increase the difficulty of stitching, so it is necessary to extract suitable high quality descriptors for stitching. Moreover, when the position or angle of the IR camera shifts greatly during shooting, the stitching result will suffer from ghosting errors under a single global transformation, which is not conducive to the subsequent research on quantitative defect information extraction.


Fig. 4.1 Stitching of reconstructed thermal images for multi-view infrared detection

Fig. 4.2 Stitching process of the reconstructed thermal images

In this chapter, a high-precision stitching algorithm is studied to obtain stitching without ghosting and overall defect visualization images without significant deformation, as shown in Figs. 4.1 and 4.2.

The reconstructed thermal images with prominent defect features are the objects to be stitched. Due to the variability of temperature in different regions, the images contain rich point-type, linear and other defect features with special geometric contour traits, as well as rich edge temperature gradient information. However, there are also problems of noise interference, viewpoint variation, and chromaticity differences caused by different temperature readings at the same location.


This section adopts a feature point-based matching scheme due to its superiority in registration efficiency and accuracy. The stitching scheme for reconstructed thermal images is divided into a feature extraction module, a feature matching module, and a stitching module.

(1) Feature extraction module: According to the different stitching requirements, features that satisfy the definition, including corner and edge points, are searched for in the defect feature images. The variation of temperature features in the defect region, compared with the background region, produces significant differences under the same thermal radiation. The resulting reconstructed thermal image can use color to distinguish the defect region from the background detection region, yielding a reconstructed image that highlights the defect region.

(2) Feature matching module: For the extracted feature points, a comparison method is required to ascertain proximity between the features. If the constraints are satisfied, initial matching is performed on the feature points to be matched. After the initial matching is completed, algorithms such as random sample consensus (RANSAC) are used to eliminate mis-matched points.

(3) Stitching module: After obtaining the correctly matched feature pairs, the geometric transformation matrix parameters are estimated and the image can be geometrically transformed to achieve the image stitching process. At the same time, when two matched images are stitched directly, stitching seams may occur due to chromaticity differences in the reconstructed thermal images and parallax in the acquisition viewpoints, which may degrade the quality of the stitched images. It is therefore necessary to remove this unnatural stitching effect with algorithmic processing.
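These three modules map naturally onto standard tooling; below is a minimal sketch using OpenCV (an assumption — the book's own detectors and alignment models are developed in the following sections), with a naive overlay standing in for the seam-handling step.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    """(1) extract ORB features, (2) match + RANSAC, (3) warp and overlay."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # RANSAC rejects mis-matches
    h, w = img2.shape[:2]
    pano = cv2.warpPerspective(img1, H, (2 * w, h))        # map img1 into img2's frame
    pano[:h, :w] = img2                                    # naive overlay, no blending
    return pano
```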

4.2 Feature Extraction Techniques for Reconstructed Thermal Images

The feature points of reconstructed thermal images represent certain temperature information responses; common image feature points include corner points, edge points, speckle points and so on. Corner points are points with obvious angular variations in the image, usually used for geometric estimation and matching of the image. Edge points are points with obvious edges in the image, usually used for edge detection and contour extraction of the image. Speckle points are points with obvious texture or color variations in the image, usually used for texture analysis and classification of the image.

Several feature extraction algorithms are used to extract the feature points of reconstructed thermal images, such as scale-invariant feature transform (SIFT),


speeded up robust features (SURF), and other feature point detectors. These algorithms can extract feature points that remain stable across various scales and rotations, and compute the corresponding feature point descriptors for reconstructed thermal images. These feature point extraction and description algorithms are used for matching reconstructed thermal images to achieve stitching between images. In this section, the feature points of reconstructed thermal images and the basic detection algorithm, the high-speed oriented FAST and rotated BRIEF (ORB) detection algorithm, and the KAZE detection algorithm based on nonlinear scale space construction are introduced respectively.

4.2.1 Feature Points of Reconstructed Thermal Images

Feature points of a reconstructed thermal image are local regions or points with significant properties that distinguish them from other points in the image. Considering the process of reconstructing images from an infrared video stream, and in contrast to other image color compositions, the color or grayscale of a reconstructed thermal image is a mapping of temperature into color space. The different color intervals in its different regions represent the temperature response at different locations on the actual specimen being examined. Based on this response relationship, the temperature information of different damage areas can be described by measuring image-based features, such as feature points, in the reconstructed thermal image. Specifically, in the image stitching and alignment process, the detection of regional feature points not only characterizes image information such as abrupt changes in gray values, but also reveals the temperature change information of the reconstructed thermal image. In reconstructed thermal images, feature points usually have the following characteristics:

(1) Invariance, i.e., feature points remain stable under different reconstructed thermal image transformations (e.g., rotation, translation, scaling, brightness changes, etc.);
(2) Saliency, i.e., feature points depict regions or structures with prominence in the reconstructed thermal image;
(3) Sparsity, i.e., feature points are fairly evenly distributed in the feature regions of the image and are not too dense or repetitive locally.

Considering the basic feature point detection algorithm for reconstructed thermal images: the computer, as the processor of the reconstructed thermal image, cannot know from a single reconstructed image the scale at which the measured workpiece is located. It is therefore necessary to represent the reconstructed thermal image at multiple scales in order to determine the best scale at which the feature information lies, so that the feature points can be detected for the subsequent stitching work. Expressing feature points based on reconstructed thermal images containing different temperature information at multiple scales simultaneously requires detecting the same key points at different scales [1].


Fig. 4.3 Scale space of reconstructed thermal images

There are two ways to construct multi-scale image sets for reconstructed thermal images: the scale-space method and the pyramid method. The pyramid method involves downsampling and smoothing the image to reduce the number of pixels in a hierarchical manner; it is less computationally intensive but loses more information about the representation of defects in the reconstructed image. The scale-space method maintains the same spatial sampling at all scales: unlike the pyramid method, it keeps the number of pixels per layer constant and constructs multi-scale images by applying graded smoothing to the image. The disadvantage of this method is that it is computationally complex and produces more redundant information for low-resolution reconstructed thermal images at large scales. Figure 4.3 shows the scale space of reconstructed thermal images. The blurring of reconstructed thermal images in scale space becomes progressively larger as the scale increases. Since all information is concentrated in the original data of the image, the main purpose of representing a signal at multiple scales is to explicitly represent the original image at multiple scales, removing unnecessary details to simplify further processing. Technically speaking, the motivation for its use reflects the general need for a smoothing step as part of the reconstructed thermal image alignment process and thus as a means of noise suppression.

In scale space theory, the scale transformation of reconstructed thermal images is performed using the Gaussian kernel $G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} \cdot e^{-(x^2+y^2)/(2\sigma^2)}$ [1]. When acquiring infrared image sequence data, multiple reconstructed thermal images are generated due to the flexibility of the shots, resulting in rotational, translational, and affine transformations between them. Thus, it is necessary to acquire key points with rotation invariance in the scale space for subsequent alignment. Since the reconstructed thermal image is a low resolution image, large areas appear as solid color scenes with little feature information. It is therefore desirable that the variation between adjacent scales of the reconstructed image be relatively fine, to facilitate the extraction of more detailed key points. For reconstructed thermal images, the defect contour information is contained in the large scale space, while the defect detail information is reflected in the small scale space. We hope that the feature points corresponding to the overlapping regions of the two images to be stitched can be identified as peak points under different scale conditions. The Gaussian pyramid method is used to construct the scale space for the reconstructed thermal images.
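A minimal sketch of one octave of this construction, assuming SciPy; the base sigma value of 1.6 and the $s+3$ Gaussian layers per octave (following the $k^n \sigma$ progression of Fig. 4.4) are conventional choices, not parameters fixed by the book.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_octave(img, s=3, sigma0=1.6):
    """One octave: s+3 Gaussian layers at k^n * sigma0, and their s+2 differences (DOG)."""
    k = 2.0 ** (1.0 / s)
    gauss = [gaussian_filter(img.astype(np.float64), sigma0 * k ** n)
             for n in range(s + 3)]
    return [g2 - g1 for g1, g2 in zip(gauss, gauss[1:])]
```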


Fig. 4.4 Gaussian pyramids for reconstructed thermal images

thermal images, which enables the feature point detection algorithm to have a finer scale resolution; its structure is shown in Fig. 4.4. Each Gaussian pyramid consists of $n$ octave levels, different octave levels correspond to images of different sizes, and each octave level contains $s$ image layers. The $s$ image layers of each octave level correspond to Gaussian kernels $k^n\sigma$, $n = 0, \ldots, s+2$, i.e., the kernel scale grows in increments of a factor $k$, as shown in Fig. 4.4. The purpose of constructing a Gaussian pyramid is to transform the reconstructed image into a scale space in which consistent feature points are obtained at any scale. Conventional feature point detection algorithms, such as the SIFT algorithm, convolve adjacent scales with a difference-of-Gaussian function instead of the computationally complex Laplacian-of-Gaussian function $\sigma^2 \nabla^2 G$. The difference of Gaussian (DOG) operator is essentially a corner detection operator: it responds strongly to corner points and to blob regions of the corresponding scale. However, it also responds strongly to pronounced edges, which leads to the detection of a larger number of key points and an increased sensitivity to noise. This weakens the uniqueness of the descriptors it creates for a reconstructed thermal image containing substantial noise; such descriptors are not easily distinguished from one another and are not conducive to the alignment between the descriptors of the images to be stitched. Therefore, a difference function is fitted around the extreme points to locate the feature points accurately [1], and the Taylor expansion of $D(x, y, \sigma)$ at a local extreme point $(x_0, y_0, \sigma_0)$ is expressed as

$$D(X) = D(X_0) + \frac{\partial D^T}{\partial X} X + \frac{1}{2} X^T \frac{\partial^2 D}{\partial X^2} X, \tag{4.1}$$

where $X = (x, y, \sigma)^T$ and $X_0 = (x_0, y_0, \sigma_0)^T$.


Fig. 4.5 DOG spatial extremum detection of reconstructed thermal images

Taking the derivative of Eq. (4.1) with respect to $X = (x, y, \sigma)^T$ and setting it to zero, the extreme point is found as

$$\hat{X} = -\left(\frac{\partial^2 D}{\partial X^2}\right)^{-1} \frac{\partial D}{\partial X}. \tag{4.2}$$

Then, as displayed in Fig. 4.5, the value of the function at the extreme point is

$$D(\hat{X}) = D + \frac{1}{2} \frac{\partial D^T}{\partial X} \hat{X}, \tag{4.3}$$

where $\hat{X} = (x, y, \sigma)^T$ represents the offset relative to the interpolation center together with the scale. The value $|D(\hat{X})|$ is then filtered with a suitable threshold: if $|D(\hat{X})| \geq$ threshold, the point is accepted; otherwise it is rejected. Discarding low-contrast feature points resists the noise introduced into the infrared data during sampling, because such points have small grayscale values and are either easily perturbed by noise and unstable, or are themselves noise produced by the infrared acquisition system. In addition, the DOG operator generates a certain degree of edge response, and the corresponding unstable points must be eliminated. First, the Hessian matrix at each feature point is calculated,

$$H = \begin{bmatrix} D_{xx} & D_{xy} \\ D_{yx} & D_{yy} \end{bmatrix}, \tag{4.4}$$

where the eigenvalues $\alpha$ and $\beta$ of $H$ correspond to the gradients in the $x$ and $y$ directions. Taking $\alpha$ as the larger eigenvalue so that $\alpha = r\beta$, then:


$$\frac{Tr(H)^2}{Det(H)} = \frac{(\alpha + \beta)^2}{\alpha\beta} = \frac{(r\beta + \beta)^2}{r\beta^2} = \frac{(r+1)^2}{r}, \tag{4.5}$$

where $r = 10.0$ is often taken. If $\frac{Tr(H)^2}{Det(H)} \leq \frac{(r+1)^2}{r}$ is satisfied, the feature point is kept; otherwise it is removed as an edge-response feature point. In general, features of reconstructed thermal images are often built on high-dimensional feature descriptors. After the feature points are detected, the subsequent work performs vector operations on these descriptors to achieve distance matching or clustering. Because of its high feature dimensionality and description complexity, the basic feature detection and description algorithm has drawbacks in computation and processing time. In the next subsection, we introduce the ORB feature detection and description algorithm for infrared images, which is based on a binary feature point description.
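The two tests above translate directly into code. The following sketch is illustrative rather than the book's implementation: it assumes a DOG stack `D` indexed as `D[scale, row, col]`, restricts the interpolation of Eq. (4.2) to the spatial plane for brevity, and uses placeholder threshold values.

```python
import numpy as np

def filter_keypoint(D, x, y, s, contrast_thr=0.03, r=10.0):
    """Sketch of the keypoint tests of Eqs. (4.2)-(4.5)."""
    # Finite-difference gradient of D at (x, y, s).
    dD = np.array([
        (D[s, y, x + 1] - D[s, y, x - 1]) / 2.0,
        (D[s, y + 1, x] - D[s, y - 1, x]) / 2.0,
    ])
    # Finite-difference Hessian entries in the spatial plane.
    dxx = D[s, y, x + 1] - 2 * D[s, y, x] + D[s, y, x - 1]
    dyy = D[s, y + 1, x] - 2 * D[s, y, x] + D[s, y - 1, x]
    dxy = (D[s, y + 1, x + 1] - D[s, y + 1, x - 1]
           - D[s, y - 1, x + 1] + D[s, y - 1, x - 1]) / 4.0
    # Offset of the interpolated extremum, a 2-D restriction of Eq. (4.2).
    H = np.array([[dxx, dxy], [dxy, dyy]])
    x_hat = -np.linalg.solve(H, dD)
    # Contrast test, Eq. (4.3): discard low-contrast (noise-prone) points.
    D_hat = D[s, y, x] + 0.5 * dD @ x_hat
    if abs(D_hat) < contrast_thr:
        return False
    # Edge-response test, Eq. (4.5): keep only well-conditioned corners.
    tr, det = dxx + dyy, dxx * dyy - dxy * dxy
    return det > 0 and tr * tr / det <= (r + 1) ** 2 / r
```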

4.2.2 FAST Feature Extraction of Reconstructed Thermal Images

As shown in Fig. 4.6, in conventional detection methods the feature points of a reconstructed thermal image are described as 128- or 64-dimensional vectors according to certain rules, and further operations such as spatial distance metrics or clustering are performed on these vectors. In the stitching of reconstructed thermal images of spacecraft damage areas, the computational time efficiency of an algorithm is also an important aspect of its performance, given the numerous practical constraints of the detection environment. Therefore, to avoid the negative impact of complex, high-dimensional vector computation on real-time detection, this section introduces a feature point detection and description algorithm for reconstructed thermal images based on binary feature point description, with the aim of enhancing computational efficiency. The ORB algorithm is an image feature extraction technique built on the FAST feature point detection algorithm and the BRIEF vector creation algorithm [2].

Fig. 4.6 Feature points and descriptors of reconstructed thermal images


These algorithms can quickly create feature vectors for the key points of reconstructed thermal images, and the feature vectors can then be used to identify feature objects in them. First, special regions of the reconstructed thermal image, called key detection points, are found; these are small regions that stand out, such as corner points. The ORB algorithm then computes a feature vector for each key detection point [2]. The feature vector created by the ORB algorithm is binary, i.e., it contains only 1s and 0s, whose order depends on the particular key detection point and the pixel region around it. The feature vector represents the intensity pattern around the key detection point, so multiple feature vectors can be used to identify larger regions or even specific objects in the image [3]. To cope with the many practical conditions of the detection site, ORB is characterized by fast detection and robustness to noise and to image transformations such as rotation and scaling. The rest of this subsection focuses on the reconstructed thermal image stitching algorithm based on ORB.

4.2.2.1 FAST Feature Point Extraction of Reconstructed Thermal Images

The ORB algorithm is built on the features from accelerated segment test (FAST) keypoint detector and the binary robust independent elementary features (BRIEF) descriptor [2]. The FAST algorithm is an efficient corner detection algorithm. Applied to a reconstructed thermal image, a pixel is judged to be a corner point, i.e., a key point for detection, if it lies in a different region compared with a sufficient number of pixels in its surrounding neighborhood [4]. For a grayscale reconstructed thermal image, if the gray value of a pixel is much larger or much smaller than the gray values of the other pixels within a certain neighborhood, that pixel is considered a corner point, i.e., a key detection point, as displayed in Fig. 4.7. As shown in the figure, consider a reconstructed thermal grayscale image with one intensity attribute per pixel, and select a candidate pixel $c$. Using the Bresenham discrete circle method, draw a circle of radius 3 pixels centered at $c$. Let the gray value of the central pixel be $G_c$ and set a threshold $t$. The gray values $G_i$ of the 16 surrounding edge pixels classify the center as follows:

$$\begin{cases} c \in C_1, & G_i \leq G_c - t, \\ c \in C_2, & G_c - t < G_i < G_c + t, \\ c \in C_3, & G_c + t \leq G_i, \end{cases} \quad i = 1, 2, \ldots, 16, \tag{4.6}$$


Fig. 4.7 FAST feature point detection by ORB algorithm for reconstructed thermal images

where the sets $C_1$, $C_2$, and $C_3$ are the three categories into which the central pixel may fall: dimmer than the surrounding pixels, similar in gray value to the surrounding pixels, and brighter than the surrounding pixels, respectively. If a central pixel is classified into set $C_1$ or $C_3$, it is called a key detection point, i.e., a FAST corner point. By traversing the reconstructed thermal image with this test, the set of its key detection points is obtained. Considering the complexity of a full traversal, the process can be simplified, following the pixel distribution along the x- and y-axes, by first testing only the 1st, 5th, 9th, and 13th edge pixels [5]. The gray value of the central pixel is first compared with the gray values $G_1$ and $G_9$ of the 1st and 9th pixels; if the center is provisionally classified into $C_1$ or $C_3$, the gray values $G_5$ and $G_{13}$ of the 5th and 13th pixels are tested by the same principle. If all four edge pixels meet the judgment condition, the center pixel is a key detection point, i.e., a FAST corner point, and a further global traversal of the image finally yields the corner set $Corner_i = [C_1, C_3]$. However, given the characteristics of reconstructed thermal images and the possible distribution of corner points, this procedure produces a clustering effect: adjacent, similar feature points appear repeatedly and at high frequency near certain feature regions. A measurement parameter should therefore be introduced during corner detection, and non-maximum suppression incorporated. For each point in the set of detected corner points, define the corner response:

$$S = \max\left[\sum_{i \in C_1} |G_i - G_c| - t, \;\; \sum_{j \in C_3} |G_j - G_c| - t\right]. \tag{4.7}$$
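A minimal sketch of the segment test and corner response may help. The helper below is hypothetical and simplified: it omits the full contiguous-arc check of a production FAST detector and uses one common per-pixel variant of the response $S$ of Eq. (4.7).

```python
import numpy as np

# Offsets of the 16 pixels on a Bresenham circle of radius 3.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def fast_test(img, x, y, t):
    """Segment test of Eq. (4.6) with the 1-5-9-13 shortcut, plus a
    corner response in the spirit of Eq. (4.7) for non-maximum suppression."""
    gc = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    # Quick rejection: at least 3 of pixels 1, 5, 9, 13 must be much
    # brighter or much darker than the center.
    quick = [ring[i] for i in (0, 4, 8, 12)]
    if sum(g >= gc + t for g in quick) < 3 and sum(g <= gc - t for g in quick) < 3:
        return False, 0.0
    darker = sum(max(gc - g - t, 0) for g in ring)     # contribution of set C1
    brighter = sum(max(g - gc - t, 0) for g in ring)   # contribution of set C3
    score = max(darker, brighter)                      # response S
    return score > 0, score
```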


Fig. 4.8 Oriented FAST algorithm uses grayscale form centers to determine the direction of rotation

The corner responses of clustered corner points are compared among neighboring pixels, and only the points with high response values are retained; the corner set $Corner_i = [C_1, C_3]$ of the reconstructed thermal image is updated accordingly. In the ORB algorithm, borrowing the idea of the scale pyramid from SIFT, the FAST algorithm constructs a scale pyramid so that the detected key points of reconstructed thermal images possess scale and rotation invariance. Unlike in SIFT, however, each layer of the pyramid holds only one image, and the scale of the $s$th layer is

$$\sigma_s = \sigma_0^s, \tag{4.8}$$

where $\sigma_0$ is the initial scale factor and the original image sits in layer 0. The size of the image at layer $s$ is then

$$size_s = \left(H \times \frac{1}{\sigma_s}\right) \times \left(W \times \frac{1}{\sigma_s}\right). \tag{4.9}$$

Feature points are detected at each scale with the FAST algorithm, ranked by their corner response values, and the first $N$ feature points are selected as the feature points of that scale. The principal direction of each feature point is computed from the grayscale center of mass of a circular neighborhood of radius $r$ centered at the point, as illustrated in Fig. 4.8. The center of mass is obtained from the grayscale moments

$$m_{pq} = \sum_{x,y} x^p y^q \cdot I(x, y), \tag{4.10}$$


where $I(x, y)$ is the reconstructed thermal image. The position of the center of mass is

$$C = \left(\frac{m_{10}}{m_{00}}, \frac{m_{01}}{m_{00}}\right), \tag{4.11}$$

and the principal direction of the feature point is

$$\theta = \arctan(m_{01}, m_{10}). \tag{4.12}$$

On the basis of the feature points and their principal directions obtained from the reconstructed thermal image, the feature points are further described. The neighborhood of each feature point is rotated to align with its principal direction, and BRIEF feature descriptors are constructed to enable matching of feature point pairs between reconstructed thermal images, after which the stitching process is completed. The rotated BRIEF feature point description used in the ORB algorithm is the focus of the next subsection.
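The moment-based orientation of Eqs. (4.10)-(4.12) can be sketched as follows; `patch` is assumed to be a square grayscale neighborhood already cut out around the keypoint (the function name and layout are illustrative).

```python
import numpy as np

def orientation(patch):
    """Principal direction of a keypoint from the grayscale moments of
    Eqs. (4.10)-(4.12), computed over a patch centered on the keypoint."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2.0            # coordinates relative to the patch center
    ys -= (h - 1) / 2.0
    m00 = patch.sum()
    m10 = (xs * patch).sum()       # Eq. (4.10) with p=1, q=0
    m01 = (ys * patch).sum()       # Eq. (4.10) with p=0, q=1
    cx, cy = m10 / m00, m01 / m00  # center of mass, Eq. (4.11)
    theta = np.arctan2(m01, m10)   # principal direction, Eq. (4.12)
    return theta, (cx, cy)
```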

4.2.2.2 Rotated BRIEF Reconstructed Thermal Image's Feature Point Description

Using a binary feature descriptor to describe local features in reconstructed thermal images significantly reduces the computation time of the subsequent feature point distance matching, thus improving overall detection efficiency. The ORB algorithm uses the BRIEF feature point descriptor. The basic idea of BRIEF is to describe local features by comparing pixel pairs selected, for example, by random or uniform sampling [6]. The algorithm first selects a set of pixel pairs $(x_i, y_i)$ and $(x_j, y_j)$ in the reconstructed thermal image and calculates the luminance difference of each pair,

$$db_{i,j} = \left| B(x_i, y_i) - B(x_j, y_j) \right|, \tag{4.13}$$

where $B(x_i, y_i)$ and $B(x_j, y_j)$ are the grayscale luminance values of points $i$ and $j$, respectively. Around each acquired feature point, a neighborhood window of size $R \times R$ is taken with the feature point at its center. Pairs of pixels are randomly selected within the window, and a binary assignment is performed by the following rule:

$$\tau(p \mid a, b) = \begin{cases} 1, & \text{if } B(x_a, y_a) < B(x_b, y_b), \\ 0, & \text{otherwise}, \end{cases} \tag{4.14}$$

where $N$ (generally $N = 256$) pairs of points are randomly selected in the window and the binary assignment is repeated to form a binary code that describes the feature point, i.e., its feature descriptor. These comparison results are


encoded into a binary string that serves as the feature descriptor. In this way, the BRIEF algorithm can quickly compute local features in an image and represent them as binary strings. BRIEF is characterized by its fast computation speed, which makes it suitable for real-time computation and large-scale image processing [6], because binary descriptors can be computed and compared very quickly. The ORB algorithm uses a modified, rotational BRIEF to obtain rotation-invariant descriptors. First, the $n$ pixel points around a key detection point of the reconstructed thermal image are arranged as a $2 \times n$ matrix,

$$P_{side} = \begin{bmatrix} x_1, \ldots, x_n \\ y_1, \ldots, y_n \end{bmatrix}. \tag{4.15}$$

The point pairs are then rotated in accordance with the orientation angle of the key point by applying the rotation matrix:

$$P_{\phi\text{-}side} = R_\phi P_{side} = \begin{bmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{bmatrix} P_{side}. \tag{4.16}$$

At the locations of the rotated feature point set, the intensities of the point pairs are compared to form the binary string descriptor; this encoding is the rotated BRIEF feature descriptor of the key point. Note that the oriented FAST algorithm extracts feature points at different scales; therefore, when BRIEF descriptors are computed, the image is first transformed into the corresponding scale space, and the rotated BRIEF descriptor is obtained from the image at that scale.
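In practice, the oriented-FAST-plus-rotated-BRIEF pipeline described above is available off the shelf in OpenCV. A minimal usage sketch follows; the file names and parameter values are placeholders, not the book's configuration.

```python
import cv2

# Load two grayscale reconstructed thermal images (paths are placeholders).
img1 = cv2.imread("view1_reconstructed.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2_reconstructed.png", cv2.IMREAD_GRAYSCALE)

# scaleFactor and nlevels mirror the scale pyramid of Eqs. (4.8)-(4.9).
orb = cv2.ORB_create(nfeatures=3000, scaleFactor=1.2, nlevels=8)
kp1, des1 = orb.detectAndCompute(img1, None)  # 256-bit rotated-BRIEF descriptors
kp2, des2 = orb.detectAndCompute(img2, None)
print(len(kp1), len(kp2), des1.shape)         # descriptors: (n_keypoints, 32) bytes
```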

4.2.3 Fine Feature Extraction of Reconstructed Thermal Images

Reconstructed thermal images based on infrared video streams are characterized by a degree of edge blurring and a low signal-to-noise ratio. In the qualitative and quantitative analysis of damage areas, the target of analysis is the actual impact zone and its sputtering diffusion area. Therefore, in stitching-oriented feature point detection it is undesirable to detect feature points in the dark background area, which should be blurred away before detection. Traditional image stitching algorithms tend to adopt a linear construction of the feature space, and the global Gaussian blur they use smooths the background region of the reconstructed thermal image and the region of interest at the same scale. Consequently, the setting


of the Gaussian filter conflicts to some extent with the purpose of detection. A larger Gaussian blur parameter filters out unwanted information such as the image background more thoroughly, but it also damages feature boundaries of the reconstructed thermal image that are already indistinct. With a smaller Gaussian blur parameter, the unwanted background information cannot be filtered out effectively, and feature point detection then responds strongly to spurious extreme-point regions. This interferes with the establishment of a typical stitching transformation model, which in turn produces false matches. Considering these factors, this subsection introduces fine feature extraction for reconstructed thermal images. In the stitching stage, nonlinear diffusion filtering is introduced into the construction of the feature scale space. It simultaneously accounts for blurring the background information and preserving the damage region boundaries in the reconstructed thermal image. The additive operator splitting (AOS) algorithm is used to solve the nonlinear diffusion equation, and an arbitrary step size can be used to construct a stable nonlinear scale space. The nonlinear scale space ensures both the filtering of background noise and the preservation of boundaries, thereby enhancing the quality of feature point detection.

4.2.3.1 Construction of Nonlinear Scale Space of Reconstructed Thermal Images

By constructing the nonlinear scale space of the reconstructed thermal image and detecting feature points within it, more image details can be preserved, enabling more refined feature point detection. For the construction of the scale space, the KAZE algorithm adopts a nonlinear scheme consisting of the following steps: construction of the nonlinear scale space from the reconstructed thermal image; detection and precise localization of feature points; determination of the principal direction of each feature point; and generation of the feature descriptors. In the KAZE feature point detection algorithm, the nonlinear scale space is constructed by nonlinear diffusion filtering [7], whose conduction coefficient changes with the local features of the reconstructed thermal image. This contrasts with the Gaussian filtering of the basic feature point detection methods, which can be regarded as linear smoothing that applies a uniform blur to the whole image. In nonlinear diffusion filtering, the conduction coefficient can be adaptively increased in smooth regions of the reconstructed thermal image for improved noise filtering [8], and adaptively decreased in edge and texture regions to better preserve the image edges.


Consider the brightness of a reconstructed thermal image. Its variation across scales can be regarded as the divergence of a flow function [7], described by the nonlinear partial differential equation

$$\frac{\partial B}{\partial t} = \mathrm{div}\left[c(x, y, t) \cdot \nabla B\right], \tag{4.17}$$

where $B$ is the brightness of the reconstructed thermal image, and the operators $\mathrm{div}$ and $\nabla$ denote divergence and gradient, respectively. $c(x, y, t)$ is the conduction function, with the representation

$$c(x, y, t) = g\left(|\nabla B_\sigma(x, y, t)|\right) = \frac{1}{1 + \frac{|\nabla B_\sigma|^2}{k^2}}, \tag{4.18}$$

where $\nabla B_\sigma$ is the gradient of the original image after Gaussian smoothing. The choice of the function $g(\cdot)$ affects which characteristics of the image are retained by the filter; $g(|\nabla B_\sigma|) = 1/(1 + |\nabla B_\sigma|^2 / k^2)$ retains regions of larger width. The parameter $k$ is the contrast factor that controls the diffusion level and determines how much edge information is retained: the larger its value, the less edge information is retained. Since this nonlinear partial differential equation has no analytical solution, numerical methods are used to approximate it. The conductivity matrix $A_d(B^k)$ of the reconstructed thermal image is calculated in each dimension $d$, $(d = 1, \ldots, n)$, and the time step $\tau$ is set. Discretizing the equation with the additive operator splitting method gives the semi-implicit scheme

$$\frac{B^{k+1} - B^k}{\tau} = \sum_{d=1}^{n} A_d\left(B^k\right) B^{k+1}. \tag{4.19}$$

Solving this equation yields the nonlinear diffusion filtering result $B^{k+1}(x, y)$ for all sub-images of the reconstructed thermal image in the nonlinear scale space:

$$B^{k+1} = \left[I - \tau \sum_{d=1}^{n} A_d\left(B^k\right)\right]^{-1} B^k. \tag{4.20}$$
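As an illustration of the semi-implicit scheme, the sketch below performs one diffusion step on a single image row. It is a minimal 1-D illustration under stated assumptions: a real AOS implementation splits the dimensions as in Eq. (4.19) and solves the resulting tridiagonal systems with the Thomas algorithm rather than a dense solve.

```python
import numpy as np

def diffusion_step_1d(B, tau, k):
    """One semi-implicit step (I - tau*A(B^k)) B^{k+1} = B^k of
    Eqs. (4.19)-(4.20), written out for a single image row B."""
    # Perona-Malik conductivity of Eq. (4.18), evaluated per pixel.
    g = 1.0 / (1.0 + (np.gradient(B) / k) ** 2)
    n = B.size
    A = np.zeros((n, n))
    for i in range(n):                      # assemble the diffusion matrix
        if i > 0:
            w = 0.5 * (g[i] + g[i - 1])     # conductivity on the left interface
            A[i, i - 1] = w
            A[i, i] -= w
        if i < n - 1:
            w = 0.5 * (g[i] + g[i + 1])     # conductivity on the right interface
            A[i, i + 1] = w
            A[i, i] -= w
    return np.linalg.solve(np.eye(n) - tau * A, B)   # Eq. (4.20)
```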

The feature scale space construction of the KAZE algorithm is similar to that of SIFT, with scale levels increasing logarithmically, but each KAZE layer keeps the same resolution as the original image. In SIFT, the linear scale space is built by downsampling: each downsampled image forms one pyramid layer, and multiple downsampling operations form the image pyramid, so successive layers have different resolutions. In the KAZE algorithm, each layer has the same resolution as the


original image. The scale space is indexed by the octave number $o$ and the layer number $s$, with the correspondence

$$\sigma_i(o, s) = \sigma_0 2^{(o+s)/S}, \quad o \in [0, \ldots, O-1], \; s \in [0, \ldots, S-1], \; i \in [0, \ldots, O \times S - 1], \tag{4.21}$$

where $\sigma_0$ is the initial scale parameter. In addition, since the nonlinear diffusion filtering model evolves in time rather than scale, the scale parameters must be converted into time units, called the evolution time:

$$t_i = \frac{1}{2}\sigma_i^2, \quad i \in [0, \ldots, O \times S - 1]. \tag{4.22}$$

These evolution times are used to construct the nonlinear scale space. In the nonlinear scale space, the filtered image obtained at $t_i$ does not correspond to the image obtained by convolving the original image with a Gaussian of standard deviation $\sigma_i$. Substituting this time-scale transformation into Eq. (4.20) gives all images of the nonlinear scale space as

$$B^{i+1} = \left[I - (t_{i+1} - t_i) \cdot \sum_{l=1}^{M} A_l\left(B^i\right)\right]^{-1} B^i. \tag{4.23}$$

4.2.3.2 Generation of KAZE Feature Descriptors

After the construction of the nonlinear scale space is completed, the determinant response of the Hessian matrix is calculated at each scale, and feature point detection in the scale space is performed [7]:

$$B_{Hessian} = \sigma^2 \left(B_{xx} B_{yy} - B_{xy}^2\right), \tag{4.24}$$

$$\sigma = \sigma_i / 2^o, \tag{4.25}$$

where $B_{xx}$ and $B_{yy}$ are the second-order differentials in the horizontal and vertical directions, respectively, and $B_{xy}$ is the second-order mixed partial differential. Since the amplitude of spatial differentials decreases with scale, the differential values at each scale must be normalized. Feature point detection then proceeds as follows. A local maximum search is performed on the scale space: at each evolution scale, it is checked whether the feature response value exceeds a predetermined threshold and whether the point is a local maximum in the $3 \times 3$ pixel neighborhood centered on it. This step eliminates points with non-maximum responses.


Each remaining candidate extremum is then compared with the points of the previous scale $i+1$ and the next scale $i-1$ within a neighborhood of size $\sigma_i \times \sigma_i$; if it is still an extremum, it is a feature point. A two-dimensional quadratic function is fitted to the Hessian determinant responses in the $3 \times 3$ pixel neighborhood and its extremum located, enabling sub-pixel localization of the two-dimensional positions of the feature points. The principal direction of each feature point is determined much as in the basic SURF descriptor. If the scale parameter of the feature point is $\sigma_i$, the search radius is set to $6\sigma_i$. A 60-degree sector is swept over this circular region, the sum of the Haar wavelet responses within the sector is computed at each orientation, and the direction with the largest sum is taken as the principal direction. For a feature point with scale parameter $\sigma_i$, a window of size $24\sigma_i \times 24\sigma_i$ centered on the feature point is taken on the gradient image. This window is divided into $4 \times 4$ subregions, each of size $9\sigma_i \times 9\sigma_i$, with adjacent subregions overlapping in a band of width $2\sigma_i$. The differential responses within each subregion are center-weighted with a Gaussian weighting function with $\sigma_1 = 2.5\sigma_i$, and the feature vector of each subregion is formed as described in [9]:

$$d_v = \left[\sum B_x, \; \sum B_y, \; \sum |B_x|, \; \sum |B_y|\right], \tag{4.26}$$

where $\sum B_x$, $\sum B_y$, $\sum |B_x|$, and $\sum |B_y|$ are, respectively, the sums of the horizontal Haar wavelet responses, the vertical responses, and their absolute values over the subregion. The feature vectors of the subregions are then center-weighted with a Gaussian weighting function with $\sigma_2 = 1.5\sigma_i$ over the $4 \times 4$ window to form a 64-dimensional feature vector, which is finally normalized to obtain a reconstructed thermal image feature point description vector with contrast invariance. At this point, the feature point descriptions of the reconstructed thermal images based on the nonlinear scale space and the KAZE descriptors have been obtained. Next, distance-based matching of these descriptors is performed to establish feature point pairs between the two reconstructed thermal images to be stitched.
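A hedged usage sketch of the KAZE pipeline with OpenCV follows; the parameter values are illustrative and the file names are placeholders. Since KAZE descriptors are floating-point vectors, they are matched with the L2 norm and a ratio test rather than the Hamming distance.

```python
import cv2

img3 = cv2.imread("view3_reconstructed.png", cv2.IMREAD_GRAYSCALE)
img4 = cv2.imread("view4_reconstructed.png", cv2.IMREAD_GRAYSCALE)

# threshold is the Hessian response cut-off of Eq. (4.24); nOctaves and
# nOctaveLayers set O and S of Eq. (4.21); diffusivity selects g of Eq. (4.18).
kaze = cv2.KAZE_create(threshold=0.001, nOctaves=4, nOctaveLayers=4,
                       diffusivity=cv2.KAZE_DIFF_PM_G2)
kp3, des3 = kaze.detectAndCompute(img3, None)   # 64-dim float descriptors
kp4, des4 = kaze.detectAndCompute(img4, None)

# 2NN matching with Lowe's ratio test on the L2 distance.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des3, des4, k=2)
        if m.distance < 0.7 * n.distance]
```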

4.3 Alignment Techniques for Reconstructed Thermal Image's Feature Points

After obtaining the feature point descriptors of the reconstructed thermal image, the focus of this section is on how to match the feature points of the reference image with


those of the aligned image and how to obtain the image transformation model based on the matching. A suitable matching algorithm must be selected for the feature points of reconstructed thermal images: owing to the sparsity of the infrared image and its noise interference, the algorithm needs good robustness and high matching accuracy. Therefore, building on the commonly used matching algorithms, the rejection of mis-matched feature points is further described. After the feature points are matched and the mis-matches rejected, the transformation relationship between the reconstructed thermal reference image and the alignment image can be established, and the image stitching realized.

4.3.1 Alignment of Feature Points of Reconstructed Thermal Images

Generally, once the feature description of the reconstructed thermal image has been constructed, a spatial distance metric between feature points can be computed to decide whether corresponding feature points match. Given the vector form of the descriptors, the Euclidean distance is commonly used to describe the distance between feature points of reconstructed thermal images; it measures the difference between feature vectors effectively but is rather sensitive to noise and distortion. Other spatial distance measures, such as the cosine distance, are also applicable: for example, the local binary pattern (LBP) algorithm can extract temperature features from the reconstructed thermal reference image and the aligned image, after which the cosine distance between the feature vectors is used for matching. The ratio of nearest-neighbor to second-nearest-neighbor distance is likewise used across algorithms as a matching criterion: for each feature point of the target image, the Euclidean distances to all feature vectors of the reference image are computed, the nearest and second-nearest neighbors are selected, and the pair is accepted as a match if the ratio of the two distances is below a set threshold. In the ORB algorithm, since the extracted descriptors of the reconstructed thermal images are binary, the Hamming distance can be used for matching; because the Hamming distance reduces to a simple dissimilarity computation on the binary descriptors, it greatly improves matching efficiency. Under the threshold-based nearest-neighbor distance ratio criterion, for each descriptor in the set $S_{P-1}$ of the reconstructed thermal reference image $P_1$, calculate its distance to every feature descriptor in the set $S_{P-2}$ of the reconstructed thermal alignment image $P_2$ and place the results in $D_{P_1-P_2} = \{d_{P_1-1}, d_{P_1-2}, \ldots, d_{P_1-num(S_{P-2})}\}$. Then, find the matching descriptor pairs using the 2NN algorithm. The set $D_{P_1-P_2}$

is sorted to obtain $sorted\,D_{P_1-P_2} = \{d'_{P_1-1}, d'_{P_1-2}, \ldots, d'_{P_1-num(S_{P-2})}\}$ in ascending order. If $d'_{P_1-1} / d'_{P_1-2} < \varepsilon_{2NN}$ is satisfied, the corresponding descriptors of the two subsets are declared a match. Traversing all feature point pairs yields the initial matching feature point set $D_{match}^{1,2} = \{D^1, D^2\} = \{(d_1^1, d_2^1)_1, (d_1^2, d_2^2)_2, \ldots, (d_1^n, d_2^n)_n\}$. Based on the feature points detected in the reconstructed thermal images and their initial matching by the Hamming-distance nearest neighbor search, the corresponding matching point pairs are obtained. Given the importance of stitching accuracy for the reconstructed thermal image, the mis-matched pairs among the initial matches must be removed: the initial matching point pairs are purified to obtain the correct transformation model, and the reconstructed thermal alignment image is transformed into the region of the reference image using this model to realize the stitching of two or more images. In this subsection, the random sample consensus (RANSAC) algorithm and an improved version of it are used to fit the noisy data iteratively and find the correct homography matrix of the spatial transformation. That is, given a set of data point pairs containing both correct and noisy points, the algorithm finds a model that selects the correct points, the inliers (correct matches), while eliminating the outliers (wrong matches and noise points). The main design concepts and algorithm steps are outlined below. As described above, the set of feature point pairs $D_{match}^{1,2} = \{D^1, D^2\} = \{(d_1^1, d_2^1)_1, (d_1^2, d_2^2)_2, \ldots, (d_1^n, d_2^n)_n\}$, where $n$ is the number of detected feature point pairs and $d$ is a 2-dimensional coordinate point, e.g. $d_1^1 = (x_1^1, y_1^1)$, is obtained from the reconstructed thermal reference image and the alignment image by distance matching; it serves as the input to the de-mismatching algorithm for the feature point pair set. Step 1: Set the threshold $\varepsilon_{max}$ for the maximum number of iterations, the threshold $\varepsilon_{inner}$ for the distance metric of points within the model, and the threshold $\varepsilon_{num}$ for the number of interior points, and initialize the iteration counter $i = 0$. While $i \leq \varepsilon_{max}$ do: Step 2: From the input feature point pair set

$$D_{match}^{1,2} = \{D^1, D^2\} = \{(d_1^1, d_2^1)_1, (d_1^2, d_2^2)_2, \ldots, (d_1^n, d_2^n)_n\},$$

randomly take 4 feature point pairs:

$$D'^{1,2} = \left[(d'^1_1, d'^1_2), (d'^2_1, d'^2_2), (d'^3_1, d'^3_2), (d'^4_1, d'^4_2)\right].$$


Step 3: Solve for the homography matrix $M$ of the image stitching model. Taking the homogeneous coordinates of the $k$th reconstructed thermal image point pair $d_2^k = (x_2^k, y_2^k)$ as an example [11], the homography satisfies

$$M \cdot d_2^k = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} x_2^k \\ y_2^k \\ 1 \end{bmatrix} = \begin{bmatrix} x_1^k \\ y_1^k \\ 1 \end{bmatrix} = d_1^k. \tag{4.27}$$

In homogeneous coordinates, although the homography matrix is in $\mathbb{R}^{3\times3}$, it has only 8 degrees of freedom; it can therefore be calculated from the 4 random point pairs $D' = \left[(d'^1_1, d'^1_2), (d'^2_1, d'^2_2), (d'^3_1, d'^3_2), (d'^4_1, d'^4_2)\right]$ of Step 2. Step 4: Form the set of remaining point pairs, excluding the pairs $D'^{1,2}$ used to compute the homography,

$$D_{remain}^{1,2} = \{D_{remain}^1, D_{remain}^2\} = \left[(d_1^1, d_2^1)_1, (d_1^2, d_2^2)_2, \ldots, (d_1^{n-4}, d_2^{n-4})_{n-4}\right],$$

and apply the homography matrix to the feature points $D_{remain}^2$ of the reconstructed thermal alignment image to obtain the set of homography-transformed coordinates $D'^2_{remain}$:

$$M \cdot D_{remain}^2 = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} x_2^1 & \cdots & x_2^{n-4} \\ y_2^1 & \cdots & y_2^{n-4} \\ 1 & \cdots & 1 \end{bmatrix} = \begin{bmatrix} x'^1_2 & \cdots & x'^{n-4}_2 \\ y'^1_2 & \cdots & y'^{n-4}_2 \\ 1 & \cdots & 1 \end{bmatrix} = D'^2_{remain}. \tag{4.28}$$

Step 5: Calculate in turn the Euclidean distances between the homography-transformed coordinates $D'^2_{remain}$ and the remaining point set $D_{remain}^1$ from the reconstructed thermal reference image, obtaining the set of distance metrics of the infrared image stitching model

$$dis_{D^1_{remain}-D'^2_{remain}} = \left[dis\left(d_1^1, d'^1_2\right), dis\left(d_1^2, d'^2_2\right), \ldots, dis\left(d_1^{n-4}, d'^{n-4}_2\right)\right]. \tag{4.29}$$

Step 6: Based on the imaging characteristics of reconstructed thermal images, screen the distance metrics of the stitching model with the interior point judgment threshold $\varepsilon_{inner}$, giving the set of interior points $inners$:

$$inners = \left\{\left(d_1^k, d_2^k\right) \;\middle|\; dis\left(d_1^k, d'^k_2\right) < \varepsilon_{inner}\right\}. \tag{4.30}$$

Step 7: Determine whether the number of points in the interior set $inners$ meets the threshold $\varepsilon_{num}$. If $inner_{num} \leq \varepsilon_{num}$, increment the iteration counter $i = i + 1$, return to Step 2, and randomly re-select 4 matching point pairs for calculation.


Fig. 4.9 Image stitching with RANSAC algorithm schematic

Step 8: If $inner_{num} > \varepsilon_{num}$, output the set of interior points $inners$ and the model homography transformation matrix $M_{3\times3}$. Finally, as displayed in Fig. 4.9, the reconstructed thermal alignment image $P_2$ is transformed into the model space of the reference image $P_1$ based on the output interior point set $inners$ and the homography matrix $M$, completing the stitching of the reconstructed thermal images.
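Steps 1-8 correspond closely to OpenCV's built-in robust estimator. The sketch below, reusing the hypothetical `kp1`/`des1` and `kp2`/`des2` from the ORB example in Sect. 4.2.2, performs the Hamming-distance 2NN matching and the RANSAC homography estimation described above.

```python
import cv2
import numpy as np

# Binary ORB descriptors are compared with the Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.8 * n.distance]  # 2NN ratio test

# Points from the alignment image P2 (src) and reference image P1 (dst),
# matching the mapping direction of Eq. (4.27).
src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC repeatedly samples 4 pairs (Steps 2-8) and returns the
# homography M together with the inlier mask (the set "inners").
M, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC,
                                    ransacReprojThreshold=3.0)

# Warp the alignment image P2 into the model space of the reference P1.
h, w = img1.shape[:2]
panorama = cv2.warpPerspective(img2, M, (w * 2, h))
panorama[0:h, 0:w] = img1
```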

4.3.2 Analysis of Reconstructed Thermal Image's Feature Point Alignment Techniques

The RANSAC algorithm involves two important judgment thresholds: the stitching model interior point threshold $\varepsilon_{inner}$ and the interior point number threshold $\varepsilon_{num}$; to some extent, setting these thresholds requires empirical guidance [10]. The maximum number of iterations likewise requires empirical guidance. If the RANSAC algorithm iterates too few times, the optimal model cannot be found; if it iterates too many times, the computation time increases, which is not conducive to real-time thermal infrared inspection of the specimen. This subsection analyzes an adaptive iteration count for


reconstructed thermal image feature point alignment and a more robust alignment method. In the image alignment model calculation, if the percentage of mis-matched points between images is small, the iteration should stop as early as possible; conversely, if a high percentage of mis-matched points is detected, sufficient iterations are needed before terminating the algorithm to obtain reliable results [11]. For two images to be stitched, we know from the previous section that at least four feature point pairs are needed to calculate the model parameters. Require that, after applying the RANSAC model matching algorithm, all outlier points are excluded with high certainty, i.e., with probability $q$, and let $p$ be the proportion of the initial data belonging to the in-model points, i.e., the probability that a selected point pair is an inlier. The optimal number of loop iterations can be estimated accordingly. The probability that 4 randomly selected points are all inliers of the computed model is $p^4$, so the probability that at least one of the 4 points is an outlier is $1 - p^4$. Over $n$ iterations, the probability that every sampled set of matching point pairs contains an outlier is $(1 - p^4)^n$, and hence the probability that at least one sampled set consists only of inliers is $1 - (1 - p^4)^n$. For the algorithm to reach the required certainty, this probability must satisfy

$$1 - \left(1 - p^4\right)^n \geq q. \tag{4.31}$$

After rearrangement and taking logarithms, Eq. (4.31) can be rewritten as $\ln(1 - q) \geq n \cdot \ln(1 - p^4)$, so the number of iterations $n$ can be expressed as

$$n \geq \frac{\ln(1 - q)}{\ln(1 - p^4)}. \tag{4.32}$$

Therefore, the optimal number of model computation iterations can be estimated.
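Eq. (4.32) is easy to evaluate in code; the helper below is a small illustration (the function name and sample values are ours, not the book's):

```python
import math

def ransac_iterations(p, q, s=4):
    """Number of iterations n from Eq. (4.32) so that, with confidence q,
    at least one all-inlier sample of size s is drawn (inlier ratio p)."""
    return math.ceil(math.log(1.0 - q) / math.log(1.0 - p ** s))

print(ransac_iterations(p=0.5, q=0.99))   # -> 72 iterations
```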



.

( ) ω Ri2 ,

(4.33)

k

ω

.

(

Ri2

)

{ =

0, R 2 < t 2 , C, R 2 ≥ t 2 ,

(4.34)

where $R_i^2$ denotes the difference between the $i$th actual data value $(x_i, y_i)$ and the theoretical value $(\hat{x}_i, \hat{y}_i)$, $\omega(R_i^2)$ is the data error weight, and $Cost_{RANSAC}$ is the overall error of the RANSAC model. It can be seen that if the threshold $t$ is set too large, all point pairs of the same class in the feature point pair set of


the reconstructed thermal image yield the same cost, which makes inliers and outliers indistinguishable. To avoid situations where the model calculation is compromised by an inappropriate threshold, the M-estimation variant of RANSAC is presented in the remainder of this subsection. The M-estimate sample consensus (MSAC) algorithm is similar to the basic random sample consensus algorithm but overcomes its threshold sensitivity by modifying the cost function [12]. It handles noise and outliers in the data with higher robustness and accuracy. In the stitching of reconstructed thermal images, the MSAC algorithm can be used to estimate the transformation matrix between images. Reconstructed images usually have low contrast and a low signal-to-noise ratio, so matching errors and noise points readily arise during stitching; MSAC removes these noise points and outliers through random sampling and robust estimation, improving the accuracy and robustness of reconstructed image stitching. The MSAC method uses the following objective function:

$$Cost_{MSAC} = \sum_k \omega\left(M_i^2\right), \tag{4.35}$$

$$\omega\left(M^2\right) = \begin{cases} M^2, & M^2 < t^2, \\ t^2, & M^2 \geq t^2, \end{cases} \tag{4.36}$$

where $M_i^2$ denotes the difference between the $i$th actual data value $(x_i, y_i)$ and the theoretical value $(\hat{x}_i, \hat{y}_i)$, $\omega(M^2)$ is the data error weight, and $Cost_{MSAC}$ is the overall error of the MSAC model. The threshold is $t = 1.96\sigma$, where $\sigma$ is usually taken as the standard deviation of the Gaussian noise in the image. The cost of an interior point in the feature point pair set of the reconstructed thermal image is thus determined by its proximity to the threshold, enabling a good distinction between mis-matched and correctly matched points, i.e., between outer and inner points. The application of MSAC in the stitching of reconstructed images proceeds as follows. A random set of feature points is selected and the transformation matrix is calculated. For all feature points, their positions in the transformed image are computed, along with the distances to their matching feature points. Feature points with a distance below the threshold are labeled interior points; those above it are labeled exterior points. If the number of interior points exceeds a certain threshold, the transformation matrix is re-estimated and the interior and exterior points are recalculated.


These steps are repeated until the maximum number of iterations is reached or enough interior points are found. The transformation matrix is then re-estimated from all interior points, and the two images are stitched together. Applying the MSAC algorithm improves the accuracy and robustness of reconstructed image stitching, making it better suited to reconstructed image processing and analysis.
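The difference between the two cost functions can be sketched in a few lines; the residual values below are illustrative, not experimental data.

```python
import numpy as np

def ransac_cost(residuals, t, C=1.0):
    """Eq. (4.34): every inlier is free, every outlier pays a constant C."""
    return np.where(residuals ** 2 < t ** 2, 0.0, C).sum()

def msac_cost(residuals, t):
    """Eqs. (4.35)-(4.36): inliers pay their squared residual, outliers are
    capped at t^2, so near-threshold fits remain distinguishable."""
    r2 = residuals ** 2
    return np.where(r2 < t ** 2, r2, t ** 2).sum()

# With sigma the image noise level, t = 1.96*sigma covers roughly 95%
# of Gaussian-distributed inlier residuals.
sigma = 1.0
t = 1.96 * sigma
r = np.array([0.2, 0.8, 1.5, 4.0])
print(ransac_cost(r, t), msac_cost(r, t))
```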

4.4 Stitching Quality Improvement Method of Reconstructed Thermal Images

In the stitching of reconstructed thermal images, unsatisfactory results often arise from problems in the actual detection process. For example, stitching seams are generated when the images are laid out on the panoramic canvas, and the stitched result may be unexpectedly distorted by the spatial transformation when the parallax during shooting is large. For quantitative detection based on reconstructed thermal images, such undesirable results may interfere with the subsequent quantitative steps and lead to misjudgment of the damage area. This section elaborates these two problems.

4.4.1 Seamless Stitching Fusion of Stitched Reconstructed Thermal Images

The purpose of reconstructed thermal image stitching is to assemble the multi-view infrared inspection results of larger tested spacecraft components into a single panoramic image, so that defect areas can be located and quantitatively analyzed over a wide view; analyzing the stitched reconstructed thermal images is an effective way to achieve this. The stitching result of a reconstructed thermal image contains a seam caused by pixel overlap in the overlapping area between the reference image and the aligned image. Referring to Fig. 4.10, this seam is not merely a visual blemish: since the pseudo-colors of a reconstructed thermal image encode the damage and the temperature differences of the measured part, a seam that falls inside a potential damage area alters the color or grayscale of the pixels there, affecting the segmentation and localization of the damage area and introducing errors into damage quantification. Therefore, to avoid stitching seams caused by overlapping pixels of the two images, we design in this subsection a seamless stitching of reconstructed thermal images based on weighted-average (alpha-channel) fusion.


Fig. 4.10 Splice of the stitched result image

Fig. 4.11 Stitched result image based on overlapping area division

First, the stitched result of the reconstructed thermal image is divided into three regions, as shown in Fig. 4.11. Regions $a$ and $c$ contain no overlap between the stitched images, while region $b$ is the overlap of the two original reconstructed thermal images. The value of each color channel of a pixel in the overlap region should be determined by the color channel values of the corresponding points in the two images being stitched [13], using the weighted formula

$$P_{overlap} = k \times P_{left} + (1 - k) \times P_{right}. \tag{4.37}$$

The parameter $k$ is the gradient factor of the stitching seam, satisfying $0 < k < 1$. In the overlapping region, $k$ decreases gradually from 1 to 0 in the


direction from the left image to the right image, achieving a smooth transition across the overlap region of the stitched result. At stitching time, the point $P_{left}$ in the overlap region of the left image coincides with the point $P_{right}$ in the right image, and the stitched point $P_{overlap}$ is obtained by weighting the overlapping $P_{left}$ and $P_{right}$ pixels, thereby ensuring a smooth transition.
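Eq. (4.37) amounts to a per-column linear alpha blend over the overlap. A minimal sketch follows, assuming `left` and `right` are the already-aligned overlap strips of equal size (the function name is ours):

```python
import numpy as np

def blend_overlap(left, right):
    """Weighted fusion of Eq. (4.37): k falls linearly from 1 to 0 across
    the overlap, giving a seamless left-to-right transition."""
    h, w = left.shape[:2]
    k = np.linspace(1.0, 0.0, w)       # gradient factor of the seam
    if left.ndim == 3:                 # broadcast over color channels
        k = k[None, :, None]
    else:
        k = k[None, :]
    return (k * left + (1.0 - k) * right).astype(left.dtype)
```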

4.4.2 Natural Stitching Method with Large Parallax for Reconstructed Thermal Images

Consider the acquisition of reconstructed thermal images from different viewpoints: two reconstructed thermal images with overlapping areas are to be stitched, the reference image denoted $P_1$ and the image to be aligned and stitched denoted $P_2$. When the overlapping area between $P_1$ and $P_2$ cannot be approximated as a plane, or the scene exhibits depth changes, transforming and stitching the two images with a single traditional homography inevitably causes ghosting in the overlapping part and degrades the stitching quality. The image to be matched is therefore divided into $C_1 \times C_2$ cells, and the transformation between each pair of corresponding grid centroids $p_{ij}(x, y)$ and $p'_{ij}(x, y)$ follows the homography principle, i.e., $\tilde{p}'_{ij} = H_{ij} \tilde{p}_{ij}$, where $\tilde{p}'_{ij} = [p_{ij}^T, 1]^T$ is the homogeneous coordinate of the grid centroid and $H$ is the local $3 \times 3$ homography matrix. For a set of $N$ matched point pairs $\{\tilde{p}_i\}_{i=1}^N$ and $\{\tilde{p}'_i\}_{i=1}^N$, the local homography transformation matrix $H$ can be estimated using the following equation from [14]:

$$h_j = \arg\min_{h_j} \sum_{i=1}^{N} \left\| \omega_{i,j} \begin{bmatrix} a_{i,1} \\ a_{i,2} \end{bmatrix} h \right\|^2 = \arg\min_{h_j} \left\| W_j A h \right\|^2, \tag{4.38}$$

(4.38) }

where the weight diagonal array is .W j = diag w1, j , w1, j , w2, j , w2, j , . . . , w N , j , w N , j . For each pixel in the non-overlapping region, the projection transform is calculated using a linear weighting of the local single strain in the overlapping region, i.e., a homography linearization. For the overlapping region of the reconstructed thermal image to be stitched, the linearization of the homography matrix at any proximity .q near the point . p of the overlapping region is performed by expanding the Taylor formula for the homography matrix at point .q as follows ) ( h (q) = h ( p) + Jh ( p) (q − p) + o ||q − p||2 ,

.

(4.39)

4.5 Experiment and Analysis

119

where .Jh ( p) is the Jacobi matrix of the homography matrix of point .q at point . p. Considering that it is not straightforward to calculate the linearization of point.q in the non-overlapping region [14]. And there may be multiple points at the boundary of the overlapping and non-overlapping regions and it is not known where to calculate the Jacobi matrix, so the anchor points are linearized at the boundary and the weighted average is calculated. At the boundary, a sequence of anchor points, denoted as { }R . pi at the boundary, the weighted combination of linearization is expressed as i=1 h L (q) =

R ∑

.

αi (h ( pi ) + Jh ( pi ) (qi − pi )),

(4.40)

i=1

( || ||2 / ) where .αi = exp −||q − p j || σ 2 is the Gaussian weight, based on which the projection transform better takes into account the local structure around the anchor { }R point. pi i=1 at the boundary, and thus improves the accuracy of local area alignment of the reconstructed thermal image. For all the blocks in the overlapping region of the stitched reconstructed thermal image, the homography transformation matrix in each block is changing the weights as the position changes, and thus can be flexibly adapted to the reconstructed thermal image stitching to obtain detailed stitched results.

4.5 Experiment and Analysis In this section, the reconstructed thermal images of the four inspection views of HVI damaged specimen Hyper-1 will be stitched separately, based on the stitching algorithm for the reconstructed thermal images, which is described in the previous section. Among them, reconstructed thermal images of views 1 and 2 contain core damage areas with more complex damage types. Meanwhile, reconstructed thermal images of views 3 and 4 contain more thermal diffusion regions, and the image information is more blurred and the signal-to-noise ratio is relatively low. The fast stitching of the reconstructed thermal images of view 1 and 2 of HVI damaged specimen Hyper-1 will be introduced in Sect. 4.5.1. The seamless processing of the stitching results will be realized. In Sect. 4.5.2, we introduce the fast stitching of the reconstructed thermal images of viewpoint 3 and 4 of HVI damaged specimen Hyper-1. Finally, the stitched result of views 1 and 2 is stitched with those of views 3 and 4 again, in order to achieve the panoramic stitching effect and realize the seamless processing of the final stitched results.

120

4 Stitching Technique for Reconstructed Thermal Images

4.5.1 Fast Feature Extraction Stitching Experiment for Reconstructed Thermal Images In this subsection, the ORB feature detection and description algorithm will be used to detect and stitch the feature points of the reconstructed thermal images of views 1 and 2 of HVI damaged specimen Hyper-1, which are shown in Fig. 4.12. First, the input image is converted into a grayscale image, and the feature regions of the reconstructed thermal grayscale image are detected with feature points and described using the BRIEF operator. And the detected features and their scales are depicted on the original input image, which are shown in Fig. 4.13. Further, the feature points are matched and the detected feature points are removed by mis-matching using the random sampling consistency algorithm. As can be seen from Table 4.1, in the segment of feature point detection, 2938 and 3083 feature points were detected in the two reconstructed thermal images of this specimen, considering the complexity of the reconstructed thermal images. Using distance matching, a one-to-one unique initial matching feature point pair of 504 pairs was derived, as shown in Fig. 4.14. Obviously, some of the matching results shown in Fig. 4.14 are

Fig. 4.12 Reconstructed thermal images of views 1 and 2 for HVI damaged specimen Hyper-1

Fig. 4.13 Feature point location labeling of reconstructed thermal images of view 1 and 2

4.5 Experiment and Analysis

121

Table 4.1 Number of feature point detection and matching point pairs Initial number Initial number Number of Number of of detection of detection matched de-mismatched points (view 1 points (view 2 point pairs result image) image) point pairs 2938

3083

504

347

Feature point success rate of matching 68.85%

Fig. 4.14 Initial pairing of feature points for reconstructed thermal images of views 1 and 2

in the mis-matched state, and their yellow connecting lines are not consistent with the actual detected view displacement. Further, the RANSAC algorithm is used to remove the mis-matched point pairs and establish the matching relationship between the two reconstructed thermal images. The results are shown in Fig. 4.15. After the RANSAC de-mismatching algorithm, 347 pairs of matched point pairs satisfying the threshold requirements are established, and the homography matrix of the splicing transformation model is successfully established, as shown in Eq. (4.41): ⎡

⎤ 0.9920 −0.0029 269.6123 4.8659 ⎦ . . M = ⎣ −0.0008 0.9903 0 0 1

(4.41)

Based on the above image stitching model, the stitched image is obtained as shown in Fig. 4.16. It can be seen that there is an obvious stitching gap at the bottom of Fig. 4.16 at the scale range 250–300. Considering the influence of the obvious stitching gap on the subsequent quantitative analysis, the stitching results are processed using the seamless processing of the stitched reconstructed thermal image with weighted average. As shown in Fig. 4.17, based on the preliminary stitched results, a mask is created for the overlapping area of the stitched image, and a gradient mask of gray values from right to left is made within this mask. The gray value of the gradient is used as a weight, and a weighted average is calculated for each pixel of each channel in


Fig. 4.15 The results of feature point de-mismatching for reconstructed thermal images of views 1 and 2

Fig. 4.16 Preliminary stitched results for views 1 and 2

Fig. 4.17 Seamless processing masks in stitched results for views 1 and 2


Fig. 4.18 Final stitched image for views 1 and 2

the overlapping area of the stitched image. Using this post-processing, a seamless stitched image is finally obtained, as shown in Fig. 4.18.
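The gradient-mask weighted averaging described above can be sketched as follows; the overlap band location, the gradient direction and the image names are assumptions for illustration, not values from the book.

```python
import cv2
import numpy as np

def blend_overlap(stitched, warped, x0, x1):
    """Weighted-average seamless blending of a horizontal overlap band.

    stitched, warped: two aligned images of identical size that overlap
    in the column range [x0, x1); a left-to-right gray gradient is used
    as the per-pixel weight (the gradient direction is an assumption).
    """
    out = stitched.astype(np.float32).copy()
    width = x1 - x0
    # Gradient mask: weight of `warped` rises from 0 to 1 across the band
    w = np.linspace(0.0, 1.0, width, dtype=np.float32)[None, :]
    if stitched.ndim == 3:          # broadcast over color channels
        w = w[..., None]
    band_a = stitched[:, x0:x1].astype(np.float32)
    band_b = warped[:, x0:x1].astype(np.float32)
    out[:, x0:x1] = (1.0 - w) * band_a + w * band_b
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: warp view 2 into view 1's frame with the homography M,
# then blend the shared band (x0, x1 would come from the overlap mask):
# warped2 = cv2.warpPerspective(img2, np.linalg.inv(M), (W, H))
# result = blend_overlap(canvas_with_img1, warped2, x0=250, x1=300)
```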

4.5.2 Fine Feature Extraction Stitching Experiments of Reconstructed Thermal Images

In this subsection, the KAZE feature detection and description algorithm is used to detect and stitch the reconstructed thermal images of HVI-damaged specimen Hyper-1. Since the stitching process has already been demonstrated on views 1 and 2 of the actual damaged specimen in Sect. 4.5.1, the KAZE feature point detection and matching algorithm is demonstrated here on views 3 and 4 of specimen Hyper-1, shown in Fig. 4.19. Compared with views 1 and 2, the image information in view 3 is more blurred and its signal-to-noise ratio is relatively low, owing to the larger thermal diffusion region displayed in Fig. 4.20. As shown in Fig. 4.19, the reconstructed thermal images of views 3 and 4 were taken as input and converted to grayscale images, and feature points were detected and matched using the KAZE feature point detection and description method. As can be seen from Table 4.2, the complexity of these reconstructed thermal images is lower than that of views 1 and 2, given the larger thermal diffusion areas: 680 and 1268 feature points were detected in the two reconstructed thermal


Fig. 4.19 Reconstructed thermal images of views 3 and 4 for HVI damaged specimen Hyper-1

Fig. 4.20 KAZE feature point description of reconstructed thermal images of views 3 and 4 for specimen Hyper-1

Table 4.2 KAZE feature point detection results (FAST-SURF detection results for comparison)

Method | Initial detection points (view 3 image) | Initial detection points (view 4 image) | Matched point pairs | De-mismatched result point pairs | Matching success rate (%)
KAZE | 680 | 1268 | 148 | 103 | 69.60
FAST-SURF | 2362 | 4055 | 188 | 78 | 41.49

images, respectively. Using distance pairing, 148 one-to-one unique initial matching feature point pairs were derived, as shown in Fig. 4.21. Obviously, some of the matching results shown in Fig. 4.21 are mis-matched: their yellow connecting lines are not consistent with the actual displacement between the detected views. The RANSAC algorithm is therefore used to remove the mis-matched point pairs and establish the matching relationship between the two reconstructed thermal images. The results are shown in Fig. 4.22. After applying the RANSAC de-mismatching


Fig. 4.21 Initial pairing of feature points for two reconstructed thermal images of views 3 and 4

Fig. 4.22 The results of feature point de-mismatching in two reconstructed thermal images of views 3 and 4

algorithm, 103 matched point pairs satisfying the threshold requirements are established, giving a feature pair matching success rate of 69.60%. As shown in Table 4.2, compared with feature point detection in linear scale space, feature point detection in the KAZE nonlinear scale space is more refined. As can be seen from Table 4.3, the feature point descriptors generated by the KAZE algorithm for the reconstructed thermal images are floating point vectors with higher dimensionality and better discrimination, which suits scenarios such as the reconstructed thermal images of views 3 and 4 that require more refined image matching and recognition. For the KAZE feature points and descriptors of the reconstructed thermal images of views 3 and 4 of the actual damaged specimen, the correct matching point pairs are obtained using distance-based matching and the MSAC algorithm to eliminate mis-matched points, as shown in Figs. 4.21 and 4.22, and the transformation matching model of the reconstructed thermal images is established


Table 4.3 608 feature points with floating point 64-dimensional descriptors for the reconstructed thermal image of view 3

No. | 1 | 2 | ⋯ | 32 | 33 | ⋯ | 64
1 | 0.0741 | 0.05 | ⋯ | 0.1916 | −0.0919 | ⋯ | 0.0121
2 | 0.061 | 0.1354 | ⋯ | 0.1466 | −0.1383 | ⋯ | 0.1367
⋯ | ⋯ | ⋯ | ⋯ | ⋯ | ⋯ | ⋯ | ⋯
319 | −0.0037 | 0.0348 | ⋯ | 0.0025 | 0.0107 | ⋯ | −0.0044
⋯ | ⋯ | ⋯ | ⋯ | ⋯ | ⋯ | ⋯ | ⋯
608 | −0.0059 | −0.0332 | ⋯ | 0.0059 | −0.0005 | ⋯ | 0.3384

(Columns are the 64 KAZE descriptor dimensions.)

Fig. 4.23 Preliminary stitching results for views 3 and 4

accordingly. The homography matrix of the image transformation is shown in Eq. (4.42):

$$M = \begin{bmatrix} 0.9735 & -0.0094 & 476.3926 \\ 0.0111 & 1.0097 & 24.9681 \\ 0 & 0 & 1 \end{bmatrix}. \quad (4.42)$$

Based on the above stitching model, the stitched image is obtained as shown in Fig. 4.23. Several obvious stitching gaps of different degrees can be seen within the yellow box marked in Fig. 4.23. Considering the influence of these gaps on the subsequent quantitative analysis, the stitched result is processed with the weighted-average seamless processing of the stitched reconstructed thermal image, as displayed in Fig. 4.24. After seamless processing, the final seamless stitched result for views 3 and 4 is shown in Fig. 4.25.
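For reference, a KAZE-based variant of the same matching pipeline can be sketched with OpenCV as below. KAZE produces 64-dimensional floating point descriptors by default, so the L2 norm is used for matching; since OpenCV does not expose an estimator under the name MSAC, RANSAC is used here as a stand-in for the de-mismatching step. File names are placeholders.

```python
import cv2
import numpy as np

img3 = cv2.imread("view3_reconstructed.png", cv2.IMREAD_GRAYSCALE)
img4 = cv2.imread("view4_reconstructed.png", cv2.IMREAD_GRAYSCALE)

# KAZE builds a nonlinear scale space and yields 64-dim float
# descriptors (extended=False mode), hence NORM_L2 for matching
kaze = cv2.KAZE_create()
kp3, des3 = kaze.detectAndCompute(img3, None)
kp4, des4 = kaze.detectAndCompute(img4, None)

bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = bf.match(des3, des4)

src = np.float32([kp3[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp4[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
# The book uses MSAC for de-mismatching; OpenCV's robust estimator
# (RANSAC here) plays the same role in this sketch.
M, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
```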


Fig. 4.24 Seamless processing masks in stitched results for views 3 and 4

Fig. 4.25 Final stitched image for views 3 and 4

In order to obtain the complete damage area, the stitched reconstructed thermal image of views 1 and 2 was further stitched with that of views 3 and 4. The above process was repeated to detect and match feature points between the two stitched reconstructed thermal images, as shown in Fig. 4.26. The left image of Fig. 4.27 shows the preliminary stitching result for the complete damage area using all four views; obvious stitching seams can be seen within the yellow box, so seamless processing is applied to remove this artifact, which could interfere with the subsequent quantitative analysis experiments. The right image of Fig. 4.27 displays the four-view stitching result after seamless weighted fusion, which is free of stitching


Fig. 4.26 Four-view stitched reconstructed thermal images in the RANSAC process

Fig. 4.27 Regional stitching results of HVI damaged specimen Hyper-1

seams and ensures the quality of the stitched image, providing strong support for the subsequent quantitative analysis of spacecraft damage defects.

4.6 Summary

Based on the need for panoramic detection of spacecraft damage areas, this chapter introduced the stitching of reconstructed thermal images from multiple detection views of spacecraft damage areas. Regarding the features of the reconstructed thermal image, the construction of its scale space, including the pyramid scale space model, was explained. Fast and fine feature point detection methods for reconstructed thermal images were introduced


for specific scene requirements. Distance-based pairing of these feature points was then performed and mis-matches were removed. Considering the actual stitching results, this chapter also introduced seamless fusion processing of the stitched reconstructed thermal images. Finally, the above algorithms were experimentally validated on HVI-damaged specimen Hyper-1. The stitched results based on the multi-view reconstructed thermal images of the spacecraft will guide the subsequent quantitative analysis and positioning of spacecraft defects.


Chapter 5

Weight Vector Adjustment-Based Multi-objective Segmentation of Reconstructed Thermal Images

Complex damage imposes special segmentation needs during reconstructed thermal image segmentation. In this chapter, three segmentation objective functions oriented towards the needs of noise cancellation, detail preservation and edge retention are considered simultaneously to solve the complex damage segmentation problem. However, in practical multi-objective problems, the Pareto fronts are not ideally continuous and uniform. This chapter therefore proposes two weight vector adjustment methods for non-regular Pareto front surfaces: one is a crowding-degree-based adaptive weight vector adjustment method, viewed from the perspective of the population's individual distribution; the other is an effective-region incremental learning and PDM-based adjustment method, viewed from the perspective of learning the Pareto front surface shape. Experiments show that the weight vector adjustment methods can cope with irregular Pareto front surface problems such as fracture and unevenness.

5.1 Introduction

The segmentation of reconstructed thermal images is the key step of defect recognition and quantification. After obtaining reconstructed thermal images of different types of defects, it is essential to extract and quantify the specific form of the defects. Segmentation of reconstructed thermal images refers to the process of using image segmentation technology to divide the image into different areas and extract the defect targets of interest. In the segmentation process, it is important to separate as much of the damaged area as possible from the material background, so that damage detection is complete, while ensuring that as little of the background as possible is incorrectly classified as damage, so that damage detection is accurate [1, 2].


5.2 The Challenge of Complex Damage Segmentation

Traditional image segmentation methods generally include threshold-based segmentation [3, 4], boundary-detection-based segmentation [5, 6], region-based segmentation [7], etc. In recent years, new segmentation methods based on graph theory [8, 9] and clustering [10–12] have gradually emerged. Clustering divides a collection of objects into classes made up of similar objects. Applied to image segmentation, clustering groups similar pixels into the same region or image block and iteratively corrects the clustering results until convergence, thus forming the segmentation result. This chapter focuses on the application of clustering algorithms to reconstructed thermal image segmentation.

Due to the particularity of space debris impact, a hypervelocity impact will not only produce various types of damage on the surface, subsurface and interior of materials, but the energy and heat generated in the collision will also melt part of the material to different degrees, forming more complex spacecraft impact damage. Therefore, when segmenting the reconstructed thermal images of spacecraft, the main challenges are as follows:

(1) Reducing noise interference in the imaging link of the infrared imaging system: during infrared thermal imaging, the infrared image sensor inevitably suffers random noise during the capture, loading and transmission of the detection results.

(2) Reducing interference from the detection environment and material surface impurities: during detection, interference from atmospheric thermal radiation, small changes in air velocity, and crushing impurities attached to the spacecraft after impact affect the infrared thermal imaging results, introducing noise of varying degrees.

(3) Solving the problem of blurred edges in infrared images of spacecraft impact damage: the infrared image reflects the difference in radiation between impact damage and the material background, but this radiation undulation has no clear break and tends to be a smooth transition, leaving no clear contours between the various regions of the impact damage infrared image.

(4) Overcoming the problem of low contrast in infrared images of spacecraft impact damage: when the impact damage region is excited, the heat exchange between the damage region and the material background, together with the influence of the detection environment on the thermal radiation, makes the contrast between the damage region and the background low.

All these challenges pose considerable difficulties for segmenting reconstructed thermal images with complex defects. Therefore, this chapter proposes a multi-objective-optimization-based segmentation method for reconstructed thermal images with complex defects, in which multiple optimization objective functions are considered to meet the extraction performance requirements of complex defects.


To account for the large computational cost associated with multi-objective iterations, this chapter also designs a two-layer multi-objective optimization model for thermal damage image segmentation, achieving a trade-off that ensures segmentation quality while minimizing computational cost.

5.3 Complex Object-Oriented Infrared Image Segmentation Objectives

Due to the complexity of space environmental effects, spacecraft often exhibit multiple coupled damage modes on the surface and subsurface, which poses a new challenge to defect segmentation. Traditional image segmentation often optimizes a single objective function, which may lose the features of subtle damage. Therefore, a variety of damage targets must be considered for impact defect feature segmentation.

The Fuzzy C-Means (FCM) clustering algorithm [13] is a typical soft clustering algorithm. It divides the data set into $T$ classes, assigns each datum a membership degree in each class, and labels the data according to the division (membership) matrix, thus completing the clustering. Compared to hard clustering, FCM does not require an "either/or" relationship between the elements of the clusters, which makes it more suitable for image pixel clustering problems. For a reconstructed thermal image $G = \{g_1, \ldots, g_{P \times Q}\}$ of size $P \times Q$ to be segmented, the FCM algorithm defines the following objective function:

$$O_{FCM} = \sum_{m=1}^{PQ} \sum_{t=1}^{T} \lambda_{tm}^{\alpha}\, r^2(g_m, c_t), \quad (5.1)$$

meanwhile, the constraints are

$$\begin{cases} \lambda_{tm} \in [0, 1], & \forall m \in \{1, \ldots, PQ\},\ t \in \{1, \ldots, T\}, \\ \sum_{t=1}^{T} \lambda_{tm} = 1, & \forall m \in \{1, \ldots, PQ\}, \end{cases} \quad (5.2)$$

here, $\alpha \in (1, \infty)$ is the smoothing parameter and $r(g_m, c_t)$ is the pixel-to-pixel similarity metric, generally the inter-pixel distance. $\lambda_{tm}$ is the membership degree of the $m$th pixel $g_m$ to the $t$th clustering centre $c_t$ $(t = 1, \ldots, T)$. The above optimization problem is solved using the Lagrange multiplier method to obtain the update formulas for the membership degrees and clustering centres as follows:


$$\lambda_{tm} = \frac{1}{\sum_{i=1}^{T} \left[ \dfrac{r(g_m, c_t)}{r(g_m, c_i)} \right]^{\frac{2}{\alpha - 1}}}, \quad (5.3)$$

$$c_t = \frac{\sum_{m=1}^{PQ} \lambda_{tm}^{\alpha}\, g_m}{\sum_{m=1}^{PQ} \lambda_{tm}^{\alpha}}.$$

Through the above iteration formulas, the membership degrees and clustering centres are updated repeatedly so that the objective function value $O_{FCM}$ is driven towards its minimum. Finally, pixels are classified based on the final membership matrix to achieve image segmentation. However, the traditional FCM algorithm is sensitive to noise and cannot address the challenges of complex damage detection mentioned above. To solve these problems, the FCM algorithm is improved in this chapter to meet the needs of reconstructed thermal image segmentation.
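A compact implementation of this baseline FCM loop (Eqs. (5.1)–(5.3)), against which the improved objectives of the following subsections can be compared, might look as follows; the vectorization details and the convergence tolerance are implementation choices of this sketch, not prescriptions from the book.

```python
import numpy as np

def fcm(pixels, T=3, alpha=2.0, iters=100, tol=1e-5, seed=0):
    """Classical FCM on a flattened gray-level image (Eqs. (5.1)-(5.3)).

    pixels: 1-D array of PQ gray values; T: number of clusters;
    alpha: fuzzifier; returns (membership matrix, cluster centres).
    """
    rng = np.random.default_rng(seed)
    centres = rng.choice(pixels.astype(float), size=T, replace=False)
    for _ in range(iters):
        # Squared distance plays the role of r^2(g_m, c_t)
        d2 = (pixels[None, :] - centres[:, None]) ** 2 + 1e-12
        # Membership update, Eq. (5.3): note (r_t/r_i)^(2/(a-1))
        # equals (d2_t/d2_i)^(1/(a-1))
        power = 1.0 / (alpha - 1.0)
        ratio = d2[:, None, :] / d2[None, :, :]     # (T, T, PQ)
        lam = 1.0 / np.sum(ratio ** power, axis=1)  # (T, PQ)
        # Centre update: c_t = sum(lam^a * g) / sum(lam^a)
        w = lam ** alpha
        centres_new = (w @ pixels) / w.sum(axis=1)
        shift = np.abs(centres_new - centres).max()
        centres = centres_new
        if shift < tol:
            break
    return lam, centres

# Usage: labels = fcm(img.ravel().astype(float))[0].argmax(axis=0)
```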

5.3.1 Noise-Cancellation Oriented Segmentation Objective for Complex Damage Reconstructed Thermal Images

Infrared sensors inevitably pick up ambient noise when capturing subtle temperature changes on the surface of the specimen. Uncertainty is thus introduced into the infrared thermal imaging link, causing noise erosion of the infrared thermal image. Hypervelocity impact may lead to melting around the impact crater, and noisy data strongly interferes with the detection of such fine damage. To address this problem, this section proposes an improved objective function for the noise cancellation requirement, thereby eliminating the interference of noisy information.

To solve the link-noise interference problem of reconstructed thermal images, the noise cancellation function $f_1(c)$ of the impact-damaged infrared image is constructed by introducing a new weighted fuzzy term $\sum_{n \in U_m^g, n \neq m} H(g_n, c_t)$, based on the similarity between the pixels of the original impact-damaged infrared image. Superimposing this weighted fuzzy term yields a smooth filtered version of the impact-damaged infrared image, which effectively suppresses noise interference [14]. In the weighted fuzzy term, the fuzzy factor $\zeta_{mn}^t$ takes into account not only the spatial distance constraint $\xi_{dc}$ between pixel points in the neighbourhood window, but also the grey scale constraint $\xi_{gc}$ between the pixel points in the neighbourhood window and the central pixel point. The constructed noise cancellation function $f_1(c)$ therefore fully accounts for the neighbourhood information of the impact-damaged image and more flexibly adjusts


the constraining effect of the neighbourhood information on the central pixel, so as to eliminate the influence of noisy pixel points on the image segmentation as much as possible. Adequate consideration of the random noise caused by the infrared imaging system, as well as the noise interference caused by crushing impurities formed by hypervelocity impacts, is essential for noise cancellation of reconstructed thermal images. For the above noise situations, the noise cancellation function $f_1(c)$ of the impact-damaged reconstructed thermal image is given as

$$f_1(c) = \sum_{m=1}^{PQ} \sum_{t=1}^{T} \lambda_{tm}^{\alpha} \left[ H(g_m, c_t) + \sum_{n \in U_m^g, n \neq m} \zeta_{mn}^{t}\, H(g_n, c_t) \right], \quad (5.4)$$

where $U_m^g$ is the $g \times g$ neighborhood window centered on pixel $g_m$, and $H(g_n, c_t)$ is the Gaussian radial basis similarity measure between pixel $g_n$ and clustering center $c_t$. The similarity measure satisfies

$$H(g_n, c_t) = 1 - \exp\left( -\frac{\|g_n - c_t\|^2}{\theta} \right), \quad (5.5)$$

in which $\theta$ is the scale parameter, satisfying

$$\theta = \frac{1}{PQ - 1} \sum_{m=1}^{PQ} \left( dis_m - \frac{1}{PQ} \sum_{m=1}^{PQ} dis_m \right)^2, \qquad dis_m = \left\| g_m - \frac{1}{PQ} \sum_{m=1}^{PQ} g_m \right\|.$$

An important tool for reconstructed thermal image noise cancellation is the superimposed weighted fuzzy term $\sum_{n \in U_m^g, n \neq m} H(g_n, c_t)$, which smooths the image. In constructing this weighted fuzzy term, the spatial information of the pixel points is fully considered, and the fuzzy factor $\zeta_{mn}^t$ flexibly adjusts the constraining effect of the neighbouring pixels on the central pixel so as to suppress noise points in the reconstructed thermal images more soundly. The fuzzy factor is expressed as

$$\zeta_{mn}^{t} = \xi_{dc} \cdot \xi_{gc} \cdot (1 - \lambda_{tn})^{\alpha}, \quad (5.6)$$

. dc

=

1 , rmn + 1

(5.7)

where $r_{mn}$ is the spatial distance between pixel $g_n$ $(n \neq m)$ within the $k \times k$ neighborhood $U_m^g$ and the center pixel $g_m$. Taking $k = 5$ as an example, the values of $r_{mn}$ and $\xi_{dc}$ are shown in Fig. 5.1: the left part of the figure gives the $r_{mn}$ values and the right part gives the $\xi_{dc}$ values. The numerical analysis shows that the closer a pixel point is to the central pixel, the stronger its effect on the central pixel. This is also in line with the actual situation


Fig. 5.1 Spatial distance constraint values under the $k = 5$ neighbourhood window

in the process of infrared image segmentation with impact damage: the closer the pixel, the stronger its effect on the central pixel. In addition, the spatial grayscale constraint $\xi_{gc}$ satisfies

$$\xi_{gc} = \begin{cases} 2 + \varphi_{mn} \Big/ \sum_{i \in U_m^g} \varphi_{mi}, & A_n < \bar{A}, \\[4pt] 2 - \varphi_{mn} \Big/ \sum_{i \in U_m^g} \varphi_{mi}, & A_n \ge \bar{A}, \end{cases} \quad (5.8)$$

where $A_n = \frac{Var(Z_m)}{[Mean(Z_m)]^2}$ denotes the ratio of the variance to the squared mean of the pixel set $Z_m$ located in the $k$-neighborhood of pixel $g_n$, and $\bar{A} = \frac{\sum_{n \in U_m^g} A_n}{k \times k}$. $\varphi_{mn}$ represents the mean squared deviation of the neighbouring point $g_n$ and the central point $g_m$ in the kernel space projection, i.e., $\varphi_{mn} = \exp\left[ -\left( A_n - \bar{A} \right) \right]$, $n \in U_m^g$.

The constant $2$ is used to enhance the suppression effect of the neighbourhood pixel points on the central pixel point. In the fuzzy factor $\zeta_{mn}^t$, both the $\xi_{dc}$ constraint within the neighborhood window and the $\xi_{gc}$ constraint must be considered; together they accurately describe the constraining effect of the neighborhood pixels on the center pixel of the impact-damaged infrared image. To illustrate the advantage of considering the two constraint relations simultaneously, two possible situations in the segmentation process are described. Figure 5.2 shows image patches of size $3 \times 3$. In $r_A$, the central pixel of the neighborhood window is not affected by noise, while noise points exist among the other pixels in the window. In $r_B$, the central pixel is affected by noise, while the other pixels in the window are normal. With neighborhood window size $k = 3$, the figure also shows the spatial distance constraints $\xi_{dc}$. For patch $r_A$, there are two noise points, $210$ and $60$. The difference between noise point $210$ and the central pixel is greater than that between noise point $60$ and the central pixel, but the spatial distances corresponding to the two noise points differ. For patch $r_B$, the central pixel is corrupted by noise, and


Fig. 5.2 Description of the two constraints

other pixels in the neighborhood are normal. However, if only the $\xi_{dc}$ constraint is considered, pixels $102$ and $99$ have different pixel values but the same corresponding constraint value of $0.414$. Hence, when only $\xi_{dc}$ is considered, the constraint strength cannot be distributed according to the gray values of the pixels; the gray values must be considered together in order to make full use of the neighborhood spatial information of the pixels.

In the fuzzy factor $\zeta_{mn}^{t} = \xi_{dc} \cdot \xi_{gc} \cdot (1 - \lambda_{tn})^{\alpha}$, $\lambda_{tn}$ is the affiliation of the neighbourhood pixel point $g_n$ with respect to the cluster centre $c_t$, so the fuzzy factor is controlled jointly by the affiliation $\lambda_{tn}$ and the $\xi_{dc}$ and $\xi_{gc}$ constraints. When $\lambda_{tn}$ is small, both constraints have less influence on the central pixel; conversely, when $\lambda_{tn}$ is larger, both constraints have more influence. In the noise cancellation function for reconstructed thermal images, $\lambda_{tm}$ is the affiliation of pixel $g_m$ to the cluster centre $c_t$, and it is solved using the Lagrange multiplier method. Equation (5.4) needs to satisfy $\sum_{t=1}^{T} \lambda_{tm} = 1$; then one has

$$L_1(\lambda_{tm}, c_t, \mu_m) = \sum_{t=1}^{T} \lambda_{tm}^{\alpha} \left[ H(g_m, c_t) + \sum_{n \in U_m^g, n \neq m} \zeta_{mn}^{t} H(g_n, c_t) \right] + \mu_m \left( \sum_{t=1}^{T} \lambda_{tm} - 1 \right),$$

$$\frac{\partial L_1}{\partial \lambda_{tm}} = \alpha \lambda_{tm}^{\alpha - 1} \left[ H(g_m, c_t) + \sum_{n \in U_m^g, n \neq m} \zeta_{mn}^{t} H(g_n, c_t) \right] + \mu_m = 0. \quad (5.9)$$

Solving the above equations, the affiliation of pixel $g_m$ with respect to the cluster centre $c_t$ is

$$\lambda_{tm} = \frac{1}{\sum_{i=1}^{T} \left[ \dfrac{H(g_m, c_t) + \sum_{n \in U_m^g, n \neq m} \zeta_{mn}^{t} H(g_n, c_t)}{H(g_m, c_i) + \sum_{n \in U_m^g, n \neq m} \zeta_{mn}^{t} H(g_n, c_i)} \right]^{\frac{1}{\alpha - 1}}}. \quad (5.10)$$
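As an illustration of how the pieces of $f_1(c)$ fit together, the following sketch evaluates Eq. (5.4) for a gray image, given memberships, cluster centres and precomputed fuzzy factors; the $(T, H, W, k^2 - 1)$ layout chosen for the fuzzy factors is an assumption of this sketch, not a data structure specified in the book.

```python
import numpy as np

def gauss_similarity(g, c, theta):
    """H(g, c) of Eq. (5.5): Gaussian radial basis similarity."""
    return 1.0 - np.exp(-((g - c) ** 2) / theta)

def f1_objective(img, lam, centres, zeta, theta, alpha=2.0, k=3):
    """Evaluate the noise-cancellation objective of Eq. (5.4).

    img: 2-D gray image; lam: (T, H, W) memberships; centres: (T,);
    zeta: (T, H, W, k*k-1) fuzzy factors, one per neighbour offset.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    total = 0.0
    for t in range(centres.size):
        h_centre = gauss_similarity(img, centres[t], theta)
        # Accumulate the weighted fuzzy term over the k*k-1 neighbours
        h_neigh = np.zeros_like(img, dtype=float)
        idx = 0
        for dy in range(-pad, pad + 1):
            for dx in range(-pad, pad + 1):
                if dy == 0 and dx == 0:
                    continue
                shifted = padded[pad + dy:pad + dy + img.shape[0],
                                 pad + dx:pad + dx + img.shape[1]]
                h_neigh += zeta[t, :, :, idx] * gauss_similarity(
                    shifted, centres[t], theta)
                idx += 1
        total += np.sum(lam[t] ** alpha * (h_centre + h_neigh))
    return total

# Tiny usage with synthetic data
rng = np.random.default_rng(0)
img = rng.random((8, 8))
T, k = 2, 3
lam = np.full((T, 8, 8), 0.5)
zeta = np.full((T, 8, 8, k * k - 1), 0.1)
print(f1_objective(img, lam, np.array([0.2, 0.8]), zeta, theta=0.5, k=k))
```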


5.3.2 Detail-Preserving Oriented Segmentation Objective for Complex Damage Reconstructed Thermal Images

In defects formed by hypervelocity impacts, a complex geomorphology forms within the impact crater, and how well the details of the segmented defect are preserved affects the impact damage assessment. Therefore, the preservation of detail information must be considered during reconstructed thermal image segmentation.

To solve the detail retention problem, the detail retention function $f_2(c)$ is constructed so that the pixels of the reconstructed thermal image are well segmented. This requires the segmented image to have small tightness and large separation, i.e., small dispersion among similar pixels and large separation between dissimilar pixels [15]. Therefore, when constructing $f_2(c)$, the tightness measure $Com(G, c)$ and the separability measure $Sep(G)$ are considered simultaneously, and the ratio of tightness to separability is kept as small as possible, so that the detail information of the segmentation result is fully preserved. The detail retention function $f_2(c)$ of the reconstructed thermal image is then

$$f_2(c) = \frac{Com(G, c)}{Sep(G)}, \quad (5.11)$$

where $Com(G, c)$ is the tightness measure function: the smaller the tightness measure, the smaller the difference between pixels in the same class, i.e., the smaller the dispersion. $Sep(G)$ is the separability measure function: the larger the separability measure, the larger the difference between pixels in different classes, i.e., the greater the separation between classes. Since the detail retention function is defined as the ratio of the tightness measure to the separability measure, the minimum value of $f_2(c)$ corresponds to the best degree of detail preservation.

The spatial information between pixels of reconstructed thermal images can guide pixel segmentation. The spatial information of a pixel includes the neighborhood information of the pixel to be segmented, and the information of pixels that have a similar neighborhood structure to it. The former is defined as local spatial information $\gamma_m$ (L-Info) and the latter as non-local spatial information $\psi_m$ (NL-Info); the two are combined to measure the tightness of the segmentation. $Com(G, c)$ can thereby compensate for the limitation of a single type of spatial information, the two types complementing each other to capture more accurate detail information of the impact-damaged reconstructed thermal image. The inter-pixel tightness metric function $Com(G, c)$ of the segmentation result is

$$Com(G, c) = \sum_{t=1}^{T} \frac{\sum_{m=1}^{PQ} \lambda_{tm}^{\alpha} \left( \|g_m - c_t\|^2 + \omega_1 \|\gamma_m - c_t\|^2 + \omega_2 \|\psi_m - c_t\|^2 \right)}{\sum_{m=1}^{PQ} \lambda_{tm}}, \quad (5.12)$$

where $c_t$ is the cluster center and $\lambda_{tm}$ is the affiliation of the $m$th pixel to the $t$th cluster center. Here we take

$$\lambda_{tm} = \frac{1}{\sum_{i=1}^{T} \left( \dfrac{\|g_m - c_t\|^2 + \omega_1 \|\gamma_m - c_t\|^2 + \omega_2 \|\psi_m - c_t\|^2}{\|g_m - c_i\|^2 + \omega_1 \|\gamma_m - c_i\|^2 + \omega_2 \|\psi_m - c_i\|^2} \right)^{\frac{1}{\alpha - 1}}},$$

$\omega_1$ and $\omega_2$ are the weighting factors controlling L-Info and NL-Info, and $\gamma_m$ and $\psi_m$ denote the local spatial information L-Info and the non-local spatial information NL-Info, respectively. L-Info satisfies

$$\gamma_m = \frac{1}{|Q_m|} \sum_{x \in Q_m} g_x, \quad (5.13)$$

where $Q_m$ is the set of pixels in the neighborhood window centered at the $m$th pixel, and $g_x$ is the $x$th pixel in $Q_m$. NL-Info satisfies

$$\psi_m = \sum_{n \in V_m^i} \sigma_{mn}\, g_n, \quad (5.14)$$

among them, $V_m^i$ denotes the search window of size $i \times i$ centered on the pixel point $g_m$, and $\sigma_{mn}$ $(n \in V_m^i)$ denotes the similarity measure weight between pixel $g_m$ and pixel $g_n$, satisfying $0 \le \sigma_{mn} \le 1$ and $\sum_{n \in V_m^i} \sigma_{mn} = 1$. The weight $\sigma_{mn}$ is defined as

$$\sigma_{mn} = \frac{1}{\chi_m} \exp\left( -\frac{\left\| \Gamma(Q_m^v) - \Gamma(Q_n^v) \right\|_{2,\omega}^2}{e^2} \right), \quad (5.15)$$

where $\Gamma(Q_m^v)$ is the grayscale vector obtained by vectorizing the pixels in the similarity search window of size $v \times v$ centered on pixel $g_m$, and $e$ is the filtering degree parameter controlling the attenuation of $\sigma_{mn}$. The similarity measure between pixels $g_m$ and $g_n$ is $\left\| \Gamma(Q_m^v) - \Gamma(Q_n^v) \right\|_{2,\omega}^2$, a Gaussian-weighted Euclidean distance with $\omega$ the standard variance of the Gaussian function, while $\chi_m$ achieves normalization: $\chi_m = \sum_{n \in V_m^i} \exp\left( -\frac{\| \Gamma(Q_m^v) - \Gamma(Q_n^v) \|_{2,\omega}^2}{e^2} \right)$.

When classifying the pixels of reconstructed thermal images, the pixels of each damage category always revolve around their affiliation with the cluster center $c_t$. Therefore, in defining the pixel separability measure function $Sep(G)$, the degree of separation between classes is reflected by


calculating the separation of the cluster centers of each segmented class, which is defined by

$$Sep(G) = \sum_{i=1}^{s} \sum_{j=1}^{s} \lambda_{ij}^{\alpha} \left\| c_j - c_i \right\|, \quad (5.16)$$

herein, $\lambda_{ij}$ is the affiliation of the clustering center $c_i$ to $c_j$, and

$$\lambda_{ij} = \frac{1}{\sum_{l=1, l \neq j}^{T} \left( \dfrac{\|c_j - c_i\|}{\|c_j - c_l\|} \right)^{\frac{1}{\alpha - 1}}}, \quad i \neq j. \quad (5.17)$$
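The non-local spatial information $\psi_m$ of Eqs. (5.14)–(5.15) can be computed with a brute-force double loop as sketched below. For brevity, this sketch uses a plain Euclidean patch distance rather than the Gaussian-weighted norm $\|\cdot\|_{2,\omega}$, so it approximates the definition above; the window sizes and filtering parameter are illustrative.

```python
import numpy as np

def nl_info(img, search=5, patch=3, e=30.0):
    """Non-local spatial information psi_m, Eqs. (5.14)-(5.15), sketch.

    For each pixel, patches inside an i x i search window are compared
    with the central patch; exp(-d2/e^2) weights are normalized and used
    to average the window's pixel values.
    """
    sp, pp = search // 2, patch // 2
    pad = sp + pp
    padded = np.pad(img.astype(float), pad, mode="reflect")
    H, W = img.shape
    psi = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y + pad, x + pad
            centre = padded[cy - pp:cy + pp + 1, cx - pp:cx + pp + 1]
            weights, values = [], []
            for dy in range(-sp, sp + 1):
                for dx in range(-sp, sp + 1):
                    ny, nx = cy + dy, cx + dx
                    cand = padded[ny - pp:ny + pp + 1,
                                  nx - pp:nx + pp + 1]
                    d2 = np.sum((centre - cand) ** 2)
                    weights.append(np.exp(-d2 / e ** 2))
                    values.append(padded[ny, nx])
            w = np.array(weights)
            psi[y, x] = np.dot(w / w.sum(), values)  # Eq. (5.14)
    return psi

# Usage (slow O(H*W*i^2*v^2) reference implementation):
psi = nl_info(np.arange(64.0).reshape(8, 8))
```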

5.3.3 Edge-Retention Oriented Segmentation Objective for Complex Damage Reconstructed Thermal Images

As a major feature of defect damage, the edge segmentation quality of the defect area needs attention during image segmentation: the segmented defect edge features are important basic information in the defect size quantification process. In the noise cancellation function $f_1(c)$ for the impact-damaged reconstructed thermal image, the infrared image noise generated by hypervelocity impact can be eliminated by superimposing a weighted fuzzy factor, but this superposition, which amounts to a smoothing process, may also blur the contours of the damage region. At the same time, because the infrared image reflects the radiation difference between impact damage and material background, and this radiation undulation has no sharp break but tends to be a smooth transition, the impact damage image has no very distinct edge contours between its various regions. Designing an objective function that maintains the damage edges during reconstructed thermal image segmentation therefore helps to separate the damage regions from the material background.

In the single-objective edge-preserving function $f_3(c)$ for reconstructed thermal images, the damage edges must first be located in the impact-damaged image. A method based on the pixel coefficient of variation is used in this book to determine the edge pixels [16]. The variation coefficient $CV$ of pixel $g_m$ is

$$CV_m = \frac{Var\left( G_m^k \right)}{mean\left( G_m^k \right)}. \quad (5.18)$$

From Eq. (5.18), the pixel coefficient of variation is the ratio of the pixel variance to the mean value within a neighborhood window $G_m^k$ of size $k \times k$ centered at pixel $g_m$, and the degree of homogeneity of the pixels within the neighborhood window can


be described by the value of the coefficient of variation $CV$: when $CV$ is large, homogeneity is weak and the pixel is more likely to be a damage edge pixel. The original impact damage infrared image contains noisy pixels, and computing the coefficient of variation directly on it to determine the damage edge pixels would lead to a high false positive rate. To avoid the influence of noisy pixels as far as possible while still locating the damage edge pixels, this chapter processes the reconstructed thermal images in the following three steps.

Step 1: Acquire the nonlinear sum infrared image $G^{sum}$ from the NL-Info of the original impact-damaged reconstructed thermal image $G$, to eliminate the effect of noise points on edge information acquisition. For pixel $g_m$, the nonlinear sum value $G_m^{sum}$ is calculated with the help of the similarity metric weights $\sigma_{mn}$ $(n \in V_m^i)$ between the pixels in the similarity search window and the central pixel $g_m$:

$$G_m^{sum} = \frac{\sum_{n \in V_m^i} \sigma_{mn}\, g_n}{\sum_{n \in V_m^i} \sigma_{mn}}. \quad (5.19)$$

Step 2: Calculate the pixel coefficient of variation $CV$ of the original reconstructed thermal image $G$, quantify the coefficient of variation values using histogram statistics, determine the coefficient of variation threshold, and obtain the initial edge pixel points $Edge(G^{sum})$ of the sum infrared image $G^{sum}$.

Step 3: Count the orientation information of the initial edge pixel points of $G^{sum}$, locate the edge pixels, and determine the gray values and positions of the damage edges in the original image; then replace the pixels at the corresponding positions of the sum infrared image $G^{sum}$ with edge pixels, obtaining the edge-improved sum infrared image $\eta$. The main processing procedures of this step are shown in Fig. 5.3.

The orientation of a damaged edge pixel is determined as follows: let $Edge(G^{sum})_0$ be an initial edge pixel point determined in the second step. Calculate the mean gray values of the $3 \times 3$ neighborhoods of this pixel and its eight neighboring pixels, labeled $m_0, m_1, m_2, m_3, m_4, m_5, m_6, m_7, m_8$; label the horizontal, left diagonal, vertical and right diagonal directions $d^1, d^2, d^3, d^4$; and make the following calculation to determine the damage edge pixel orientation:

$$d = \max\left( |m_1 - m_5|,\ |m_2 - m_6|,\ |m_3 - m_7|,\ |m_4 - m_8| \right). \quad (5.20)$$

In addition, the gray value of the damaged edge pixel is determined as follows: if the direction $d = d^3$, i.e., vertical, then the gray value of the central edge pixel is determined by the smaller of $|m_1 - m_0|$ and $|m_0 - m_5|$. For example, if $|m_0 - m_5|$ is smaller, the grayscale value of the damaged edge pixel is the mean of $m_4, m_5, m_6$. After obtaining the edge-improved sum infrared image $\eta$, the edge-retention function $f_3(c)$ of the reconstructed thermal image can be defined as follows:


Fig. 5.3 Directional statistics of damaged edge pixels

$$f_3(c) = \sum_{t=1}^{T} \sum_{q=1}^{Y_q} \delta_q\, \lambda_{tq}^{\alpha} \left( \eta_q - c_t \right)^2, \quad (5.21)$$

where $\lambda_{tq}$ denotes the affiliation of the pixel points with gray level $q$ to the cluster center, and $Y_q$ is the number of gray levels of the impact damage IR image (usually, $Y_q$ …)

… if $f_j^* > f_j(o')$, then $f_j^* = f_j(o')$, $j = 1, 2, 3$.

Step 3.6: Update the neighborhood solution: according to the Tchebycheff expression, if $h_t\left( f(o') \,\middle|\, \rho^i \right) \le h_t\left( f(v^a) \,\middle|\, \rho^a \right)$, $a \in A(i)$, then $v^a = o'$ and $FC^a = F(o')$.

Step 3.7: Renew the EP: remove all vectors dominated by $F(o')$; if $F(o')$ is not dominated by any individual inside the EP, add $o'$ to the EP.

5.6 Effective Area Incremental Learning and PDM Based Weight Vector Adjustment Method

The essence of population optimization and weight vector optimization for non-regular Pareto fronts is the conflict between uniform weight vectors and non-uniform Pareto fronts. The main problems are: (1) not every uniform weight vector has an intersection with the PF, and (2) the intersections that do exist are not necessarily uniform samples of the PF. These two points are the essential reasons for the decreasing diversity


of non-dominated solution sets and for the waste of resources. Therefore, if a method can be found that identifies the weight vectors intersecting the real PF and makes these intersection points uniform, it will solve this series of problems for non-uniform front surfaces.

5.6.1 Effective Area and Active Vectors

5.6.1.1 Effective Area

Due to the presence of irregular PFs, a discontinuous Pareto front surface may be covered by uniformly generated continuous weight vectors. The uniform weight generation strategy can then cause individuals to overlap at some points on the true PF, reducing diversity, as shown in Fig. 5.7. Because of the irregular shape of the Pareto front surface, the uniform weight vectors do not completely cover the PF. The effective region is defined as the central projection region of the true PF on the unit simplex, i.e., the red region in Fig. 5.7.

5.6.1.2 Active Vector

Due to PF discontinuities, missing segments, etc., weight vectors located in non-effective regions may find the same frontier individual. Therefore, through non-dominated sorting, we search for the boundary (frontier) individuals and assign each to its nearest

Fig. 5.7 Effective area


weight vector. The weight vectors that have frontier individuals are considered as active vectors.

5.6.2 PDM and Population Evolution

5.6.2.1 Proximity and Diversity Metrics

The proximity and diversity metric (PDM) in [18] is used to measure the boundary individuals attached to the same reference vector:

$$PDM(\lambda, \beta) = PM(\lambda) + DM(\lambda, \beta) = mean(\lambda) + \theta \|\lambda\|_2 \sin(\lambda, \beta) = \frac{\lambda^T \mathbf{1}}{L} + \theta\, \frac{\sqrt{\|\lambda\|_2^2 \|\beta\|_2^2 - \left( \lambda^T \beta \right)^2}}{\|\beta\|_2}, \quad (5.34)$$

where $\lambda$ is the boundary individual, $\mathbf{1}$ is the all-ones vector, $\beta$ is the weight vector closest to $\lambda$, and $\theta$ is the coefficient balancing proximity and diversity, chosen to be $5$, similar to PBI. Note that the actual calculation is not as complicated as the formula suggests, since $\sin(\lambda, \beta)$ is precomputed when attaching the boundaries to the reference vectors. $PM(\lambda)$ measures the convergence of each individual; it is defined as the average of the objective function over all dimensions, and the smaller $PM(\lambda)$ is, the closer the individual is to the PF. $DM(\lambda, \beta)$ denotes the distance from the boundary individual $\lambda$ to the nearest reference vector $\beta$ and represents the distribution error between them. The smaller $DM(\lambda, \beta)$ is, the closer the boundary individual $\lambda$ is to the reference vector $\beta$. Thus, with appropriately distributed reference vectors, $DM(\lambda, \beta)$ guides the evolution of individuals toward better diversity.
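A direct transcription of Eq. (5.34) into code is short, since $\|\lambda\|_2 \sin(\lambda, \beta)$ reduces to $\sqrt{\|\lambda\|_2^2 \|\beta\|_2^2 - (\lambda^T \beta)^2} / \|\beta\|_2$; the example vectors below are made up for illustration.

```python
import numpy as np

def pdm(lam, beta, theta=5.0):
    """Proximity-and-diversity metric of Eq. (5.34).

    lam:  objective vector of a boundary individual (length L)
    beta: its nearest reference (weight) vector
    """
    L = lam.size
    pm = lam.sum() / L                      # PM: mean objective value
    cross = (np.dot(lam, lam) * np.dot(beta, beta)
             - np.dot(lam, beta) ** 2)
    dm = theta * np.sqrt(max(cross, 0.0)) / np.linalg.norm(beta)
    return pm + dm

# Example: score two candidates attached to the same reference vector
ref = np.array([0.5, 0.3, 0.2])
print(pdm(np.array([0.2, 0.3, 0.1]), ref),
      pdm(np.array([0.4, 0.1, 0.4]), ref))
```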

5.6.2.2 Population Evolution

The evolutionary operator is the basis of population evolution. Starting from any population, random selection, crossover and mutation operations generate a group of individuals better suited to the environment, so that the population evolves to better and better regions of the search space; reproducing and evolving generation after generation, it finally converges to a group of individuals best adapted to the environment, thus finding a quality solution to the problem. Crossover and mutation [21] are the main ways in which populations adapt to their environment. The specific evolutionary operators are as follows.


(a) For each individual $m$ in the $e$th generation population, two parent individuals $pop^e[m_1]$, $pop^e[m_2]$, $m_1, m_2 \in Ne(m)$ are selected, where $Ne(m)$ denotes the neighborhood of individual $m$. A new representative point scheme is generated using the simulated binary crossover operator, that is,

$$\begin{cases} \overrightarrow{pop}_r^{\,e+1}[m_{new1}] = \frac{1}{2} \left[ (1 + \sigma) \cdot \overrightarrow{pop}_r^{\,e}[m_1] + (1 - \sigma) \cdot \overrightarrow{pop}_r^{\,e}[m_2] \right], \\[4pt] \overrightarrow{pop}_r^{\,e+1}[m_{new2}] = \frac{1}{2} \left[ (1 - \sigma) \cdot \overrightarrow{pop}_r^{\,e}[m_1] + (1 + \sigma) \cdot \overrightarrow{pop}_r^{\,e}[m_2] \right], \end{cases} \quad (5.35)$$

here, $\overrightarrow{pop}_r^{\,e}$ denotes the $r$th dimension of an individual in the $e$th generation, and

$$\sigma = \begin{cases} \left( rand() \times 2 \right)^{\frac{1}{1+\chi}}, & rand() \le 0.5, \\[4pt] \left( \dfrac{1}{2 - rand() \times 2} \right)^{\frac{1}{1+\chi}}, & otherwise, \end{cases} \quad (5.36)$$

where $\chi$ is the crossover factor that determines the extent to which the offspring representative points approximate the distribution of the parents'. One child is chosen as the associated child individual $pop^{e+1}[m_{new}]$.

(b) Impose polynomial mutation on the offspring individual $pop^{e+1}[m_{new}]$:

$$\overrightarrow{pop}_r^{\,e+1}[m_{new}] = \overrightarrow{pop}_r^{\,e+1}[m_{new}] + \Delta_r, \qquad \Delta_r = \begin{cases} \left( 2 \times rand() \right)^{\frac{1}{\chi+1}} - 1, & rand() < 0.5, \\[4pt] 1 - \left( 2 \times (1 - rand()) \right)^{\frac{1}{\chi+1}}, & else. \end{cases} \quad (5.37)$$
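Equations (5.35)–(5.37) correspond to the textbook simulated binary crossover and polynomial mutation operators; a vectorized sketch is given below, where the per-gene mutation rate and the crossover factor value are assumptions not specified in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def sbx(p1, p2, chi=20.0):
    """Simulated binary crossover, Eqs. (5.35)-(5.36), per dimension."""
    u = rng.random(p1.size)
    sigma = np.where(u <= 0.5,
                     (2.0 * u) ** (1.0 / (1.0 + chi)),
                     (1.0 / (2.0 - 2.0 * u)) ** (1.0 / (1.0 + chi)))
    c1 = 0.5 * ((1 + sigma) * p1 + (1 - sigma) * p2)
    c2 = 0.5 * ((1 - sigma) * p1 + (1 + sigma) * p2)
    return c1, c2

def poly_mutation(child, chi=20.0, rate=0.1):
    """Polynomial mutation, Eq. (5.37), applied with probability `rate`."""
    u = rng.random(child.size)
    delta = np.where(u < 0.5,
                     (2.0 * u) ** (1.0 / (chi + 1.0)) - 1.0,
                     1.0 - (2.0 * (1.0 - u)) ** (1.0 / (chi + 1.0)))
    mask = rng.random(child.size) < rate
    return child + mask * delta

parent1, parent2 = rng.random(5), rng.random(5)
child = poly_mutation(sbx(parent1, parent2)[0])
```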

5.6.3 Cascade Clustering Based Dominant Solution Selection

Widely distributed individuals benefit not only population diversity but also the updating and adjustment of the weight vectors; an overly concentrated distribution may cause populations to overlap in a certain region. The cascade-clustering-based selection operator divides the selection of non-dominated and dominated individuals into two cascaded clustering stages with round-robin selection. Non-dominated individuals are guided by the reference vectors, while dominated individuals are guided by the elites among the non-dominated individuals. Thus, individuals are evenly distributed across the entire range of the current PF while sufficient evolutionary pressure is maintained. The selection of superior solutions is accomplished by cascade clustering of frontier individuals $F$ and non-frontier individuals $I$, as shown in Fig. 5.8. The details of the process are as follows.

(1) Frontier individual identification: instead of fully running non-dominated sorting (NDSort) [19], a frontier individual identification mechanism is used, which


Fig. 5.8 Cascade Clustering (CC)

terminates NDSort as soon as the first level of the non-dominated frontier is identified, so that only the first-level frontier is selected.

(2) Two-level clustering and intra-class ranking.

(2.1) Activity vector establishment: each frontier individual is attached to its nearest reference vector by calculating the sine of the angle between the frontier individual and each reference vector. A reference vector with frontier individuals attached is considered an activity vector. The frontier individuals attached to the same reference vector are sorted in ascending order using the proximity and diversity metric (PDM).


(2.2) The frontier individuals attached to the same reference vector are grouped into clusters, and the frontier individual with the best PDM serves as the center of the corresponding cluster.

(2.3) Each non-frontier individual is assigned to the cluster of its nearest cluster center. Within each cluster, the non-frontier individuals are sorted in ascending order by their Euclidean distances to the cluster center; in this way, the nearest frontier individuals guide the evolution of the non-frontier individuals. Under the dual guidance of frontier individuals and cluster centers, non-frontier individuals are under strong pressure to converge and diversify. After cascade clustering, two intra-class sorted queues are created for each cluster: the sorted frontier queue and the sorted non-frontier queue.

(3) Round-robin selection: to distribute the selected individuals evenly around the current PF, ensure the inheritance of desirable boundary individuals and maintain diversity, a round-robin selection method is used. For each cluster, a selection queue is created in advance by appending the sorted non-frontier queue to the sorted frontier queue. In each round, the head of each selection queue is popped and added to the next generation until the size of the next generation reaches $N$. Note that unless the number of clusters exceeds $N$ or a cluster center is bettered by another frontier individual, all cluster centers remain. In addition, the selected non-frontier individuals are the closest to the cluster centers, meaning they are expected to have the best proximity and diversity.

The essence of cascade clustering is that similar numbers of individuals are selected from clusters uniformly distributed on the current PF. The selection ensures that the selected high-quality individuals (frontier or non-frontier) are uniformly distributed near the current PF, yielding populations with good proximity and diversity.

5.6.4 Incremental SVM-Based Weight Vector Learning and Tuning

During population evolution, the weight vectors that have no intersection with the true PF play little role. As the population individuals move closer to the true PF, weight vectors that lie too far from it are hard to activate, because they are too distant to be attached to any boundary individual. In this case, fewer than $N$ reference vectors guide the evolution, causing some weight vectors to carry multiple overlapping individuals while a large number of weight vectors remain unactivated and ineffective. Adjustment of the weight vectors is therefore necessary, as illustrated in Fig. 5.9. The truly instructive weight vectors are those located in the projection region of the true PF, i.e., in the effective region. If a more uniform distribution of weight vectors is created inside the effective region while the outer regions are thinned out, more weight vectors will cross the true PF, reducing needless


Fig. 5.9 Incremental learning based weight vector adjustment

consumption and maintaining efficiency. However, there is no a priori knowledge with which to estimate the true PF, and a generalized model is lacking. Therefore, this section uses the distribution of the activity vectors to estimate the projected effective area of the PF on the simplex, and generates a higher density of weight vectors in the corresponding area. As shown in Fig. 5.9, the distribution of activity vectors is used to train a classifier that identifies valid and invalid weight vectors, estimating the effective region on the unit simplex. A point scores high if it lies within the region of the positive training samples, and vice versa. When higher-density reference points are generated, the trained classifier evaluates their validity by scoring them.

Only when the current PF is close enough to the true PF and the individuals are stably distributed among the existing clusters is the corresponding activity vector a valid training sample. Moreover, even though the vast majority of the weight vectors may have been active at one time or another, for problems with discontinuous PFs the number of continuously active weight vectors remains below the population size as the population gradually reaches the true PF. For this reason, this section employs a state-sampler-based solution that compares the historical activity of the weight vectors with their current activity. The state sampler reports a "stable" state only if the activity of all reference vectors has not changed over successive generations. In this state, we consider the current PF to be stably distributed around the active reference vectors, with no significant reduction or expansion of the current PF range observed. This stability is taken as an indicator of appropriate learning timing.

After steady-state sampling, a classifier is trained as described above. After training, a denser set of weight vectors on the unit simplex is generated using the same method as NSGA-III [20] and scored by the trained classifier. Then,


potentially valid weight vectors with scores greater than a threshold $\delta$ are retained. Since $\delta$ should not be a fixed constant, we set $\delta$ so as to keep the $n = 2N$ best-scoring weight vectors.

Usually, the effective area is smaller than the whole unit simplex, so one-time learning of the reference points does not guarantee accurate identification. As the density of the reference points increases, the distribution of the samples reflects the boundaries of the effective area more clearly. In addition, increasing the generation density only once may not produce enough active reference vectors. These observations suggest the need for multiple runs of the sampling-and-learning procedure. Simply storing all the samples and retraining the classifier on all the data each time would lead to a dramatic increase in the demand for computational resources. In this book, we therefore use an incremental learning strategy that trains the model on a priori knowledge [22]: in each learning iteration, the training samples retained near the edge samples of the previous iteration participate in the new iteration together with the new training samples, and the prior knowledge is learned iteratively. This method improves the accuracy of boundary recognition and increases training speed when new batches of data need to be learned.
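The book's incremental SVM strategy [22] is not spelled out in code; the sketch below approximates it with scikit-learn by carrying forward only the support vectors — the samples near the previous decision boundary — into the next training round. The synthetic "effective region", the population size $N$ and all sampling details are assumptions of this sketch.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N = 100                                    # population size (assumed)

def incremental_fit(model, kept_X, kept_y, new_X, new_y):
    """One incremental round: refit on the retained boundary samples
    (support vectors of the previous round) plus the new batch."""
    X = np.vstack([kept_X, new_X])
    y = np.concatenate([kept_y, new_y])
    model.fit(X, y)
    sv = model.support_                    # boundary-defining samples
    return model, X[sv], y[sv]

def sample_simplex(n):
    """Random weight vectors on the 3-D unit simplex."""
    w = rng.random((n, 3))
    return w / w.sum(axis=1, keepdims=True)

# Round 1: label weight vectors active (1) / inactive (0); here a
# made-up effective region (first component < 0.5) stands in for the
# activity information collected during evolution
W0 = sample_simplex(200)
y0 = (W0[:, 0] < 0.5).astype(int)
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(W0, y0)
kept_X, kept_y = W0[clf.support_], y0[clf.support_]

# Later round: new samples join the retained edge samples
W1 = sample_simplex(400)
clf, kept_X, kept_y = incremental_fit(clf, kept_X, kept_y,
                                      W1, (W1[:, 0] < 0.5).astype(int))

# Score a denser candidate set and keep the n = 2N best
W_dense = sample_simplex(2000)
scores = clf.decision_function(W_dense)
keep = W_dense[np.argsort(scores)[-2 * N:]]
```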

5.7 Experimental Results and Analysis

5.7.1 Crowding Degree Based Adaptive Weight Vector Adjustment

This section performs image segmentation on the reconstructed thermal image $G_1$ shown in Fig. 5.10, running the multi-objective evolutionary optimization algorithm on $G_1$. The experimental parameters are set as follows: the fuzzy

Fig. 5.10 Infrared reconstructed image to be segmented $G_1$


Fig. 5.11 Pareto Front

index is $\alpha = 2$ in each FCM segmentation function. The size of the neighborhood window in $f_1(c)$ is $k_1 = 3$. The parameters $i$ and $e$ for the non-local spatial information in $f_2(c)$ are set to 5 and 30, respectively, while $\omega_1$ and $\omega_2$ are set to 16 and 1. The size of the neighborhood window for computing the coefficient of variation in $f_3(c)$ is $k_3 = 3$, with $i = 5$ for the non-local spatial information.

In the multi-objective parameter setting for each of the three damage categories, the population size is $N = 100$ with weight vectors $\rho^1, \ldots, \rho^{100}$, the neighborhood size is $W = N \times 20\% = 20$, and the maximum number of evolutionary iterations is $t_{max} = 50$. The weight-vector adjustment interval is $u = 6$ and the outlier threshold parameter is $\theta = 1.5$.

As shown in Fig. 5.11, due to the complexity of hypervelocity impact, the Pareto front is discontinuous. If the weight vectors are not adjusted in due time, the distribution and uniformity of the obtained optimal solution set suffer, and a large number of subproblems cannot find Pareto solutions, wasting resources. The weight vector adjustment strategy in this book adjusts and reinitializes the weight vectors of individuals with large sparsity every $u = 6$ generations. Thus, although the PF is discontinuous, the obtained optimal population is dispersed more uniformly over each PF segment.

The weight vector adjustment process the first time the average Manhattan distance measurement is performed, i.e., at iteration $t = 6$, is shown in Fig. 5.12. In the adjustment stage, the neighborhood for calculating the average Manhattan distance is set to 12. The box plot obtained after calculating the average Manhattan distance for all individuals in this iteration is shown in Fig. 5.12. The overall


Fig. 5.12 Weight vector adjustment process

average Manhattan distance of the current population is 0.1984. With $\theta = 1.5$, a total of 4 outliers lie outside $[Q_L - \theta \cdot IQR,\ Q_U + \theta \cdot IQR]$. For these individuals, the corresponding weight vectors are adjusted to generate new weight vectors. Based on the formula $K = K_a + rand \cdot (K_b - K_c)$, newly generated individuals are obtained and replace the original ones, and the iterative optimization then continues as before. The information of some individuals with adjusted weight vectors is shown in Table 5.1. The population information in the table may derive from different iterations, with the leading subscript giving the iteration it was taken from. In the table, a Manhattan distance of 0 means that there are overlapping individuals in the neighborhood.

A compromise solution $c_m = c_{92}$ is selected from the obtained Pareto optimal solution set; its corresponding function value is $FC^{92} = \left( 1.9396 \times 10^4,\ 3.4703 \times 10^4,\ 4.9718 \times 10^4 \right)$, and its corresponding weight vector is $\beta^q = \beta^{92} = [0.6923;\ 0.0769;\ 0.2307]$.
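The crowding-degree adjustment described above — box-plot outlier detection on the average Manhattan distances followed by regeneration via $K = K_a + rand \cdot (K_b - K_c)$ — can be sketched as follows. How $K_a$, $K_b$, $K_c$ are chosen (here: random non-outlier vectors) and the renormalization onto the unit simplex are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def adjust_weights(weights, avg_manhattan, theta=1.5):
    """Reinitialize the weight vectors of box-plot outliers using
    K = K_a + rand * (K_b - K_c) over three other weight vectors."""
    ql, qu = np.percentile(avg_manhattan, [25, 75])
    iqr = qu - ql
    outliers = ((avg_manhattan < ql - theta * iqr) |
                (avg_manhattan > qu + theta * iqr))
    normal = np.flatnonzero(~outliers)
    new_w = weights.copy()
    for idx in np.flatnonzero(outliers):
        a, b, c = rng.choice(normal, size=3, replace=False)
        k = weights[a] + rng.random() * (weights[b] - weights[c])
        k = np.clip(k, 0.0, None)
        new_w[idx] = k / k.sum()           # keep it on the unit simplex
    return new_w

# Demo: 100 simplex weight vectors with synthetic crowding distances
W = rng.random((100, 3))
W /= W.sum(axis=1, keepdims=True)
d = rng.normal(0.2, 0.03, 100)
W_adjusted = adjust_weights(W, d)
```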


Table 5.1 Part of the weight vector adjustment process

Threshold parameters: $Q_L$ = 0.1122; $Q_U$ = 0.2597; MaxValue = 0.4809; MinValue = −0.1089

Individual | Neighborhood Manhattan distances | Average | Before adjustment | After adjustment
$_{6}c_{39}$ | 0.5894, 0.5894, ⋯, 0.5861, 0.5829, 0.5772, ⋯, 0.5000 | 0.5727 | [0.1538; 0.8462; 0] | [0.0255; 0.0058; 0.9686]
$_{6}c_{50}$ | 0.5894, 0.5894, ⋯, 0.5861, 0.5829, 0.5772, ⋯, 0.5000 | 0.5727 | [0.2308; 0.7692; 0] | [0.0255; 0.0058; 0.9686]
$_{6}c_{60}$ | 0.5524, 0.5524, ⋯, 0.5490, 0.5459, 0.5402, ⋯, 0.4630 | 0.5356 | [0.3077; 0.6923; 0] | [0.0250; 0.0100; 0.9650]
$_{6}c_{69}$ | 0.5203, 0.5203, ⋯, 0.5169, 0.5137, 0.5081, ⋯, 0.4309 | 0.5035 | [0.3846; 0.6154; 0] | [0.0247; 0.0149; 0.9604]
$_{12}c_{77}$ | 0.4504, 0.4504, ⋯, 0.4618, 0.4418, 0.4670, ⋯, 0.4232 | 0.4517 | [0.4615; 0.5385; 0] | [0.0248; 0.0213; 0.9539]
$_{12}c_{84}$ | 0.4370, 0.4370, ⋯, 0.4484, 0.4285, 0.4536, ⋯, 0.4098 | 0.4383 | [0.5385; 0.4615; 0] | [0.0248; 0.0263; 0.9489]

(The leading subscript of each individual gives the iteration it was taken from; "Average" is the average neighborhood Manhattan distance.)

Fig. 5.13 The algorithm results in this section

Based on the multi-objective optimization with the weight vector adjustment strategy, the clustering operation and image segmentation are carried out; the result is shown in Fig. 5.13. It can be seen from the segmented image that the image segmentation technology combined with multi-objective optimization can completely detect all defects of the real


Fig. 5.14 FGFCM

Fig. 5.15 FCM_S1

hypervelocity impact, including the back-peeling contour and the surface impact perforation and sputtering perforation defects caused by the hypervelocity impact; even the tiny damage defects in the upper right corner can be separated. This shows that the defect detection completeness of the algorithm is high. The image segmentation clearly separates the different types of defects with sharp contours, and at the same time the segmented image contains little noise information, so complex types of defects can be detected effectively. In addition, to further illustrate the superiority of the FCM segmentation algorithm combined with multi-objective optimization on infrared thermal images with complex defect types, other FCM segmentation algorithms and the individual segmentation results of each objective function in the multi-objective model are compared. The results are shown in Figs. 5.14, 5.15, 5.16, 5.17, 5.18, 5.19 and 5.20.

Fig. 5.16 FCM_S2

Fig. 5.17 f1

Fig. 5.18 f2


Fig. 5.19 f3

Fig. 5.20 Proposed algorithm

By comparing Figs. 5.14, 5.15, 5.16, 5.17, 5.18, 5.19 and 5.20, it can be seen that when common FCM algorithms such as the basic FCM [13], FCM_S1 [16], FCM_S2 [16] and FGFCM [15] are used to segment infrared thermal images with complex defects, several problems appear: there is too much noise information in the circular and rectangular red frames, the boundaries of the defect contours are not distinct, and the details are blurred. Our segmentation objective functions have their own advantages and disadvantages when applied separately. For example, when the f1 function is used alone, the noise information of the image is well controlled and the noise in the circular frame is effectively eliminated, but the defect information in the rectangular frame is also erased. When f2 is used alone, the defect details are preserved, but the boundary of the defect contour is not distinct and noise remains. When f3 is used alone, the outline of the defect details is good, but there is again too much noise information. It can be seen


that, in the segmented image obtained after the double-layer multi-objective optimization, the defect contour is clear, the details of potholes and tiny defects are kept well, and the noise situation is greatly improved. The various defect features can be clearly identified and quantified. Therefore, our algorithm has a good segmentation effect on infrared thermal images with complex types of defects and can detect such defects reliably.

5.7.2 Effective Area and PDM Based Adaptive Weight Vector Adjustment

In this section, the reconstructed thermal image G2 shown in Fig. 5.21 is segmented by the multi-objective image segmentation based on effective-region incremental learning and the PDM-based weight vector adjustment method. The IR reconstructed image G2 is produced by a carbon fiber specimen after hypervelocity impact. The damage includes the central perforation damage and the surrounding raised defects of the carbon fiber material. The reconstructed thermal image G2 is modeled with each multi-objective function and, after parameter settings similar to those in Sect. 5.7.1, a multi-objective evolutionary optimization algorithm is applied to G2 to obtain the Pareto-optimal segmentation strategy for the image. The number of multi-objective population individuals is set to N = 50 and the maximum number of objective function evaluations to maxFE = 500. The kernel of the SVM classifier is a Gaussian kernel with size 0.056.
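The effective-region incremental learning step relies on a standard SVM classifier. Below is a minimal sketch of how such a classifier could score newly generated weight vectors, using scikit-learn's SVC; interpreting the kernel size 0.056 as the RBF width σ (so that gamma = 1/(2σ²)) and the top-n retention rule are assumptions made for illustration:

```python
import numpy as np
from sklearn.svm import SVC

def filter_new_weights(train_w, labels, candidate_w, n_keep):
    """Learn the distribution of effective weight vectors and keep the
    n_keep candidates most likely to fall in a valid region.

    train_w:     (N, M) weight vectors used in earlier iterations.
    labels:      (N,) 1 if the vector produced a frontier individual, else 0.
    candidate_w: (C, M) densely generated new weight vectors.
    """
    sigma = 0.056                                   # stated kernel size
    clf = SVC(kernel="rbf", gamma=1.0 / (2 * sigma ** 2), probability=True)
    clf.fit(train_w, labels)
    scores = clf.predict_proba(candidate_w)[:, 1]   # P(valid region)
    keep = np.argsort(scores)[::-1][:n_keep]        # retain top-scoring ones
    return candidate_w[keep]
```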

Fig. 5.21 Image to be segmented G2


Fig. 5.22 Frontier individual clustering

The algorithm is initialized and the children are generated using the evolution operator. The individuals of the parent generation, together with the individuals of the children, are subjected to dominant solution selection based on cascade clustering. The cascade clustering process for the first generation is shown below. The merged population has P = 100 individuals. After the first level of frontier surface identification, the number of frontier individuals is Num_f = 16 and the number of non-frontier individuals is Num_nf = 84. Their distribution is shown in Fig. 5.22. After that, the activity vectors need to be established. Each frontier individual is assigned to the weight vector with the smallest sine of their included angle, and every weight vector with attached frontier individuals is identified as an activity vector. The variation of the sine between each frontier individual and its nearest weight vector is shown in Fig. 5.23. Among them, the individual farthest from its weight vector corresponds to a sine value sin_max = 0.0530 (frontier individual with ordinal number 66), and the individual closest to its weight vector corresponds to a sine value sin_min = 0.00089 (frontier individual with ordinal number 29). The mean value of the minimum sines over all frontier individuals is sin_mean = 0.0050. After the above analysis, the number of activity vectors of the first generation population is established as 1, with ordinal number 55. Then all frontier individuals are clustered, i.e., the 16 frontier individuals are clustered into 1 class. The PDM is calculated and ranked for all frontier individuals within the cluster, as shown in Fig. 5.22. The frontier individual with the smallest PDM value is used as the cluster center. The smallest PDM is PDM_min = 1.77 × 10⁹, and its corresponding frontier individual serial number is 1.
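For the activity-vector step, the sine of the angle between a frontier individual's objective vector and a weight vector can be obtained from the cosine of the normalized vectors; a hedged sketch, with array names assumed:

```python
import numpy as np

def nearest_weight_vectors(front, weights):
    """Assign each frontier objective vector to the weight vector with the
    smallest sine of the included angle.

    front:   (F, M) frontier objective vectors.
    weights: (W, M) weight vectors.
    Returns (index of nearest weight vector, sine value) per frontier point.
    """
    f = front / np.linalg.norm(front, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = np.clip(f @ w.T, -1.0, 1.0)     # (F, W) cosines of included angles
    sin = np.sqrt(1.0 - cos ** 2)         # corresponding sines
    idx = sin.argmin(axis=1)              # nearest weight vector per point
    return idx, sin[np.arange(len(front)), idx]
```

Every weight vector that receives at least one frontier individual under this assignment is an activity vector.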


Fig. 5.23 Frontier individual clustering

Fig. 5.24 Non-frontier individual clustering

After that, the non-frontier individuals are attached to the clusters. A total of Num_nf = 84 non-frontier individuals are attached to the single cluster and sorted based on their Euclidean distance to the cluster center. The sorting results are shown in Fig. 5.24. Among them, the nearest non-frontier individual, with ordinal number 2, has a Euclidean distance to the cluster center of D_min = 2.681 × 10⁸; the farthest non-frontier individual, with ordinal number 100, has D_max = 2.987 × 10⁹; and the average Euclidean distance is D_mean = 1.865 × 10⁹. Once both frontier and non-frontier individuals are clustered, the dominant solution selection operation is performed based on Round-Robin Selection. The selection results are shown in Fig. 5.25. The iterative information of frontier individuals and


Fig. 5.25 Round-Robin picking

non-frontier individuals in the subsequent iterative process is shown in Table 5.2, which lists the PDM distances of the frontier individuals and the Euclidean distances of the non-frontier individuals to the clustering centers during the iterations. When the total number of clusters exceeds 1, the information of the last cluster is shown. The dominant solution sequence numbers selected by Round-Robin Selection are shown in the Picking columns. In each iteration, once the state sampler reports stabilization, the SVM classifier is used to learn the distribution pattern of the valid regions and to adjust the distribution of the weight vectors. After generating denser weight vectors, we retain the potentially valid individuals with scores greater than a threshold δ, keeping n = 2N individuals. The process of weight vector generation and deletion is shown in Figs. 5.26, 5.27, 5.28 and 5.29.

After the multi-objective optimization, the final segmentation image obtained is shown in Fig. 5.30. It can be seen that the segmentation function considering multiple objectives gives better results on complex impact damage. The damage of the carbon fiber material in the center of the impact can be accurately segmented, and both the overall damage area and the heat diffusion area can be segmented and extracted. The details of the damage area and the overall edge contour of the damage are well preserved, while the noise information is fully suppressed. The results of the segmentation method considering multiple objectives are compared with the ordinary FCM-derived algorithms; the comparison results are shown in Figs. 5.31, 5.32, 5.33, 5.34, 5.35, 5.36 and 5.37. From this comparison it can be seen that the basic FCM [13], FCM_S1 [16] and FGFCM [15] are less effective in segmenting reconstructed thermal images with complex defect types: the broken details of the impact-center material and the edge contours are not clear enough, and there is defect loss. When segmented alone, the f1 function suffers severe detail loss at the center perforation because it focuses on the noise-information control of the


Table 5.2 Clustering of frontiers and non-frontiers in iterative process (for each iteration, the sampled rows give the PDM of frontiers, the D of non-frontiers and the Picking indices; ··· marks omitted individuals)

iter1: 1.77E+09, 2.07E+09, 1, 29; ···; 1.98E+09, 2.15E+09, 4, 24; 2.68E+08, 1.01E+09, 11, 66; ···; 8.35E+08, 1.08E+09, 18, 5
iter2: 1.77E+09, 2.02E+09, 1, 23; ···; 1.98E+09, 2.04E+09, 8, 25; 2.41E+08, 5.97E+08, 11, 63; ···; 2.68E+08, 8.86E+08, 14, 68
iter3: 2.69E+09, 1.94E+09, 2, 63; ···; 1.77E+09, 2.06E+09, 10, 68; 3.71E+08, 4.52E+08, 34, 74; ···; 3.95E+08, 5.31E+08, 38, 85
iter4: 2.69E+09, 1.73E+09, 6, 59; ···; 2.31E+09, 1.87E+09, 5, 68; 1.6E+08, 3.95E+08, 26, 3; ···; 3.43E+08, 4.07E+08, 53, 76
iter5: 2.69E+09, 1.77E+09, 8, 54; ···; 2.91E+09, 1.87E+09, 16, 61; 2.36E+08, 1.85E+08, 33, 69; ···; 1.74E+08, 3.04E+08, 2, 74
iter6: 2.69E+09, 1.73E+09, 1, 46; ···; 2.91E+09, 1.83E+09, 2, 47; 2.36E+08, 3.05E+08, 8, 57; ···; 1.74E+08, 3.43E+08, 38, 67
iter7: 1.57E+09, 1.79E+09, 2, 29; ···; 1.73E+09, 1.83E+09, 1, 43; 6.36E+08, 5.66E+08, 17, 55; ···; 2.39E+08, 6.77E+08, 22, 58
iter8: 1.85E+09, 1.73E+09, 5, 31; ···; 1.65E+09, 1.79E+09, 14, 35; 6.36E+08, 4.79E+08, 3, 4; ···; 2.39E+08, 5.5E+08, 28, 40


Fig. 5.26 Effective regional learning process 1

Fig. 5.27 Effective regional learning process 2

Fig. 5.28 Effective regional learning process 3

image. When segmented alone, the f2 function retains too much detail of the center perforation defect and of the carbon fiber background material, which hampers defect recognition. The f3 function yields a good contour of the defect details when segmented alone, but again with too much noise information. It can be seen that the segmented image obtained after the multi-objective optimization has a clear outline of the protruding defects, the details of the perforated and fine defects are well maintained, and the noise situation is greatly improved.

Fig. 5.29 Effective regional learning process 4

Fig. 5.30 Final result

Fig. 5.31 FCM


Fig. 5.32 FCM_S1

Fig. 5.33 FGFCM

5.8 Summary

This section discusses the segmentation technology for thermal images with complex damage, with emphasis on the strategy of adaptive weight vector adjustment. Firstly, the challenges and difficulties of image segmentation in reconstructed thermal images with complex damage are analyzed. To solve these problems, three infrared image segmentation objectives for complex objects are proposed, which consider a noise elimination target, a detail preservation target and an edge preservation target, respectively. Then, based on them, multi-objective modeling is carried out and a multi-objective segmentation framework is further proposed. On this basis, the characteristics of the irregular Pareto front and the adaptive adjustment requirements of the weight vector are analyzed, and two


Fig. 5.34 f1

Fig. 5.35 f2

kinds of weight vector adjustment strategies are put forward: a weight vector adjustment method based on congestion-degree adaptation, and a weight vector adjustment method based on effective-area incremental learning and PDM. Experimental results show that the multi-objective infrared image segmentation method combined with the weight vector adjustment strategy performs better than the traditional single-objective FCM algorithms.


Fig. 5.36 f3

Fig. 5.37 Proposed algorithm

References

1. Lei, G., Yin, C., Huang, X., Cheng, Y. H., Dadras, S., Shi, A.: Using an Optimal Multi-Target Image Segmentation Based Feature Extraction Method to Detect Hypervelocity Impact Damage for Spacecraft. IEEE Sensors Journal, 21(18), 20258-20272 (2021)
2. Yang, X., Yin, C., Dadras, S., Lei, G., Tan, X., Qiu, G.: Spacecraft damage infrared detection algorithm for hypervelocity impact based on double-layer multi-target segmentation. Frontiers of Information Technology & Electronic Engineering, 23(4), 571-586 (2022)
3. Wang, B., Chen, L. L., Cheng, J.: New result on maximum entropy threshold image segmentation based on P system. Optik, 163, 81-85 (2018)
4. Li, L., Zhang, X., Tian, B., Wang, C., Pu, L., Shi, J., Wei, S.: A Flexible Region of Interest Extraction Algorithm with Adaptive Threshold for 3-D Synthetic Aperture Radar Images. Remote Sensing, 13(21), 4308 (2021)


5. Zeng, X., Wei, S., Wei, J., Zhou, Z., Shi, J., Zhang, X., Fan, F.: CPISNet: delving into consistent proposals of instance segmentation network for high-resolution aerial images. Remote Sensing, 13(14), 2788 (2021)
6. Jin, Y., Xu, W., Zhang, C., Luo, X., Jia, H.: Boundary-aware refined network for automatic building extraction in very high-resolution urban aerial images. Remote Sensing, 13(4), 692 (2021)
7. Luo, S., Sarabandi, K., Tong, L., Guo, S.: An improved fuzzy region competition-based framework for the multiphase segmentation of SAR images. IEEE Transactions on Geoscience and Remote Sensing, 58(4), 2457-2470 (2019)
8. Zhu, X., Zhang, S., Zhang, J., Li, Y., Lu, G., Yang, Y.: Sparse graph connectivity for image segmentation. ACM Transactions on Knowledge Discovery from Data (TKDD), 14(4), 1-19 (2020)
9. Su, Y., Cheng, J., Wang, W., Bai, H., Liu, H.: Semantic segmentation for high-resolution remote-sensing images via dynamic graph context reasoning. IEEE Geoscience and Remote Sensing Letters, 19, 1-5 (2022)
10. Wang, R., Xu, F., Pei, J., et al.: Ship Target Segmentation for SAR Images Based on Clustering Center Shift. IEEE Geoscience and Remote Sensing Letters, 19, 1-5 (2022)
11. Miao, J., Zhou, X., Huang, T. Z.: Local segmentation of images using an improved fuzzy C-means clustering algorithm based on self-adaptive dictionary learning. Applied Soft Computing, 91, 106200 (2020)
12. Wang, Z., Wan, L., Xiong, N., Zhu, J., Ciampa, F.: Variational level set and fuzzy clustering for enhanced thermal image segmentation and damage assessment. Nondestructive Testing and Evaluation International, 118, 102396 (2021)
13. Graves, D., Pedrycz, W.: Fuzzy c-means, Gustafson-Kessel FCM, and kernel-based FCM: A comparative study. Analysis and Design of Intelligent Systems Using Soft Computing Techniques, 41, 140-149 (2007)
14. Krinidis, S., Chatzis, V.: A robust fuzzy local information C-means clustering algorithm. IEEE Transactions on Image Processing, 19(5), 1328-1337 (2010)
15. Cai, W., Chen, S., Zhang, D.: Fast and robust fuzzy c-means clustering algorithms incorporating local information for image segmentation. Pattern Recognition, 40(3), 825-838 (2007)
16. Chen, S., Zhang, D.: Robust image segmentation using FCM with spatial constraints based on new kernel-induced distance measure. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 34(4), 1907-1916 (2004)
17. Zhang, S. X., Zheng, L. M., Liu, L., Zheng, S. Y., Pan, Y. M.: Decomposition-based multiobjective evolutionary algorithm with mating neighborhood sizes and reproduction operators adaptation. Soft Computing, 21, 6381-6392 (2017)
18. Ge, H., Zhao, M., Sun, L., Wang, Z., Tan, G., Zhang, Q., Chen, C. P.: A many-objective evolutionary algorithm with two interacting processes: Cascade clustering and reference point incremental learning. IEEE Transactions on Evolutionary Computation, 23(4), 572-586 (2018)
19. Tian, Y., Wang, H., Zhang, X., Jin, Y.: Effectiveness and efficiency of non-dominated sorting for evolutionary multi- and many-objective optimization. Complex & Intelligent Systems, 3, 247-263 (2017)
20. Deb, K., Jain, H.: An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. IEEE Transactions on Evolutionary Computation, 18(4), 577-601 (2013)
21. Tian, Y., Cheng, R., Zhang, X., Jin, Y.: PlatEMO: A MATLAB platform for evolutionary multi-objective optimization [Educational Forum]. IEEE Computational Intelligence Magazine, 12(4), 73-87 (2017)
22. Diehl, C. P., Cauwenberghs, G.: SVM incremental learning, adaptation and optimization. In Proceedings of the International Joint Conference on Neural Networks, 4, 2685-2690 (2003)

Chapter 6

Defects Positioning Method for Large Size Specimen

After the panoramic image stitching, fusion and segmentation in Chaps. 3, 4 and 5, the information of the segmented defects in the stitched reconstructed thermal images was obtained. The stitched reconstructed thermal image, as a visual image that can reflect the overall defect, can visually demonstrate the defective regions of large-scale specimens. At this point, the morphological characteristics of the defects can be initially determined by qualitative analysis. Further, the defective-region information in the stitched reconstructed thermal image needs to be analyzed precisely. From the defect positioning information in the image, it is possible to identify the defective region and determine the distribution parameters. In this chapter, the defect positioning method based on whole-local view conversion and the infrared image defect positioning method based on inverse heterogeneous sources will be introduced, respectively. Finally, the two defect positioning methods for reconstructed thermal images are verified using various artificially and actually damaged specimens.

6.1 Introduction

After inspecting the overall stitched reconstructed thermal images of the specimen, it is necessary to further obtain the defect localization information for the entire stitched image, as shown in Fig. 6.1. On the one hand, an accurate defective region based on the overall view can guide a secondary local detection of small or complex defects, thus improving the detection efficiency for complex defects. On the other hand, accurate defect positioning results can facilitate defect repair and help find the defect form and defective region in the specimen quickly [1, 2]. Regarding these two aspects, this chapter introduces the positioning of defective regions in infrared thermal images based on both the whole and the local view, where the overall defect positioning guides the fine positioning as well as the secondary detection. In addition, the positioning of defects in infrared thermal images based on inverse heterogeneous sources is introduced, which starts from the stitching relationship of the


Fig. 6.1 Quantitative detection based on stitched reconstructed thermal images for spacecraft

image and goes back to the original video stream to perform accurate detection. In this chapter, the focus is on the positioning of defective-region information.

6.2 Defect Positioning Based on Whole and Local View Conversion for Reconstructed Thermal Images

As described in Chap. 4, the single defect feature in the stitched reconstructed thermal image is obtained by processing and stitching infrared thermal image sequences from different detection locations. Owing to the limited resolution of a single detection, even the overall stitched result image still cannot show the details of the defective regions clearly. In the overall stitched image, the defective regions are marked using a color space-based image segmentation method. These marked defective regions are then analyzed to provide guidance for the subsequent secondary photography. Next, for a particular damage defect of interest, the IR camera detection position and focal length are adjusted, and a high-resolution infrared image sequence is acquired, from which a second reconstructed thermal image of the detected damage defect is obtained. By segmenting the stitched reconstructed thermal image, the defective region and the morphology quantification data of the defect are obtained. That is, the conversion of the overall stitched image to the local view is realized. This conversion uses the overall stitched image to obtain an accurate image containing the target


defective regions. This provides a better guide for the quantification of the defective regions.

6.2.1 Global Defect Location Labeling for Stitched Reconstructed Thermal Images

Defect shape recognition in reconstructed thermal images can be achieved by a clustering process. In this subsection, the defect feature reconstructed image is converted to the L*a*b color space [3]. The 'a' and 'b' chromaticity values of each pixel point in the image are used as the constituent pixel object samples. The Euclidean distance [4] in the L*a*b color space is used as the metric criterion for a K-means based clustering segmentation algorithm, which utilizes the color feature and temperature feature information of the defect feature image itself and maps their correlation with the defective feature region, in order to achieve accurate segmentation and extraction of the defective feature region. That is, a clustering segmentation algorithm based on a color space clustering metric is implemented. The segmented images are divided into several different color domains according to the different color information in the reconstructed images, and clustering segmentation results are obtained for the background region, the heat diffusion region and the defect feature region [5]. In turn, a binarization segmentation algorithm with a double threshold is used; that is, pixel-based calculations and statistics are performed on the binarized image. Double-threshold segmentation processing is then applied to perform a secondary segmentation on the defective feature region obtained from the L*a*b color space segmentation. The binarized segmentation extraction results for the defective feature region are thereby obtained. Compared with traditional clustering and classification methods, more accurate segmentation results of the defective feature regions can be extracted. This better realizes the automatic judgment of image information and provides better-quality processing objects for the subsequent automatic labeling of defective regions and automatic quantification of defect morphology.

6.2.1.1 L*a*b Color Space Based Damage Positioning Segmentation

(1) L*a*b Color Space: The selection of an appropriate color space is the basis for effective segmentation. The device-oriented RGB color space is not suitable for direct segmentation of color images due to the high linear correlation between its components. Converting the RGB non-uniform color space of an image to the L*a*b uniform color space allows the computer to identify feature regions that highlight infrared defect information. The L*a*b color space (also known as CIE-LAB or CIE L*a*b), compared to other common color spaces (such as RGB, CMYK, etc.), has the largest number


Fig. 6.2 L*a*b color space diagram

of colors defined, a color gamut larger than that of human vision, and is the fastest 3D color space for processing. The most important feature of this color space is its uniform color perception: it contains the photometric layer L*, the chromaticity layer a* (indicating where the color falls along the red-green axis) and the chromaticity layer b* (indicating where the color falls along the blue-yellow axis), and all the color information lies in the layers a* and b*. Its spatial model is shown in Fig. 6.2.

As can be seen in Fig. 6.2, the luminance component L takes values from 0 to 100, which indicates the color brightness from black to white. The two chromaticity components a and b take values ranging from negative to positive, specifically from −a to +a and from −b to +b, respectively; their colors fade from green to red and from blue to yellow, respectively.

Since the defective feature images obtained from the infrared sequence image reconstruction experiments and the image stitching experiments described in the previous sections are RGB color space images, a chromatic space conversion is required before performing the L*a*b color defect segmentation. Since RGB images cannot be directly converted to CIE-Lab images, the CIE-XYZ chromaticity space is used as the transition color space: the RGB image is converted to an XYZ image before the L*a*b conversion. The specific conversion is given by

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4124 & 0.3575 & 0.1804 \\ 0.2126 & 0.7151 & 0.0721 \\ 0.0193 & 0.1191 & 0.9502 \end{bmatrix} \begin{bmatrix} Red \\ Green \\ Blue \end{bmatrix}, \qquad (6.1)$$

$$\begin{bmatrix} Red \\ Green \\ Blue \end{bmatrix} = \begin{bmatrix} 3.2404 & -1.5371 & -0.4985 \\ -0.9692 & 1.8759 & 0.0415 \\ 0.05564 & -0.2040 & 1.0573 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}. \qquad (6.2)$$


Then, the luminance L* can be calculated by Eq. (6.3), and the chromaticities a* and b* can be computed by Eqs. (6.4)-(6.5):

$$L^* = \begin{cases} 116\,(Y/Y_0)^{1/3} - 16, & Y/Y_0 > 0.0088, \\ 903.3\,(Y/Y_0), & Y/Y_0 \le 0.0088, \end{cases} \qquad (6.3)$$

$$a^* = 500\left[ f\!\left(\frac{X}{X_0}\right) - f\!\left(\frac{Y}{Y_0}\right) \right], \qquad (6.4)$$

$$b^* = 200\left[ f\!\left(\frac{Y}{Y_0}\right) - f\!\left(\frac{Z}{Z_0}\right) \right]. \qquad (6.5)$$

In the above equations,

$$f(t) = \begin{cases} t^{1/3}, & t > 0.0088, \\ 7.787\,t + \dfrac{16}{116}, & t \le 0.0088, \end{cases} \qquad (6.6)$$

where X0, Y0, Z0 denote the reference white point corresponding to X, Y, Z, respectively. Given the RGB values, the chromaticity values of the L*a*b color space can be obtained by the above equations.

Since the L*a*b color space separates the color attribute from the light-intensity attribute, the value of the L* axis can be ignored when measuring the color difference. This reduces the computational complexity, and after converting the input image object to the L*a*b color space, it becomes a set of color feature objects C(a, b) with 'a*' and 'b*' values. For any two pixel objects C(a_i, b_i) and C(a_j, b_j) in the set, the Euclidean distance metric can be used to measure the difference between the two colors and thus quantify these visual differences. In the L*a*b color space, the Euclidean distance metric is defined as follows:

$$a_{ij}^* = a_i - a_j, \qquad (6.7)$$

$$b_{ij}^* = b_i - b_j, \qquad (6.8)$$

$$Dis_{ij} = \sqrt{a_{ij}^{*2} + b_{ij}^{*2}}, \qquad (6.9)$$

where a*_{ij} and b*_{ij} are the chromaticity differences between any two color feature objects C(a_i, b_i) and C(a_j, b_j) along the 'a*' and 'b*' axes, and Dis_{ij} is the total chromaticity distance metric.

For the stitching result image of a large-size test specimen, based on the characteristics of the reconstructed thermal image, the defective region can be intuitively distinguished by the human eye, as shown in Fig. 6.3. The yellow-orange highlighted area is the defect feature area reflecting the main distribution of defect


Fig. 6.3 Typical reconstructed thermal color image

information (i.e., the defect feature area). Meanwhile, the dark blue area is the background area of the test specimen (i.e., the background area without defect information), and between them lies a transition area with a bright blue-green edge. The color difference distance expressed by the Euclidean metric not only matches the sensitivity of the human eye to the image, but also accurately measures small differences between color features. This metric can effectively measure the color feature difference between the defective feature region, its surrounding bright-edge transition region and the background region, and it serves as the classification discriminator for the subsequent clustering segmentation algorithm.

For defective regions with large geometric dimensions, the segmentation result can remove the interference of background noise and the edge interference caused by lateral thermal diffusion. Based on the segmentation results, the location of each defect feature area can be accurately labeled and its morphology quantified. Defective feature regions with small geometric size are difficult to detect because of their small size and weak thermal-radiation signature; they may appear either in the bright-edge transition region segmentation results or in the defective feature region segmentation results. Because of their tiny size, quantifying their morphological distribution is not particularly meaningful; instead, according to their location in the segmentation results, the location labeling calculation and the quantification of geometric size information can be performed.

(2) Color space image segmentation based on clustering algorithm

After converting the image to the L*a*b color space, the color features are measured and divided based on the color feature objects in the color space, according to the mapping between color features and temperature features and the defective regions reflected by the temperature features, so as to achieve the segmentation and extraction of the defective regions.


Fig. 6.4 L*a*b chromaticity layer clustering results

We take the color feature objects in the color space as the sample data set and divide the data into three clusters: the defective feature region, the edge transition region due to lateral thermal diffusion, and the background region. The Euclidean distance computed by the K-means clustering algorithm serves as the inter-category metric for accurate classification of the color features, in order to further achieve accurate segmentation of the defective feature region. The K-means clustering algorithm is an unsupervised clustering algorithm [6, 7]. Its essence is to find the W best clustering centers by iteratively dividing all samples into W classes, so that the sum of Euclidean distances from all samples to the centers of their classes is minimized; in other words, the sum of color difference distances is minimized. As shown in Fig. 6.4, three color classes can be obtained by clustering with a and b as the features of the pixel points of a certain reconstructed thermal image region.

Let the input defect feature image be transformed into the L*a*b color space and denote the resulting set of n color feature objects by

$$C(a_i, b_i), \quad i = 1, \ldots, n, \qquad (6.10)$$

in which a and b denote the converted values of the temperature feature information of the pixel points in the chromaticity layers a* and b*, respectively. The goal of K-means clustering is to classify all n color feature objects C(a_i, b_i), i = 1, ..., n, into W classes constituting the defective feature dataset Defect = {de_ω | ω = 1, 2, ..., W}, where the cluster center of de_ω is o_ω.


As in Eq. (6.11), the Euclidean distance metric is defined by

$$Ed\,[C(a_i, b_i), o_\omega] = \sqrt{\sum_{i=1}^{n_\omega} \left( C(a_i, b_i) - o_\omega \right)^2}, \qquad (6.11)$$

where n_ω sample points of the color feature objects C(a_i, b_i), i = 1, ..., n, are classified into the cluster denoted de_ω. Then, the sum of Euclidean distances is given by

$$Ed'(de_\omega) = \sum_{C(a_i, b_i) \in de_\omega} Ed\,(C(a_i, b_i), o_\omega). \qquad (6.12)$$

Equation (6.12) represents the sum of the Euclidean distances from the points in the cluster de_ω to its own class center. Each cluster is counted once in the clustering process, and the sum of the Euclidean distances of all color feature objects to the centers of the classes they belong to is obtained. This is defined as the mean square error function, i.e.,

$$Ed''(C(a_i, b_i), o_\omega) = \sum_{\omega=1}^{W} \; \sum_{C(a_i, b_i) \in de_\omega} Ed\,(C(a_i, b_i), o_\omega). \qquad (6.13)$$

It can be seen that, for the above equation to obtain the minimum value, it is necessary to take the average value of all sample points in the cluster de_ω as the class center o_ω. This average value is calculated by iterative update, defined by

$$o_\omega = \frac{1}{n_\omega} \sum_{C(a_i, b_i) \in de_\omega} C(a_i, b_i), \qquad (6.14)$$

where n_ω is the number of color feature objects C(a_i, b_i) ∈ de_ω. When the error sum-of-squares criterion function Ed''(C(a_i, b_i), o_ω) attains its minimum value, the sum of the distances from each sample to its cluster centroid over the sample data set of color feature objects reaches its minimum. The algorithm then terminates, and the defective feature data set Defect is taken as the final clustering result. Accordingly, we obtain the K-means clustering image segmentation algorithm for defective features used in this subsection.
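A compact sketch of the pipeline of this subsection: conversion to the L*a*b color space followed by K-means clustering of the (a, b) chromaticity features into W = 3 classes (background, heat diffusion and defect feature regions). OpenCV's built-in conversion stands in for Eqs. (6.1)-(6.6); the parameter choices are illustrative, not the exact implementation used here:

```python
import cv2
import numpy as np

def segment_defects(reconstructed_bgr, n_clusters=3, attempts=5):
    """Cluster the (a, b) chromaticity of a reconstructed thermal image."""
    lab = cv2.cvtColor(reconstructed_bgr, cv2.COLOR_BGR2LAB)
    h, w = lab.shape[:2]
    # Keep only the chromaticity layers a* and b*; L* is ignored so that the
    # color difference is measured independently of luminance.
    ab = lab[:, :, 1:3].reshape(-1, 2).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
    _, labels, centers = cv2.kmeans(ab, n_clusters, None, criteria,
                                    attempts, cv2.KMEANS_PP_CENTERS)
    return labels.reshape(h, w), centers
```

The cluster whose center lies farthest toward the yellow-orange corner of the (a, b) plane would then be the candidate defect feature region.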

6.2.1.2 Morphological Processing and Positioning of Defective Region

Based on the segmentation result image of the defective feature regions, morphological processing [8, 9] is further applied to form the connected regions of the defect features, count their number and label them with the smallest bounding boxes. The labeling is achieved by marking each defective region of interest with the smallest rectangular box that can cover the defect.


In order to obtain basic information such as the number, size, shape and location of the damage defects, the defect features extracted from the defect feature segmentation image need to be labeled first. The segmentation image including the defective regions is a binary image, and these defective regions are labeled in the reconstructed thermal image: pixels in the defective regions are represented by 1, highlighted in white, and pixels in the background region are represented by 0, in black. Each defect in the image can be regarded as a separate connected region by labeling the defective regions in the binarized image. Each individual connected region forms an identified block, and the number of resulting connected regions equals the number of defective regions.

The binarized segmentation results of the reconstructed thermal image, obtained in the previous section, are further subjected to morphological operations to join adjacent elements or separate them into independent elements. For some edge discontinuities, the number of discrete tiny regions is too high, which interferes with the defect location labeling. The purpose of the morphological processing is to merge the discontinuous regions at the edges of the defective regions, in order to facilitate the statistics and labeling of the defective regions.

The structural element is an important tool for accomplishing the morphological processing of the image; it is similar to the 'filter window' or 'convolution template' in signal processing. Morphological processing uses the structural element to traverse the entire defect feature image and determine the connections between its various parts. A structural analysis is then performed to filter out the tiny discontinuous regions, facilitating the statistics of the location regions and thus the extraction of useful defect information [10].

Suppose that the reconstructed thermal image is I and the structural element is S. The process of operating on I using S is called morphological processing, and the corrosion (erosion) and expansion (dilation) operations are its two most basic operations.

(1) Corrosion operations

The corrosion operation, also known as the shrinkage operation, is denoted by the symbol Θ. Co is defined as the set resulting from corroding the reconstructed thermal image I with the element S, i.e.,

$$Co = I \,\Theta\, S = \{\, i \mid (S)_i \subseteq I \,\}, \qquad (6.15)$$

where (S)_i denotes the structural element S translated so that its reference point lies at the coordinate i = (i_1, i_2) in the image. Checking whether the reconstructed thermal image I is corroded by S involves moving S point by point over I and observing whether the structural element S is completely contained within I after each move. If S is completely included in I, the pixel of I at the location of the reference point of S is retained and belongs to the corroded set Co; otherwise, the point does not belong to Co, i.e., it is corroded away.


(2) Expansion operations

The expansion operation (also known as the dilation operation) is denoted by the symbol ⊕. Ex is defined as the set resulting from expanding the reconstructed thermal image I with the structural element S, which can be expressed as follows:

$$Ex = I \oplus S = \{\, i \mid (S)_i \cap I \neq \emptyset \,\}, \qquad (6.16)$$

where ∅ denotes the empty set. From the set perspective, Ex is composed of all reference points i at which the structural element S intersects the reconstructed thermal image I; that is, when I is expanded by S, the reference-point positions at which S overlaps the image during the traversal are retained.

(3) Open and closed operations

The corrosion operation refines the target in the reconstructed thermal image, while the expansion operation thickens and lengthens it. Although the two operations have opposite effects, expansion and corrosion are not mutual inverses. However, they can be cascaded, and the open and closed operations are the simplest such compound morphological operations. The open operation first corrodes the reconstructed thermal image with the structural element and then expands it, and is denoted by the symbol '∘'. The closed operation, on the contrary, first expands the image with the structural element and then corrodes it, and is denoted by the symbol '•'. They are defined as follows:

$$I \circ S = (I \,\Theta\, S) \oplus S, \qquad (6.17)$$

$$I \bullet S = (I \oplus S) \,\Theta\, S. \qquad (6.18)$$

(4) Labeling of defective regions

The marking of defective regions is achieved by labeling each defective region of interest with the smallest rectangular box that can cover the defect. Furthermore, the center of mass of each defective region is calculated to determine its location distribution. Based on this process, the automatic labeling of defect locations can be realized. In order to obtain the basic information such as the number, size, morphology and location of the defects in the test specimen, the defect regions extracted from the defect feature segmentation image must first be marked. The defective-region segmentation image is a binarized image: the pixels of the defective parts are represented by 1, highlighted in white, and the pixels of the background area are represented by 0, in black. Each defect in the image can be regarded as a separate connected region. By labeling the target pixels in the binarized image, each individual connected region forms a marked block, and the resulting number of connected regions equals the number of defective regions.


A connected region in a reconstructed thermal image is a set of pixels in which any two pixels are connected through adjacency. The common adjacency relations are 4-adjacency and 8-adjacency. 4-adjacency means that if a pixel is adjacent to any one of the four pixel points above, below, left or right of a pixel point, then these two pixel points form a connected relationship. 8-adjacency adds to 4-adjacency the adjacency relation with the four pixel points on the diagonals.

Take the smallest pixel in the grayscale image H corresponding to the reconstructed thermal image as a cell, and consider a connected domain L, a pixel point p = (p_x, p_y) in the connected domain, and a pixel q = (q_x, q_y) in its neighborhood. The 8-adjacency of p is denoted N_er and can be expressed by the following equation containing the 8 neighborhoods of p:

$$N_{er}(p) = \left\{\, q \in H \;\middle|\; \max\left( |p_x - q_x|, \, |p_y - q_y| \right) \le \sqrt{2} \,\right\}. \qquad (6.19)$$

If there is a sequence of neighboring pixels in L that starts with s and ends with e, then s is connected to e. Connectivity is transitive: if s is connected to e and e is connected to A, then s is connected to A. Connectivity is also symmetric: if s is connected to e, then e is connected to s. The connected domain is marked with gray value 0 as background and other gray values as foreground, as shown in the following equations:

$$DF = \{\, s \in A \mid B(s) = 1 \,\}, \qquad BG = \{\, s \in A \mid B(s) = 0 \,\}, \qquad (6.20)$$

$$DF \cup BG = A, \qquad DF \cap BG = \emptyset, \qquad (6.21)$$

where B is the binarized reconstructed thermal segmented image, A is the set of all pixel points, DF is the defective feature region, BG is the background region, and s is any pixel point of the image. From the definition of a connected region, a connected domain is a collection of pixels consisting of neighboring pixels with the same pixel value. Therefore, these two conditions can be used to find the connected regions in an image. Each connected domain found is given a unique identifier to distinguish it from the other connected domains. A starting point is chosen, each pixel of the image is traversed under 4-adjacency or 8-adjacency, and all points within its adjacency are labeled with the same value: the background region is labeled 0, the first connected region 1, the second connected region 2, and the remaining regions are labeled similarly.
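The 8-adjacency labeling described above is available off the shelf; a sketch using scipy's label, with an all-ones 3 × 3 structure realizing 8-adjacency:

```python
import numpy as np
from scipy import ndimage

def label_defects(binary_mask):
    """Label each connected defective region: the background stays 0, the
    first region becomes 1, the second 2, and so on (8-adjacency)."""
    eight_adjacency = np.ones((3, 3), dtype=bool)
    labeled, num_regions = ndimage.label(binary_mask,
                                         structure=eight_adjacency)
    return labeled, num_regions
```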


6.2.2 Precise Positioning of Defective Regions

The connected domain of the marked defective feature region is extracted after the morphological processing to further determine the location information of the defect. The grayscale values of the pixels in the connected domain are used to calculate the location of the center of mass of the defective feature region. Assume that the detection area range is M × N and that the defective feature segmentation image obtained by the algorithm has resolution m × n. Let G(x, y) be the grayscale value of any pixel point; the center of mass G_0(x_0, y_0) is then calculated as follows:

$$x_0 = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} i\, G(x, y)}{\sum_{i=1}^{m} \sum_{j=1}^{n} G(x, y)}, \qquad y_0 = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} j\, G(x, y)}{\sum_{i=1}^{m} \sum_{j=1}^{n} G(x, y)}. \qquad (6.22)$$

For each marked defective feature region, the minimum and maximum values of the coordinates of its pixel points are calculated. The minimum value is used as the coordinate of the upper left corner of the bounding rectangle, G_1(x_1, y_1), and the difference from the maximum-value coordinate G_2(x_2, y_2) yields the width W = x_2 − x_1 and the height H = y_2 − y_1 of the rectangular box. The actual shooting range M × N of each calibration area is known, and the image resolution m × n is obtained by the reconstruction stitching algorithm. The coordinates of the upper-left spatial position of the shooting range and of the upper-left pixel point in the defect image are both set as the starting point (1, 1). Then, based on the calculated center-of-mass coordinates G_0(x_0, y_0) of the defective region, the coordinates G_1(x_1, y_1) of the upper-left corner of the bounding rectangle, and its width W and height H, the pixel resolution and the spatial position coordinates can be converted in equal proportion to locate each defective region in the detection area and grasp the distribution of each defect. For the center-of-mass coordinates G_0(x_0, y_0) of a defective feature region in the segmented marker image, the equiproportional conversion yields the center-of-mass coordinates G_0'(x_0', y_0') of the spatial location of the defect in the specimen detection region, i.e.,

$$x_0' = \frac{x_0}{m} \times M, \qquad y_0' = \frac{y_0}{n} \times N. \qquad (6.23)$$

Similarly, the coordinates G_1'(x_1', y_1') of the spatial position of the upper left corner of the bounding rectangle, together with the width W' and the height H' of the defects within the detection range, can be obtained, which allows the distribution range of the defects to be determined:

$$W' = \frac{W}{m} \times M, \qquad H' = \frac{H}{n} \times N. \qquad (6.24)$$

Based on the binarization results of the previously obtained defect feature segmentation images, the discontinuous edge contours of the defect feature regions are removed using morphological processing, and the connected domains of the defect feature regions are extracted. Further, by adding the smallest rectangular box that can cover each defect feature region, calculating the center-of-mass coordinates of each connected domain and converting them in equal proportion to the actual inspection area, the spatial location information and the distribution range of the defects can be determined; the automatic labeling of the defect locations is realized accordingly. Figure 6.5 shows the defect location diagram, which reflects the mapping of the image pixel information to the actual spatial location of the defects of the test specimen through the equiproportional conversion, after applying the automatic defect location labeling to the reconstructed (stitched) image within the detection range. The location information obtained from the segmented image annotation can thus be converted in equal proportion into the spatial location information of the specimen, which guides the positioning of each defective feature region. For the global stitching inspection results of large-size specimens, if the defect features are not obvious or do not meet the sensitivity of the quantification index, a second local detection is considered to obtain accurate quantification results for each defect feature region.
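Putting Eqs. (6.22)-(6.24) together, the sketch below extracts, for each labeled region, the center of mass and the minimal bounding box on the m × n pixel grid and converts them in equal proportion to the M × N detection area. It operates on a binary mask, for which the grayscale-weighted centroid of Eq. (6.22) reduces to the mean of the pixel coordinates; the names are illustrative:

```python
import numpy as np

def locate_defects(labeled, num_regions, M, N):
    """Per-defect centroid, bounding-box corner, width and height, converted
    from the m-by-n image grid to the M-by-N detection area."""
    m, n = labeled.shape                       # image resolution m x n
    results = []
    for k in range(1, num_regions + 1):
        ii, jj = np.nonzero(labeled == k)      # pixel coordinates of region k
        x0, y0 = ii.mean(), jj.mean()          # binary case of Eq. (6.22)
        x1, y1 = ii.min(), jj.min()            # upper-left corner G1(x1, y1)
        W, H = ii.max() - x1, jj.max() - y1    # W = x2 - x1, H = y2 - y1
        results.append({
            "centroid": (x0 / m * M, y0 / n * N),   # Eq. (6.23)
            "corner":   (x1 / m * M, y1 / n * N),
            "size":     (W / m * M, H / n * N),     # Eq. (6.24)
        })
    return results
```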

6.2.3 Re-inspection of Defective Regions After Precise Positioning

The inspection process up to this point has aimed at obtaining overall panoramic images of the defective regions, in order to complete the segmentation and detection of the damaged areas efficiently. As shown in Fig. 6.5, based on the size and location information of the smallest external rectangular box of a specific damage defect area, the spatial location of each defective region can be estimated from the global stitching detection and labeled. The positioning of each defect, based on the calculated spatial location information, then guides the secondary local inspection in determining the shooting location and scope. The position of the IR camera is adjusted so that it is vertically aligned with the specific damage defect, the relative distance between the IR camera and the defective region is reduced, and the focal length of the IR camera is adjusted so that the defect is captured completely and clearly, ensuring that the shooting frame covers the smallest external rectangular frame of the defective region. Following the steps of the local area testing, a clear high-resolution reconstructed thermal image of the damage defect is finally obtained.


Fig. 6.5 Precise positioning of local defects and secondary photography

The size of the detection region is determined by the distance factor K, so K directly affects the accuracy of the infrared detection. The distance factor K = S : D is the ratio of the distance S from the IR camera to the test specimen to the actual width D of the test specimen. The smaller the factor K, the higher the optical resolution of the test image. The distance coefficient K thus determines the width of the detection area and the detection sensitivity, so that defects in specimens of different sizes can be located and quantitatively evaluated. For large-size specimens, it is necessary to increase the distance coefficient K in order to achieve wide-range detection, but this decreases the detection sensitivity. To preserve the detection sensitivity, the shooting area can be reduced by reducing the distance coefficient K, at the cost of detection efficiency. In order to meet both the detection efficiency and accuracy requirements, the first step is to perform global detection of a large range of defective regions at a larger distance S1 and obtain the detection results of the global stitching image. The automatic defect location labeling is applied to the global detection results to obtain the spatial location information of all defects within the detection range. Then, to satisfy the demand for higher detection sensitivity, the distance from the camera to the specimen is reduced and, at the new distance S2, a second local-area detection is performed based on the defect location labeling information obtained from the global detection, and the individual defective regions are located. In this way, detection sensitivity requirements that cannot be met over the global detection range can be achieved under the secondary local-area detection, allowing accurate quantification of the defects.


For each marked defective feature region, the minimum and maximum values of the coordinates of its pixel points are counted. The minimum value is taken as the upper-left corner, denoted as the coordinate C_1(x_1, y_1) of the external rectangular frame, while the maximum value is denoted as the coordinate C_2(x_2, y_2). The difference between C_1(x_1, y_1) and C_2(x_2, y_2) gives the external rectangular frame with width W and height H. Using the known shooting range M × N of the specimen and the resolution m × n of the stitched reconstructed thermal image, the coordinates of the upper left corner of the detection range and of the upper left pixel point in the defect image are set as the starting points. The equiproportional conversion relationship between the image resolution and the position coordinates of the specimen can then be established. This relationship allows each defective region in the detection area to be located and the distribution of each defect to be grasped, based on the obtained mass-center coordinates C_0(x_0, y_0) of the defective region, C_1(x_1, y_1), W and H. Guided by these coordinates, the position of the IR camera is adjusted so that it is vertically aligned with the specific damage defect; the relative distance between the IR camera and the defective region is reduced, and the focal length of the IR camera is adjusted so that the defect can be captured completely and clearly.

6.3 Defect Positioning Based on Inverse Heterogeneous Source for Reconstructed Thermal Images

In the process of infrared detection and evaluation of spacecraft, the defective regions must be divided into different local regions for detection, in order to present a comprehensive damage image while satisfying the detection accuracy. The reconstructed thermal images, acquired separately for these regions, are then stitched together using the features of their overlapping regions. During the stitching of the reconstructed thermal images, the collocated images undergo a reversible projection matrix transformation [1], and the stitched overlapping region attributed to a certain damage can be determined using the pixel position transformation method described in the following subsection.

6.3.1 Pixel Conversion of Stitched Reconstructed Thermal Images

In Sect. 6.2, the focus was on the overall stitched image, and a second detection was performed based on the transformation of the viewpoint. Unlike the method in Sect. 6.2, the object of study in this subsection is the overall reconstructed thermal stitched image of the object under test, and the coordinates of the pixel points of the stitched image are obtained by transformation of the original reconstructed


image. From the principle of the stitching method in Chap. 4, it is known that for two 2D reconstructed thermal images (a reference image and a collocated image) obtained from the same 3D scene, a 2D affine transformation matrix M_H can be established from the matching point pairs obtained with the feature point descriptors. In this subsection, M_H is utilized to devise a method for converting pixel positions within the stitched reconstructed thermal image.

First, the transformation matrix M_H relates the reference image and the alignment image. Specifically, the reconstructed thermal image P1, that is P1(x_i^1, y_j^1), is chosen as the reference image, while the reconstructed thermal image P2, that is P2(x_i^2, y_j^2), is selected as the alignment image. For P1 and P2, their stitched image is denoted by P12, that is P12(x_hx^12, y_hy^12). The transformation relation M_H expresses the correspondence between the overlapping regions in the reference image P1 and the collocated image P2 in the reconstructed thermal images of the same position region of the actual test specimen. For a point A in the detected specimen, a1 = (x_ia^1, y_ja^1) and a2 = (x_ia^2, y_ja^2) represent the coordinates of the corresponding positions in the reconstructed images P1 and P2, respectively, which satisfy

$$a_1 = M_H \cdot a_2, \qquad (6.25)$$

where the matrix M_H is the two-dimensional projective transformation matrix of the reconstructed images P1 and P2, i.e., the image transformation model: the homography matrix. The homography matrix is an invertible perspective transformation matrix in R^{3×3}:

$$M_H = \begin{bmatrix} m_1 & m_2 & m_3 \\ m_4 & m_5 & m_6 \\ m_7 & m_8 & 1 \end{bmatrix}. \qquad (6.26)$$

For the points a1 = (x_ia^1, y_ja^1) and a2 = (x_ia^2, y_ja^2), one then has

$$\begin{bmatrix} x_{ia}^1 \\ y_{ja}^1 \\ 1 \end{bmatrix} = \begin{bmatrix} m_1 & m_2 \\ m_4 & m_5 \\ m_7 & m_8 \end{bmatrix} \begin{bmatrix} x_{ia}^2 \\ y_{ja}^2 \end{bmatrix} + \begin{bmatrix} m_3 \\ m_6 \\ 1 \end{bmatrix}. \qquad (6.27)$$

For the detection region of a large-size spacecraft, Eq. (6.27) expresses a position in the overlapping region of the baseline reconstructed thermal image $P_1$ and the aligned reconstructed thermal image $P_2$; the projection relation $M_H$ performs the corresponding transformation to obtain the coordinates of the corresponding position.

From the above, the pixel coordinates $P_2(x_i^2, y_j^2)$ in the collocated reconstructed image can be transformed using $M_H$ to obtain the pixel positions $P_{12}(x_{hx}^{12}, y_{hy}^{12})$ in the stitched image. According to the correspondence between the two sets of feature points reflected by Eq. (6.27), the reconstructed thermal images $P_1$ and $P_2$ can be transformed to the same coordinate system using the homography matrix, that is,

$$P_{12}(x_{hx}^{12}, y_{hy}^{12}) = P_1(x_i^1, y_j^1) \cup \left[ M_H \cdot P_2(x_i^2, y_j^2) \right], \qquad (6.28)$$

$$P_2(x_i^2, y_j^2) = M_H^{-1} \cdot P_{12}(x_{hx}^{12}, y_{hy}^{12}). \qquad (6.29)$$

Equation (6.28) gives the forward coordinate transformation that produces the stitched reconstructed thermal image $P_{12}$, and Eq. (6.29) is its inverse. The stitched image $P_{12}$ is obtained by combining $P_1(x_i^1, y_j^1)$ with the transformation of $P_2(x_i^2, y_j^2)$ under $M_H$. Conversely, the coordinates of the pixel points $P_{12}(x_{hx}^{12}, y_{hy}^{12})$ of the stitched image can be restored to $P_2(x_i^2, y_j^2)$ by applying the inverse transformation of $M_H$. Equations (6.28) and (6.29) thus verify that the invertibility of the homography matrix allows the stitched image to be traced back to the corresponding original coordinates. The remainder of this subsection describes, in turn, the method for finding the inverse affine transformation matrix, the method for computing precise pixel point conversion coordinates, and the construction of the pixel position conversion for the stitched reconstructed thermal image.
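The forward and inverse mappings of Eqs. (6.28)–(6.29) reduce to multiplying homogeneous pixel coordinates by $M_H$ or its inverse. A minimal numpy sketch follows (function and variable names are illustrative; the example matrix is the one later reported in Table 6.7):

```python
import numpy as np

def map_pixels(M_H, pts):
    """Apply a 3x3 homography/affine matrix to an (N, 2) array of pixel coords."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])     # to homogeneous coords
    out = pts_h @ M_H.T                                  # row-vector convention
    return out[:, :2] / out[:, 2:3]                      # back to Cartesian

# Forward: collocated-image pixels -> reference/stitched frame (Eq. 6.28);
# inverse: stitched-frame pixels -> collocated image (Eq. 6.29).
M_H = np.array([[0.9955, 0.0067, 247.2184],
                [-0.0022, 0.9963, -5.6316],
                [0.0, 0.0, 1.0]])
p2 = np.array([[366.0, 209.0]])
p12 = map_pixels(M_H, p2)
p2_back = map_pixels(np.linalg.inv(M_H), p12)
assert np.allclose(p2, p2_back)                          # invertibility check
```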

6.3.1.1 Inverse Affine Transformation Matrix Finding

For the invertible perspective transformation matrix $M_H$, the inverse matrix exists as

$$M_H^{-1} = \frac{M_H^*}{|M_H|}, \qquad (6.30)$$

where $|M_H|$ is the determinant and $M_H^*$ is the adjugate (accompanying) matrix:

$$M_H^* = \begin{bmatrix} M_{11} & M_{21} & M_{31} \\ M_{12} & M_{22} & M_{32} \\ M_{13} & M_{23} & M_{33} \end{bmatrix}, \qquad (6.31)$$

where $M_{ij}$, $i = 1, 2, 3$, $j = 1, 2, 3$, are the corresponding algebraic cofactors in the adjugate of the affine transformation matrix of the stitched reconstructed thermal image. Taking $M_{11}$ as an example, $M_{11} = (-1)^{1+1} \begin{vmatrix} m_5 & m_6 \\ 0 & 1 \end{vmatrix} = m_5$. For the determinant $|M_H|$, one has

$$\begin{aligned} |M_H| &= m_1 \times m_5 \times 1 + m_2 \times m_6 \times 0 + m_3 \times m_4 \times 0 \\ &\quad - m_2 \times m_4 \times 1 - m_1 \times m_6 \times 0 - m_3 \times m_5 \times 0 \\ &= m_1 \times m_5 - m_2 \times m_4. \end{aligned} \qquad (6.32)$$

For two reconstructed thermal images with a certain overlapping area, the images are stitched by the reconstructed thermal image stitching algorithm. In order to cope with the rotational declination that occurs during data acquisition of the reconstructed thermal images, the affine transform is used for image alignment in this subsection, i.e.,

$$M_H = \begin{bmatrix} m_1 & m_2 & m_3 \\ m_4 & m_5 & m_6 \\ 0 & 0 & 1 \end{bmatrix}. \qquad (6.33)$$

Then, the inverse of the homography transformation matrix corresponding to Eq. (6.33) is derived as

$$M_H^{-1} = \frac{M_H^*}{|M_H|} = \frac{1}{|M_H|} \begin{bmatrix} m_5 & -m_2 & m_2 \times m_6 - m_3 \times m_5 \\ -m_4 & m_1 & m_3 \times m_4 - m_1 \times m_6 \\ 0 & 0 & |M_H| \end{bmatrix}. \qquad (6.34)$$

Further, the invertibility of the affine transform is exploited while accounting for the change of coordinate system that occurs in the pixel position transformation in practical applications. The obtained inverse affine matrix is applied in the precise pixel point conversion coordinate method in order to accurately restore the original coordinates of the pixel points of the stitched reconstructed thermal image. This method is elaborated in the next subsection.

6.3.1.2 Precise Pixel Point Conversion Coordinates

After the affine transformation, the set of pixel points of an image may have negative coordinates with respect to the origin of the world coordinate system. In image processing, however, pixel coordinates cannot be negative: if the coordinate values computed for the stitched image were used directly, the correctly transformed coordinates would not be obtained. Thus, supplementary $X$-axis and $Y$-axis offsets, $X^+$ and $Y^+$, must be determined from the affine transformation matrix $M_H$ and the pixel dimensions $[m, n]$ of the collocated image, i.e.,

$$X^+ = \begin{cases} 0, & x_{\min} > 0, \\ x_{\min}, & x_{\min} \le 0, \end{cases} \qquad (6.35)$$

$$Y^+ = \begin{cases} 0, & y_{\min} > 0, \\ y_{\min}, & y_{\min} \le 0, \end{cases} \qquad (6.36)$$

where $x_{\min}$ and $y_{\min}$ in Eqs. (6.35) and (6.36) are taken from the transformed image corners:

$$x_{\min} = \min_x \left( M_H \times [1, n],\; M_H \times [1, 1] \right), \qquad y_{\min} = \min_y \left( M_H \times [1, 1],\; M_H \times [m, 1] \right), \qquad (6.37)$$

where $\min_x$ and $\min_y$ denote the minimum of the $x$- and $y$-components, respectively.

Equations (6.35) and (6.36) represent the coordinate offset values for the transformation of a pixel point $p_1 = (x_{p_1}^{12}, y_{p_1}^{12})$ located in the overlapping region of the stitched reconstructed thermal image $P_{12}$. By transforming such a pixel point $(x_p^{12}, y_p^{12})$ into the alignment image $P_2(x_i^2, y_j^2)$, using the inverse affine matrix (6.34) together with the world coordinate offsets of Eqs. (6.35) and (6.36), the pixel coordinate transformation relationship of the stitched reconstructed thermal image can be constructed as follows:

$$\forall\, (x_p^{12}, y_p^{12}) \in P_{12}(x_{hx}^{12}, y_{hy}^{12}), \qquad (6.38)$$

$$\begin{bmatrix} x_{p'}^2 \\ y_{p'}^2 \end{bmatrix} = M_H^{-1} \cdot \begin{bmatrix} x_p^{12} + X^+ \\ y_p^{12} + Y^+ \end{bmatrix}. \qquad (6.39)$$

For the defect edge data set, Eqs. (6.38) and (6.39) allow the coordinates in the stitched reconstructed thermal image to be used to position the defective region in the coordinates of the aligned image, whether it lies in the overlapping area $P_{\text{overlap}}^{12}$ or in the area of the stitched image produced by the alignment image transformation. In turn, the transient thermal response curve corresponding to the infrared image sequence of the collocated image can be retrieved and used to determine the size of the defective region. Having discussed the inversion of the projection transform, the next subsection discusses how to obtain the overlapping regions in a stitched image for determining the source location of the defective features.
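A sketch of Eqs. (6.35)–(6.39), under the assumption that $x_{\min}, y_{\min}$ are taken from the transformed image corners as in Eq. (6.37):

```python
import numpy as np

def stitched_to_aligned(M_H, pts_12, m, n):
    """Map stitched-image pixels back to the collocated image (Eqs. 6.35-6.39)."""
    corners = np.array([[1, 1, 1], [1, n, 1], [m, 1, 1]], dtype=float)
    warped = corners @ M_H.T                         # corner positions after M_H
    x_plus = min(warped[:, 0].min(), 0.0)            # X+ offset (Eq. 6.35)
    y_plus = min(warped[:, 1].min(), 0.0)            # Y+ offset (Eq. 6.36)
    shifted = pts_12 + np.array([x_plus, y_plus])    # add world-coordinate offsets
    pts_h = np.hstack([shifted, np.ones((len(shifted), 1))])
    back = pts_h @ np.linalg.inv(M_H).T              # inverse affine (Eqs. 6.34/6.39)
    return back[:, :2]
```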

6.3.2 Determining Method of Image Overlap Area

As shown in Fig. 6.6, after the stitching of the reconstructed thermal images, determining the overlap in the stitched reconstructed thermal image can be regarded as the problem of determining polygon overlap regions. Considering the rotational and translational effects of the affine transform on the image, the case of convex polygons as overlapping regions is introduced below.


Fig. 6.6 Method for determining the image overlap area

According to the principle of the affine transformation matrix acting on the alignment image, the $m \times n$ pixel grid of the alignment image is mapped to a quadrilateral with the four vertices $(x_{\max}, y_{\max})$, $(x_{\max}, y_{\min})$, $(x_{\min}, y_{\min})$, $(x_{\min}, y_{\max})$ under the projection transformation $M_H \cdot P_2(x_i^2, y_j^2)$, $i = 1, \ldots, m$, $j = 1, \ldots, n$. These four vertex coordinates satisfy, respectively,

$$\begin{aligned} x_{\min} &= 1 \mid x_{\min} > 0, & x_{\min} &= x_{\min} \mid x_{\min} \le 0, \\ x_{\max} &= m \mid x_{\max} < m, & x_{\max} &= x_{\max} \mid x_{\max} \ge m, \\ y_{\min} &= 1 \mid y_{\min} > 0, & y_{\min} &= y_{\min} \mid y_{\min} \le 0, \\ y_{\max} &= n \mid y_{\max} < n, & y_{\max} &= y_{\max} \mid y_{\max} \ge n. \end{aligned} \qquad (6.40)$$

In the world coordinate system composed of these four fixed points, the transformed coordinates can be used to obtain the final stitched reconstructed thermal image $P_{12}$, by relating the transformed coordinates to the pixel points of the reconstructed thermal reference image $P_1$. To obtain the magnitude of the world coordinate values, the extremal pixel positions of the reconstructed image under the affine transformation matrix $M_H$ are calculated as follows:

$$\begin{aligned} x_{\max} &= \max_x \left( M_H \ast [m, n],\; M_H \ast [m, 1] \right), \\ x_{\min} &= \min_x \left( M_H \ast [1, n],\; M_H \ast [1, 1] \right), \\ y_{\max} &= \max_y \left( M_H \ast [m, n],\; M_H \ast [1, n] \right), \\ y_{\min} &= \min_y \left( M_H \ast [m, 1],\; M_H \ast [1, 1] \right). \end{aligned} \qquad (6.41)$$

As shown in Fig. 6.6, $\max_x$ represents the maximum of the $x$-axis coordinates of $q_{22}$ and $q_{24}$, which are the points obtained by applying the affine transformation to $p_{22}: (m, n)$ and $p_{24}: (m, 1)$ in the aligned image $P_2'(x_i^2, y_j^2)$; $\min_x$ denotes the minimum of the $x$-axis coordinates of $q_{21}$ and $q_{23}$, corresponding to $p_{21}: (1, n)$ and $p_{23}: (1, 1)$. Likewise, $\max_y$ denotes the maximum of the $y$-axis coordinates of $q_{21}$ and $q_{22}$, corresponding to $p_{21}: (1, n)$ and $p_{22}: (m, n)$, and $\min_y$ denotes the minimum of the $y$-axis coordinates of $q_{23}$ and $q_{24}$, corresponding to $p_{23}: (1, 1)$ and $p_{24}: (m, 1)$.

After the affine transformation, the reconstructed thermal alignment image $P_2$ can be represented as $P_2'(x_i^2, y_j^2)$, $i = 1, \ldots, x_{\max} - x_{\min}$, $j = 1, \ldots, y_{\max} - y_{\min}$. Similarly, the reconstructed thermal reference image $P_1$ is placed in the same world coordinate system to obtain $P_1'(x_i^1, y_j^1)$, $i = 1, \ldots, x_{\max} - x_{\min}$, $j = 1, \ldots, y_{\max} - y_{\min}$. The positions of the vertices of the reconstructed thermal alignment image can thus be obtained, and the vertices of the alignment image that lie inside the reference image can be identified by judging their coordinate positions. The steps of the overlap region determination method for the stitched reconstructed thermal image are given in Algorithm 6.1.

Algorithm 6.1: Method for determining the overlap region of a stitched reconstructed thermal image.

Input: Reconstructed thermal reference image $P_1(x_i^1, y_j^1)$, $i = 1, \ldots, m$, $j = 1, \ldots, n$; reconstructed thermal alignment image $P_2(x_i^2, y_j^2)$, $i = 1, \ldots, m$, $j = 1, \ldots, n$; affine transformation matrix $M_H$.
Output: Overlap region of the stitched reconstructed thermal image.

Step 1: Determine the rectangular search region containing the overlapping area.
Step 1.1: Substituting the pixel dimensions $m \times n$ of the aligned image $P_2$ and the affine transformation matrix $M_H$ into Eq. (6.41), the coordinates of the four vertices in the world coordinate system are calculated as

$$\begin{aligned} x_{\max} &= \max_x \left( \begin{bmatrix} m_1 & m_2 \\ m_4 & m_5 \end{bmatrix} \begin{bmatrix} m \\ n \end{bmatrix} + \begin{bmatrix} m_3 \\ m_6 \end{bmatrix},\; \begin{bmatrix} m_1 & m_2 \\ m_4 & m_5 \end{bmatrix} \begin{bmatrix} m \\ 1 \end{bmatrix} + \begin{bmatrix} m_3 \\ m_6 \end{bmatrix} \right), \\ x_{\min} &= \min_x \left( \begin{bmatrix} m_1 & m_2 \\ m_4 & m_5 \end{bmatrix} \begin{bmatrix} 1 \\ n \end{bmatrix} + \begin{bmatrix} m_3 \\ m_6 \end{bmatrix},\; \begin{bmatrix} m_1 & m_2 \\ m_4 & m_5 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} m_3 \\ m_6 \end{bmatrix} \right), \\ y_{\max} &= \max_y \left( \begin{bmatrix} m_1 & m_2 \\ m_4 & m_5 \end{bmatrix} \begin{bmatrix} m \\ n \end{bmatrix} + \begin{bmatrix} m_3 \\ m_6 \end{bmatrix},\; \begin{bmatrix} m_1 & m_2 \\ m_4 & m_5 \end{bmatrix} \begin{bmatrix} 1 \\ n \end{bmatrix} + \begin{bmatrix} m_3 \\ m_6 \end{bmatrix} \right), \\ y_{\min} &= \min_y \left( \begin{bmatrix} m_1 & m_2 \\ m_4 & m_5 \end{bmatrix} \begin{bmatrix} m \\ 1 \end{bmatrix} + \begin{bmatrix} m_3 \\ m_6 \end{bmatrix},\; \begin{bmatrix} m_1 & m_2 \\ m_4 & m_5 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} m_3 \\ m_6 \end{bmatrix} \right). \end{aligned} \qquad (6.42)$$

Step 1.2: Determine the value of the coordinates of the vertices of the world coordinate system:

$$\begin{aligned} &\text{If } x_{\min} > 0,\; x_{\min} = 1,\; \text{else } x_{\min} = x_{\min}, \\ &\text{If } y_{\min} > 0,\; y_{\min} = 1,\; \text{else } y_{\min} = y_{\min}, \\ &\text{If } x_{\max} > m,\; x_{\max} = x_{\max},\; \text{else } x_{\max} = m, \\ &\text{If } y_{\max} > n,\; y_{\max} = y_{\max},\; \text{else } y_{\max} = n. \end{aligned} \qquad (6.43)$$

Step 1.3: Search a rectangular area of size $[x_{\max} - x_{\min},\; y_{\max} - y_{\min}]$.

Step 2: Transform the images into the rectangular search area.
Step 2.1: Transform the reconstructed thermal reference image $P_1$ into the search rectangle, using '0' to fill the positions with no corresponding values, to obtain $P_1'(x_i^1, y_j^1)$, $i = 1, \ldots, x_{\max} - x_{\min}$, $j = 1, \ldots, y_{\max} - y_{\min}$.
Step 2.2: Using the affine transformation $M_H \cdot P_2(x_i^2, y_j^2)$, transform the alignment image $P_2$ into the search rectangle, again filling empty positions with '0', to obtain $P_2'(x_i^2, y_j^2)$, $i = 1, \ldots, x_{\max} - x_{\min}$, $j = 1, \ldots, y_{\max} - y_{\min}$.

Step 3: Iterate through the search rectangle to obtain the coordinates of the overlapping area. A point of the overlapping region must belong to both $P_1'(x_i^1, y_j^1)$ and $P_2'(x_i^2, y_j^2)$.
Step 3.1: Initialize the traversal parameters $i = 1$, $j = 1$.
Step 3.2: At the current coordinate, determine whether $P_1'(x_i^1, y_j^1)$ and $P_2'(x_i^2, y_j^2)$ are both non-zero. If so, the point lies in the overlapping area of the stitching result, and its coordinate $(x_i, y_j)$ is stored in the array $P_{\text{overlap}}^{12}$.
Step 3.3: $i = i + 1$.
Step 3.4: If $i \ge x_{\max} - x_{\min}$, set $j = j + 1$; otherwise repeat Step 3.2 until the entire search rectangle is traversed.

Step 4: The search of the overlap area is complete, and the array $P_{\text{overlap}}^{12}$ consisting of the pixel coordinates of the overlap area is output.

Through the above steps, the position of the overlapping region is obtained, giving the set of pixel point coordinates $P_{\text{overlap}}^{12}(x_{p'}^{12}, y_{p'}^{12})$. It satisfies $\forall\, P_{\text{overlap}}^{12}(x_{p'}^{12}, y_{p'}^{12}) \in P_{12}(x_{hx}^{12}, y_{hy}^{12})$, i.e., every pixel point in the overlap region can be converted to the corresponding pixel point of the alignment image $P_2$. The stitched reconstructed thermal image $P_{12}$ can thus be divided into three parts: the overlap region $P_{\text{overlap}}^{12}$, the reference image part $P_{\text{remain}}^1$, and the collocated image part $P_{\text{remain}}^2$. By extracting the defects in the stitched reconstructed thermal image and identifying the region to which they belong, the corresponding source reconstructed thermal image can be determined. A compact sketch of this overlap determination is given below.
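This sketch condenses Algorithm 6.1 using OpenCV, assuming 8-bit reconstructed thermal images and an affine $M_H$ of the form (6.33); vectorized masking replaces the explicit Step 3 loop:

```python
import cv2
import numpy as np

def overlap_region(P1, P2, M_H):
    """Return the (row, col) coordinates of the stitched-image overlap (Algorithm 6.1)."""
    n, m = P1.shape                                    # rows = n (y), cols = m (x)
    corners = np.array([[1, 1, 1], [1, n, 1], [m, 1, 1], [m, n, 1]], dtype=float)
    warped = corners @ M_H.T                           # Step 1.1: transformed corners
    x_min = min(1.0, warped[:, 0].min()); x_max = max(float(m), warped[:, 0].max())
    y_min = min(1.0, warped[:, 1].min()); y_max = max(float(n), warped[:, 1].max())
    W, H = int(np.ceil(x_max - x_min)), int(np.ceil(y_max - y_min))
    # Step 2: shift so all coordinates are positive, then warp both onto the canvas;
    # cv2.warpAffine fills uncovered positions with 0 by default.
    T = np.array([[1.0, 0.0, -x_min], [0.0, 1.0, -y_min], [0.0, 0.0, 1.0]])
    P1c = cv2.warpAffine(P1, T[:2], (W, H))            # reference image, shift only
    P2c = cv2.warpAffine(P2, (T @ M_H)[:2], (W, H))    # alignment image, shift + M_H
    mask = (P1c > 0) & (P2c > 0)                       # Step 3: both non-zero
    return np.argwhere(mask)                           # P_overlap^12 coordinates
```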

Based on the above inverse transformation method for the stitched reconstructed thermal image and the overlap region determination algorithm, the image source to which the defects in the stitched results belong can be determined. Further, the corresponding transient thermal response curves can be obtained accurately, and a judgement can be made on the thermal diffusion region. How to extract the heat diffusion region in the stitched reconstructed thermal image and, consequently, obtain the transient thermal response curves corresponding to the defect features with the above algorithm is investigated in the next section.

6.3.3 Defect Contour Positioning for Regional Determination Results

The previous sections yielded the specific extent of the overlapping region of the stitched reconstructed thermal images. Based on image segmentation and edge extraction algorithms, the defect region can be extracted and its edge pixel points obtained. By assessing the overlapping area of the stitched result image, the video source of the reconstructed thermal image can be traced back to obtain the transient thermal response curve corresponding to the defect region and its edge pixel points. As shown in Fig. 6.7, the heat flow is affected by the heat conduction region inside the defect, and an edge heat diffusion effect forms on the boundary of the defect region in the segmented image. This thermal diffusion effect is reflected in the reconstructed thermal image and can potentially cause the segmented defect image to misrepresent the quantitative information of the defect.

Fig. 6.7 Schematic illustration of the thermal diffusion area of reconstructed thermal images


Fig. 6.8 Similarity measure of transient thermal response curves of the same infrared image sequence

Further, in order to obtain an accurate thermal diffusion region size, a similarity metric needs to be designed for the TTR curves of the defect edge pixels. This section discusses how to discriminate the number of pixels in the thermal diffusion region and the number of overlapping defect pixels in the overlapping region, i.e., the correlation condition of the TTR curves. Taking into account the physical properties of the TTR itself, a TTR similarity measure is designed for the actual region of the defect, and each edge TTR curve is judged as to whether it belongs to the thermal diffusion region. TTR curves that belong to the thermal diffusion region are excluded from the defect region, so that the actual number of pixel points of the defective features, i.e., the quantitative information corresponding to the defective region, can be obtained. As shown in Figs. 6.8 and 6.9, two cases are considered: (1) the similarity measure of the transient thermal responses of the defect edges within the same infrared image sequence, and (2) the similarity measure for the number of pixels in the overlapping region of the defect features across different infrared image sequences.

6.3.3.1 Algorithm for Measuring the Similarity of TTR Curves of the Same Infrared Image Sequence

Fig. 6.9 Similarity measure of transient thermal response curves of different infrared image sequences

After obtaining the TTR curves corresponding to the defect edge pixels, the similarity between the edge pixels and the actual area of the defect feature must be evaluated, i.e., a corresponding proximity measure is performed. The TTR dataset is dense and continuous, so a distance metric is chosen to measure the proximity of the corresponding thermal diffusion region. Commonly used distance metrics are the Euclidean metric, the Manhattan distance, and the Chebyshev distance. Such distance measures are based only on the degree of proximity between objects and do not consider the internal nature of the underlying variables. For TTR curves, however, each dimension has a different degree of importance, so the dimensions are assigned corresponding weights to express their respective roles and achieve a better metric. For two transient thermal response curves with $F$ frames:

$$TTR_i = t_{i,1} + t_{i,2} + \cdots + t_{i,F}, \qquad (6.44)$$

$$TTR_j = t_{j,1} + t_{j,2} + \cdots + t_{j,F}. \qquad (6.45)$$

Measuring the distance between them, one has the following equation [4]:

$$\operatorname{dis}(TTR_i, TTR_j) = \left\| TTR_i - TTR_j \right\|_2 = \left[ \sum_{f=1}^{F} \omega_f \left| t_{i,f} - t_{j,f} \right|^2 \right]^{1/2}, \qquad (6.46)$$

where $\omega_f$ is the weight of each variable. In judging the TTR curves, the variability of the TTR curve due to different physical properties is considered first; the weights are then assigned using the physical properties of the defect region. In this subsection, the following physical properties are constructed from the temperature values of each TTR curve.

(1) Overall energy of the TTR curve. The overall energy of the transient thermal response curve is the square of the 2-norm of the curve, as shown in Eq. (6.47):

$$E_i^{\text{overall}} = \left\| TTR_i \right\|_2^2 = t_{i,1}^2 + t_{i,2}^2 + \cdots + t_{i,F}^2, \qquad (6.47)$$

where $F$ is the number of frames in the infrared sequence and $TTR_i$ represents a transient thermal response curve.

(2) Temperature peak of the TTR curve. The peak temperature of the $i$-th TTR curve is

$$T_{\max-i} = \max_{f = 1, \ldots, F} t_{i,f}. \qquad (6.48)$$

At the same time, the TTR curve reflects how the temperature of different regions of the specimen changes with time. Thus, considering the temperature-time variation trend, the following physical properties are constructed from the rate of change of the TTR curve temperature.

(3) Temperature rise rate of the TTR curve. The temperature rise rate reflects the rate of change from the initial temperature to the peak temperature. It can be expressed by the slope of the TTR curve between the initial frame and the frame where the temperature peak is located:

$$\Delta R_{\text{rise}} = \frac{t_{i, f_{\max-i}} - t_{i, f_0}}{f_{\max-i} - f_0}, \qquad (6.49)$$

where $f_{\max-i}$ is the frame index corresponding to the peak $T_{\max-i}$.

(4) Temperature reduction rate of the TTR curve. The cooling process starts from frame $f_{\max-i}$ and ends at the frame where the temperature returns to its initial value. The cooling rate of the TTR curve can be expressed by the slope of the curve between the frame where the temperature peaks and the frame where the temperature drops back to its initial value:

$$\Delta R_{\text{down}} = \frac{t_{i,F} - t_{i, f_{\max-i}}}{F - f_{\max-i}}. \qquad (6.50)$$

The above physical properties can clearly separate the defective and non-defective regions, but a similarity metric is still needed to distinguish the heat diffusion region from the actual defect region when the differences are small. Since the object of the similarity measure is the TTR curve, the correlations between the physical properties of the TTR curves in different regions are combined into the convergence degree $Con_f^{Da_i}$. The meanings of the different convergence values are given in Table 6.1. This definition of convergence amplifies the morphological characteristics of the TTR curve to a certain extent: when the difference between the trends of two TTR curves becomes larger, the corresponding weighting factors also increase, which in turn increases the divergence between them.

Table 6.1 Corresponding meaning of convergence values

Numerical value | Corresponding meaning
1 | The actual defect area has the same temperature trend and equal values compared to the TTR curve of its edge pixels
2 | The actual defect area has the same trend of temperature change compared to the TTR curve of its edge pixels, but the values are different
3 | The actual defect area has an opposite trend of temperature change compared to the TTR curve of its edge pixels

Combined with the above description, Eq. (6.46) can be written as

$$\operatorname{dis}(TTR_i, TTR_j) = \left\| TTR_i - TTR_j \right\|_2 = \left[ \sum_{f=1}^{F} Con_f^{Da_i} \left| t_{i,f} - t_{j,f} \right|^2 \right]^{1/2}. \qquad (6.51)$$

In Eq. (6.51), every dimension of the TTR curve is treated equally. However, for different defective regions, the initial and peak temperatures deserve particular attention. For the dimension $f_{\max-i}$ in which the temperature peak $T_{\max-cen-i} = \max_{f=1,\ldots,F} t_{i,f}$ of the TTR curve at a point in the actual defect region is located, a weight of $\omega'_{f_{\max-i}} = 1.5$ is assigned. The weights of the remaining dimensions are kept at $\omega'_f = 1.0$, $f = 1, \ldots, f_{\max} - 1, f_{\max} + 1, \ldots, F$. Thus, Eq. (6.51) is rewritten as

$$\operatorname{dis}(TTR_i, TTR_j) = \left\| TTR_i - TTR_j \right\|_2 = \left[ \sum_{f=1}^{F} \omega'_f \times Con_f^{Da_i} \left| t_{i,f} - t_{j,f} \right|^2 \right]^{1/2}. \qquad (6.52)$$
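A sketch of the weighted distance of Eq. (6.52); the per-dimension convergence values are passed in as an array, and the names are illustrative:

```python
import numpy as np

def weighted_ttr_distance(ttr_i, ttr_j, con, peak_weight=1.5):
    """Convergence- and peak-weighted Euclidean distance between two TTR curves."""
    w = np.ones_like(ttr_i)
    w[np.argmax(ttr_i)] = peak_weight            # emphasize the peak dimension
    return float(np.sqrt(np.sum(w * con * (ttr_i - ttr_j) ** 2)))
```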

The reference point $_{\text{seg}}P_{Da_m}^{\text{cen}}(x_0^{Da_m}, y_0^{Da_m})$ of the actual defect region is chosen, and its corresponding TTR curve is denoted as $TTR_{\text{cen}}^{Da_m} = (t_{cen-1}^{Da_m}, t_{cen-2}^{Da_m}, \ldots, t_F^{Da_m})$. This curve is selected for the similarity measure with the TTR curve of the corresponding defect edge region. The reference point of the actual defect region is selected in the way described below. The connected domain where the defective feature $Da_m$ is located, corresponding to the defective feature segmentation image $P_{\text{seg}}^{Da_m}$, is denoted as $pixel^{Da_m} = [(x_1^{Da_m}, y_1^{Da_m}), \ldots, (x_{i_k}^{Da_m}, y_{i_k}^{Da_m})]$, $k = 1, 2, \ldots, Da_m$, and its actual defect region reference point is denoted as $_{\text{seg}}P_{Da_m}^{\text{cen}}(x_0^{Da_m}, y_0^{Da_m})$. As an example, for a defect that is approximately circular, the reference point is calculated as shown in Eqs. (6.53)–(6.56):

$$x_{i_{\max}} \leftarrow \max_{i_1, i_2} \left| y_{i_1}^{Da_m} - y_{i_2}^{Da_m} \right|, \quad x_{i_1} = x_{i_2}, \qquad (6.53)$$

$$x_{i_1}, x_{i_2} \in pixel^{Da_m} = \left[ (x_1^{Da_m}, y_1^{Da_m}), \ldots, (x_{i_k}^{Da_m}, y_{i_k}^{Da_m}) \right], \quad k = 1, 2, \ldots, Da_m, \qquad (6.54)$$

$$x_0^{Da_m} \leftarrow x_{i_{\max}}^{Da_m}, \qquad (6.55)$$

$$y_0^{Da_m} = \min\left( y_{i_1}^{Da_m}, y_{i_2}^{Da_m} \right) + \left| \frac{y_{i_1}^{Da_m} - y_{i_2}^{Da_m}}{2} \right|. \qquad (6.56)$$

In this subsection, the distance between the TTR curve of each edge point of the defective region and the TTR curve of the reference point of the actual defect area is calculated to determine whether the edge point belongs to the thermal diffusion area. Since the temperature rise and fall of different detection objects cannot be expected to be consistent, and the difference between each defective actual region and its thermal diffusion region may vary, a uniform threshold cannot be fixed. The judgment rule for the thermal diffusion region is therefore as follows.

(1) Calculate the distances $dis_i$, $i = 1, \ldots, i_{Da_m}$, between the edge points and the reference point of the actual area of the feature.
(2) Calculate $dis_{\max}$ and $dis_{\min}$.
(3) If an edge point satisfies $dis_i \ge \varepsilon_{ttr}(dis_{\max}, dis_{\min})$, it is judged to belong to the thermal diffusion region and is removed.
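A sketch of this edge-point screening rule, assuming the threshold $\varepsilon_{ttr}$ is supplied as a function of the observed distance range (the book later reports a threshold derived from $dis_{\max}$ for specimen #1):

```python
import numpy as np

def screen_edge_points(edge_ttrs, ref_ttr, con, eps_fn, peak_weight=1.5):
    """Drop edge points whose weighted distance to the reference TTR exceeds eps.

    edge_ttrs: (K, F) edge-pixel TTR curves; ref_ttr: (F,) reference curve;
    con: (F,) convergence values; eps_fn maps (dis_max, dis_min) -> epsilon_ttr.
    """
    w = np.ones_like(ref_ttr)
    w[np.argmax(ref_ttr)] = peak_weight          # peak-dimension weight (Eq. 6.52)
    d = np.sqrt(np.sum(w * con * (edge_ttrs - ref_ttr) ** 2, axis=1))
    eps = eps_fn(d.max(), d.min())
    return d < eps, d                            # True -> keep as real defect edge
```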

6.3.3.2 Algorithm for Measuring the Similarity of TTR Curves of Different Infrared Image Sequences

If part of the region of a defect in the stitched reconstructed thermal image is distributed across two separate reconstructed thermal images, the thermal diffusion region determination method of the previous subsection cannot be used. To address this situation, the thermal diffusion region is judged by simultaneously considering the thermal diffusion in the two different infrared image sequences corresponding to the defect. The size of the thermal diffusion region in each part is judged separately by analyzing the defect in both parts. Then, the number of damaged pixels in the overlapping region is calculated to determine the number of TTR curves in the thermal diffusion region, so as to obtain the size of the actual region corresponding to the defect.

For multi-view IR sequence acquisition of large-size spacecraft materials, it cannot be guaranteed that the TTR curves obtained from multiple inspections of the same region are identical. However, physical parameters such as the thermal conductivity and thermal resistance of the same defect region remain consistent, as shown in Fig. 6.9. In other words, for the same feature region in two reconstructed thermal images, the two corresponding TTR curves do not align perfectly on the same time axis, but they are similar in their overall variation. In such a situation, the distance (or similarity) between the two transient thermal responses cannot be computed efficiently with the conventional Euclidean metric: one of the TTR curves has to be warped on the time axis to obtain the correspondence between the two curves. As shown in Fig. 6.9, the dashed lines between the TTR curves connect the points that are similar between the two. The dynamic time warping (DTW) algorithm uses the sum of the Euclidean distances between these points to compare the similarity of the two TTR curves; this distance sum is called the warp path distance [11, 12] and is constructed as follows.

First, for the same pixel position of the same defect feature region, the corresponding TTR curves are divided into the curve ${}_{P_1}TTR_i^{Da_m} = ({}_{P_1}t_1^{Da_m}, {}_{P_1}t_2^{Da_m}, \ldots, {}_{P_1}t_p^{Da_m})$ from the reference image and the curve ${}_{P_2}TTR_i^{Da_m} = ({}_{P_2}t_1^{Da_m}, {}_{P_2}t_2^{Da_m}, \ldots, {}_{P_2}t_p^{Da_m})$ from the aligned image, and the corresponding Euclidean distance matrix is constructed for them, that is,

$$DE_{i,j}^{Da_m} = \begin{bmatrix} d\!\left({}_{P_1}t_1^{Da_m}, {}_{P_2}t_p^{Da_m}\right) & d\!\left({}_{P_1}t_2^{Da_m}, {}_{P_2}t_p^{Da_m}\right) & \cdots & d\!\left({}_{P_1}t_p^{Da_m}, {}_{P_2}t_p^{Da_m}\right) \\ d\!\left({}_{P_1}t_1^{Da_m}, {}_{P_2}t_{p-1}^{Da_m}\right) & d\!\left({}_{P_1}t_2^{Da_m}, {}_{P_2}t_{p-1}^{Da_m}\right) & \cdots & d\!\left({}_{P_1}t_p^{Da_m}, {}_{P_2}t_{p-1}^{Da_m}\right) \\ \vdots & \vdots & \ddots & \vdots \\ d\!\left({}_{P_1}t_1^{Da_m}, {}_{P_2}t_1^{Da_m}\right) & d\!\left({}_{P_1}t_2^{Da_m}, {}_{P_2}t_1^{Da_m}\right) & \cdots & d\!\left({}_{P_1}t_p^{Da_m}, {}_{P_2}t_1^{Da_m}\right) \end{bmatrix}_{p \times p}, \qquad (6.57)$$

where the element $d({}_{P_1}t_i^{Da_m}, {}_{P_2}t_j^{Da_m})$ is the distance between the two transient thermal response points ${}_{P_1}t_i^{Da_m}$ and ${}_{P_2}t_j^{Da_m}$, which characterizes the matching relationship between the two TTR curves. In order to obtain the corresponding warping path, the matching relationship in the matrix $DE_{i,j}^{Da_m}$ must satisfy the following constraints:

(1) The warping path starts at $d({}_{P_1}t_1^{Da_m}, {}_{P_2}t_1^{Da_m})$ and ends at $d({}_{P_1}t_p^{Da_m}, {}_{P_2}t_p^{Da_m})$.
(2) The elements of the warping path advance only along neighboring points, so that the path is continuous and the corresponding distance lengths are incremental.

The DTW algorithm uses the principle of dynamic programming to adjust the correspondence between the two TTR curves and obtain the optimal warping path, i.e., the minimum cumulative distance matrix:

$$T\!\left({}_{P_1}t_i^{Da_m}, {}_{P_2}t_j^{Da_m}\right) = d\!\left({}_{P_1}t_i^{Da_m}, {}_{P_2}t_j^{Da_m}\right) + \min \left\{ \begin{array}{l} T\!\left({}_{P_1}t_i^{Da_m}, {}_{P_2}t_{j-1}^{Da_m}\right) \\ T\!\left({}_{P_1}t_{i-1}^{Da_m}, {}_{P_2}t_j^{Da_m}\right) \\ T\!\left({}_{P_1}t_{i-1}^{Da_m}, {}_{P_2}t_{j-1}^{Da_m}\right) \end{array} \right\}. \qquad (6.58)$$

When $T({}_{P_1}t_i^{Da_m}, {}_{P_2}t_j^{Da_m}) \ge \varepsilon_{DTW} \min(T)$ is satisfied, the point is considered a correct overlap-region pixel point. The final number of pixels in the overlap region is obtained by pairwise comparison of the heterogeneous TTR curves of all the defective pixel points located in the overlap region.
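A self-contained sketch of the DTW recursion of Eq. (6.58) for two TTR curves; the function name and the simple absolute-difference pointwise cost are assumptions:

```python
import numpy as np

def dtw_distance(ttr_a, ttr_b):
    """Cumulative warp-path distance between two TTR curves (Eq. 6.58)."""
    p, q = len(ttr_a), len(ttr_b)
    T = np.full((p + 1, q + 1), np.inf)
    T[0, 0] = 0.0
    for i in range(1, p + 1):
        for j in range(1, q + 1):
            cost = abs(ttr_a[i - 1] - ttr_b[j - 1])          # pointwise distance d
            T[i, j] = cost + min(T[i, j - 1],                # expand left
                                 T[i - 1, j],                # expand down
                                 T[i - 1, j - 1])            # expand diagonally
    return T[p, q]
```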

6.4 Experiment and Analysis

In this section, the defect positioning analysis of the stitched reconstructed thermal images of carbon fiber composite specimens #1 and #2 with artificial damage defects, and of the HVI-damaged specimen Hyper-1, is performed using the above defect positioning algorithms. In Sect. 6.4.1, defect positioning experiments are performed using the defect region positioning method based on the whole-local view of the stitched reconstructed thermal image. In Sect. 6.4.2, defect positioning experiments are performed using the inverse heterogeneous source defect region positioning method for stitched reconstructed thermal images.

6.4.1 Experiment and Analysis of Defect Positioning Based on Whole-Local Perspective

6.4.1.1 Defect Positioning Experiment for Specimen #1

The stitched reconstructed thermal image corresponding to specimen #1 was used for the experiments on defect positioning and quantitative analysis based on the whole-local view of the reconstructed thermal image. Firstly, the stitched reconstructed thermal image of specimen #1, shown in Fig. 6.10, was converted to the $L^*a^*b^*$ color space, and on this basis the image regions were segmented using the clustering method. Three images containing different image feature information were obtained: Fig. 6.11a shows the defect feature damage region, Fig. 6.11b shows background region 1, and Fig. 6.11c shows background region 2. Among them, Fig. 6.11a reflects the damage morphology and distribution information of specimen #1, while Fig. 6.11b, c reflect the background information of this specimen.

Fig. 6.10 Stitched reconstructed thermal image corresponding to specimen #1

Fig. 6.11 Results of color segmentation of specimen #1

The segmented image of the defective feature region obtained in the above step is used as a new segmentation processing object. First, the segmented image of the defective feature region is grayed to obtain the graying result shown in Fig. 6.12a. Further, the noise in the image is removed using Gaussian blur, as in Fig. 6.12b, and the result is then binarized using a threshold to obtain the binarized segmentation result of the defective feature region, as shown in Fig. 6.12c.

Then, for the defective region of specimen #1, a total of eight marked connected areas were obtained by the above experimental steps. Each defective region was enclosed by a rectangular box, the smallest rectangle containing that region, and a number was used as a marker for the statistical identification of each defective region, as shown in Fig. 6.13. To guide the accurate secondary photography of the defective regions, the coordinates of the center of mass of each defective region and the coordinates of the upper left corner of the external rectangular box are obtained, as shown in Table 6.2. Based on the information about the distribution of the defective regions of specimen #1, high-resolution inspection is performed on the specific defective region of interest, as shown in Fig. 6.13. A sketch of this pipeline, from color segmentation through connected-component labeling, is given below.

Fig. 6.12 Grayscale and binarization of segmented images of defective regions of specimen #1
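A minimal OpenCV sketch of the described pipeline (Lab conversion, k-means clustering, graying, Gaussian blur, thresholding, connected-component labeling); the cluster count, kernel size, and choice of defect cluster index are illustrative assumptions:

```python
import cv2
import numpy as np

def segment_defects(bgr, k=3):
    """Cluster in L*a*b*, then gray/blur/binarize and label connected regions."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(lab, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(bgr.shape[:2])
    # Keep one cluster as the candidate defect layer (index chosen by inspection).
    defect_layer = np.where(labels[..., None] == 0, bgr, 0).astype(np.uint8)
    gray = cv2.cvtColor(defect_layer, cv2.COLOR_BGR2GRAY)      # graying
    blur = cv2.GaussianBlur(gray, (5, 5), 0)                   # denoising
    _, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, lbl, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # stats[i] = (x, y, w, h, area): bounding boxes as in Table 6.2 (skip background).
    return binary, stats[1:], centroids[1:]
```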

Table 6.2 The coordinates of the center of mass of each defective region of specimen #1 and the coordinates of the upper left corner of the outer rectangular box

Defect serial number | Center of mass $x$ | Center of mass $y$ | Upper-left corner $x$ | Upper-left corner $y$
1 | 967.5 | 670.5 | 964.5 | 667.5
2 | 965.2 | 547.6 | 958.5 | 540.5
3 | 963.6 | 425 | 953.5 | 415.5
4 | 961.4 | 302.4 | 949.5 | 290.5
5 | 597.6 | 490.8 | 547.5 | 440.5
6 | 595 | 313.3 | 511.5 | 248.5
7 | 599.6 | 686.1 | 486.5 | 610.5
8 | 107.5 | 496 | 104.5 | 492.5

Fig. 6.13 Linkage domain annotation of defective regions for specimen #1

6.4.1.2 Defect Positioning Experiment for Specimen #2

In this section, the previously described defect positioning algorithm based on whole-local view transformation is verified using the stitched result image of the infrared NDT of specimen #2, which is shown in Fig. 6.14. Firstly, the stitched reconstructed thermal image corresponding to specimen #2 is converted to the $L^*a^*b^*$ color space, and the image regions are segmented using the clustering method. As shown in Fig. 6.15, three images containing different image feature information regions are obtained, which characterize different feature regions of specimen #2. Among them, Fig. 6.15a reflects the damage morphology and distribution information of specimen #2, while Fig. 6.15b and c reflect the background information and thermal diffusion area information of the specimen.

Fig. 6.14 Stitched reconstructed thermal image corresponding to specimen #2

Fig. 6.15 Results of color segmentation of specimen #2

The segmented image of the defective feature region obtained in the above step is used as a new segmentation object. It is first grayed to obtain the graying result in Fig. 6.16a. Further, the noise in the image is removed using Gaussian blur, as displayed in Fig. 6.16b, and the result is binarized using a threshold to obtain the binarized segmentation result of the defective feature region, as shown in Fig. 6.16c.

Fig. 6.16 Grayscale and binarization of segmented images of defective regions of specimen #2

Then, for the defective regions of specimen #2, a total of 15 marked connected regions were obtained by the above experimental steps. Each defective region was enclosed by a rectangular box, the smallest rectangle containing that region, and a number was used as a marker for the statistical identification of each defective feature region, as shown in Fig. 6.17. To guide the accurate secondary photography of the defective regions, the coordinates of the mass center of each defective region and the coordinates of the upper left corner of the external rectangular box are obtained, as shown in Table 6.3. Based on the information about the distribution of the defective regions of specimen #2, a secondary high-resolution infrared photographic inspection of the specific defective regions of interest can be guided.

Fig. 6.17 Linkage domain annotation of defective regions for specimen #2

Table 6.3 The coordinates of the center of mass of each defective region of specimen #2 and the coordinates of the upper left corner of the outer rectangular box

6.4.1.3 Defect Positioning Experiments for Specimen Hyper-1

The experiments in the previous two subsections presented the defect positioning analysis of specimens #1 and #2, respectively. For a more practical scenario, in this section the defect positioning algorithm is illustrated and verified using the stitched images of the infrared NDT results of the HVI-damaged specimen Hyper-1.

According to the aforementioned section, split infrared detection was performed on specimen Hyper-1, and the original infrared image sequences corresponding to the different regions were acquired. After infrared image reconstruction and reconstructed image stitching, the corresponding stitched reconstructed thermal image of specimen Hyper-1 is obtained, as shown in Fig. 6.18. Firstly, the stitched reconstructed thermal image corresponding to specimen Hyper-1 was converted to the $L^*a^*b^*$ color space. Subsequently, the image regions were segmented using the clustering method based on this conversion. As a result, three images containing different image feature information regions are obtained, as displayed in Fig. 6.19b, c and d, which characterize different feature regions of specimen Hyper-1. Among them, Fig. 6.19b, c reflect the background information and thermal diffusion sputtering area information of the specimen, and Fig. 6.19d reflects the core damage morphology and distribution information of specimen Hyper-1. For actual spacecraft flat-plate specimens, the specific damage region and the thermal diffusion sputtering region are the parts that receive focused attention after a hypervelocity impact.

The segmented images of the defective feature region and the heat diffusion feature region obtained in the above step are used as the new processing objects, respectively. First, the segmented images of the two feature regions are grayed and the noise is removed using Gaussian blur, as shown in Figs. 6.20a and 6.21a. They are then binarized using thresholding to obtain the binarized segmentation results of the two feature regions, as shown in Figs. 6.20b and 6.21b.



Fig. 6.18 Stitched reconstructed thermal image corresponding to specimen Hyper-1

Then, morphological processing and connected-domain labeling were performed for the defect feature region and the thermal diffusion sputtering region of specimen Hyper-1, respectively: 6 marked connected regions were obtained in the defect feature region and 32 marked connected regions in the thermal diffusion feature region. Each feature region was enclosed by a rectangular box, the smallest rectangle containing that region, and numbers were used as markers for the statistical identification of each feature region, as shown in Figs. 6.22 and 6.23. To guide the accurate secondary photography of the defective regions, the coordinates of the mass center of each feature region and the coordinates of the upper left corner of the external rectangular box are obtained, as shown in Tables 6.4, 6.5 and 6.6. Further, based on the distribution of damage defects and thermal diffusion sputtering locations in the defective regions of the hypervelocity-impacted specimen Hyper-1, a secondary high-resolution infrared detection can be conducted on the specific defective regions of interest.

Fig. 6.19 Results of color segmentation for specimen Hyper-1

Fig. 6.20 Grayscale and binarization of segmented images of specimen Hyper-1, when highlighting defective regions of specimen Hyper-1

Fig. 6.21 Grayscale and binarization of segmented images of specimen Hyper-1, when highlighting the thermal diffusion feature region of specimen Hyper-1

Fig. 6.22 Defective regions of the connected component labeling

Fig. 6.23 Marking of the connectivity domain between heat diffusion areas

Table 6.4 The coordinates of the center of mass in each defective region of specimen Hyper-1 and the coordinates of the upper left corner of the outer rectangular box

Defect serial number | Center of mass $x$ | Center of mass $y$ | Upper-left corner $x$ | Upper-left corner $y$
1 | 726.92 | 403.84 | 722.5 | 399.5
2 | 701.38 | 400.76 | 697.5 | 398.5
3 | 627.64 | 552.41 | 615.5 | 536.5
4 | 608.17 | 540.84 | 600.5 | 531.5
5 | 650.77 | 467.54 | 592.5 | 405.5
6 | 587.71 | 543.5 | 585.5 | 541.5

Table 6.5 The coordinates of the center of mass in each region of the thermal diffusion sputtering region of specimen Hyper-1

Defect serial number | $x$ | $y$ | Defect serial number | $x$ | $y$
1 | 852.70 | 570.00 | 17 | 626.00 | 496.00
2 | 837.39 | 605.06 | 18 | 598.71 | 394.68
3 | 826.21 | 469.37 | 19 | 570.00 | 418.00
4 | 822.31 | 592.58 | 20 | 571.80 | 463.85
5 | 778.39 | 660.90 | 21 | 564.00 | 428.10
6 | 771.64 | 418.09 | 22 | 561.00 | 382.50
7 | 771.00 | 318.00 | 23 | 553.85 | 129.25
8 | 761.14 | 437.19 | 24 | 551.00 | 439.50
9 | 758.62 | 330.62 | 25 | 552.07 | 417.68
10 | 742.93 | 515.47 | 26 | 538.72 | 390.48
11 | 739.98 | 534.07 | 27 | 651.82 | 488.32
12 | 735.25 | 449.25 | 28 | 504.95 | 639.26
13 | 735.86 | 320.17 | 29 | 504.22 | 668.19
14 | 734.61 | 498.02 | 30 | 495.32 | 341.94
15 | 710.32 | 526.91 | 31 | 421.07 | 283.30
16 | 628.53 | 608.13 | 32 | 402.00 | 497.50

Table 6.6 Statistics of the coordinates of the upper left corner of the external rectangular box of each area of the thermal diffusion sputtering region of specimen Hyper-1

Defect serial number | $x$ | $y$ | Defect serial number | $x$ | $y$
1 | 846.5 | 564.5 | 17 | 624.5 | 494.5
2 | 833.5 | 602.5 | 18 | 591.5 | 388.5
3 | 823.5 | 466.5 | 19 | 568.5 | 416.5
4 | 818.5 | 587.5 | 20 | 563.5 | 457.5
5 | 774.5 | 656.5 | 21 | 559.5 | 425.5
6 | 769.5 | 416.5 | 22 | 559.5 | 381.5
7 | 768.5 | 315.5 | 23 | 550.5 | 126.5
8 | 757.5 | 433.5 | 24 | 548.5 | 437.5
9 | 756.5 | 328.5 | 25 | 547.5 | 413.5
10 | 739.5 | 511.5 | 26 | 529.5 | 380.5
11 | 735.5 | 529.5 | 27 | 513.5 | 322.5
12 | 733.5 | 448.5 | 28 | 502.5 | 636.5
13 | 733.5 | 315.5 | 29 | 496.5 | 661.5
14 | 730.5 | 492.5 | 30 | 491.5 | 337.5
15 | 707.5 | 524.5 | 31 | 418.5 | 280.5
16 | 626.5 | 605.5 | 32 | 400.5 | 496.5

6.4.2 Experiment and Analysis of Defect Positioning Based on Inverse Heterogeneous Sources

In this subsection, the experiment on inverse heterogeneous source-based reconstructed thermal image defect positioning is conducted for the stitched reconstructed thermal image corresponding to specimen #1, which is shown in Fig. 6.10.


According to the image stitching algorithm in Chap. 4, the homography transformation matrix $M_H^{A1}$ corresponding to specimen #1 has been obtained. The homography matrix is used to perform the affine transformation on the alignment image to obtain the transformation results in world coordinates. Combining the pixel overlap relationship of the reference image in the current world coordinates, the coordinates of the corresponding pixel points in the overlap region $\left[ P_{\text{overlap}}^{A1} \left( x_{p'}^{12}, y_{p'}^{12} \right) \right]_{167360 \times 2}$ are obtained. These coordinates are used to judge the defect positions in the stitched reconstructed thermal image and to assign them to the appropriate infrared image sequence, as listed in Tables 6.7 and 6.8.


The defect features were first extracted for Fig. 6.24d. The defect features of the stitched reconstructed thermal image of specimen #1 were extracted using the clustering algorithm based on the $L^*a^*b^*$ color space, as shown in Fig. 6.25. The experiments show good defect feature extraction results: the defect region retains the defect features intact and is separated from the detection background region, and the connected domain corresponding to the defect feature region in the detected specimen is accurately extracted. The segmentation feature extraction of the defect feature image achieves the judgment of the defect reconstruction image information and removes the interference brought by the background noise, and the segmented defect feature region and edge heat diffusion region reflect the contour and morphological distribution of the defect region. This paves the way for the subsequent location labeling of the defect feature regions.

The segmented extracted image of the defect region shown in Fig. 6.25 is used as the new processing object. First, the defective feature region image is binarized, with the defective feature region marked as highlighted white, and morphologically processed to form connected domains. Further, by applying the statistical and labeling algorithm to the connected domains, the labeled image of the connected domains reflecting the morphological information of the defects and the labeled image of the overall pixel points corresponding to the defective features are obtained. Subsequently, the set of defective feature pixel points $P_{\text{seg}}^{A1}: \{Da_1, Da_2, \ldots, Da_8\}$ corresponding to specimen #1 is obtained, along with the count of overall pixel points corresponding to each defect, as shown in Fig. 6.26. As shown in Fig. 6.26b, the pixel point marker map corresponding to the edge extraction of the defect features and the segmented edge pixel point coordinates are obtained.


Table 6.7 The affine transformation corresponding to specimen #1

Homography transformation matrix:

$$M_H^{A1} = \begin{bmatrix} 0.9955 & 0.0067 & 247.2184 \\ -0.0022 & 0.9963 & -5.6316 \\ 0 & 0 & 1 \end{bmatrix}$$

World coordinate system:

$$x_{\max} = \max_x(M_H \ast [m, n],\; M_H \ast [m, 1]) = 758.5288, \qquad x_{\min} = \min_x(M_H \ast [1, n],\; M_H \ast [1, 1]) = 1.00,$$
$$y_{\max} = \max_y(M_H \ast [m, n],\; M_H \ast [1, n]) = 640.00, \qquad y_{\min} = \min_y(M_H \ast [m, 1],\; M_H \ast [1, 1]) = -5.3675.$$

Table 6.8 The inverse affine transformation corresponding to specimen #1

Inverse homography transformation matrix:

$$\left[ M_H^{A1} \right]^{-1} = \begin{bmatrix} 1.0045 & -0.0068 & -248.3703 \\ 0.0022 & 1.0037 & 5.1041 \\ 0 & 0 & 1 \end{bmatrix}$$

Overlapping areas: $\left[ P_{\text{overlap}}^{A1} \left( x_{p'}^{12}, y_{p'}^{12} \right) \right]_{167360 \times 2}$.

Fig. 6.24 Infrared splicing process and overlap area judgment for specimen .#1

After obtaining the overlapping area, the image area to which the defective edge coordinates belong needs to be judged accordingly. The results corresponding to the defective features are shown in Fig. 6.26.

Fig. 6.25 Segmentation feature extraction results of the stitched reconstructed thermal image for specimen #1

Fig. 6.26 Defect feature overall and edge extraction pixel point marker map

As shown in Table 6.9, all the defects in specimen #1 correspond to the connected domain markers and edge markers depicted in Fig. 6.26. Each defective region is affiliated with the view 1 image $P_{A1}^1$ of specimen #1, the view 2 image $P_{A1}^2$, or the overlapping area $P_{A1}^{\text{overlap}}$ of the two views, as shown in Fig. 6.27. The TTR curves affiliated with the defective edge pixel points $PI_{A1}^2 = \{pi_{Da_1}, pi_{Da_2}, pi_{Da_3}, pi_{Da_4}\}$ are searched in the reconstructed thermal image $P_{A1}^2$, and the TTR curves to which the defect edge pixel points $PI_{A1}^1 = \{pi_{Da_5}, pi_{Da_6}, pi_{Da_7}, pi_{Da_8}, pi_{Da_9}\}$ belong are searched in the reconstructed thermal image $P_{A1}^1$. The same-sequence similarity measure is applied to them to determine the size of the thermal diffusion region, as displayed in Fig. 6.28.


Table 6.9 Defect regions judgment of specimen #1

Defect No. | Defect characteristics in the diagram | Defective edge feature in the diagram | Affiliation region
#a1 | Fig. 6.26a-1 | Fig. 6.26b-1 | $P_{A1}^2$
#a2 | Fig. 6.26a-2 | Fig. 6.26b-2 | $P_{A1}^2$
#a3 | Fig. 6.26a-3 | Fig. 6.26b-3 | $P_{A1}^2$
#a4 | Fig. 6.26a-4 | Fig. 6.26b-4 | $P_{A1}^2$
#a5 | Fig. 6.26a-5 | Fig. 6.26b-5 | $P_{A1}^{\text{overlap}}$
#a6 | Fig. 6.26a-6 | Fig. 6.26b-7 | $P_{A1}^{\text{overlap}}$
#a7 | Fig. 6.26a-7 | Fig. 6.26b-6 and 8 | $P_{A1}^{\text{overlap}}$
#a8 | Fig. 6.26a-8 | Fig. 6.26b-9 | $P_{A1}^1$

Fig. 6.27 Results of defective region classification connection map


Fig. 6.28 Defective region 4

Taking defect #a4, located in the reconstructed thermal image $P_{A1}^2$, as an example, the process of determining the thermal diffusion region of the defect and the validity of the algorithm are illustrated for verification. Applying the inverse of the homography transformation to the coordinates $pi_{Da_4}$ of the edge points of defect #a4, one can derive

$$\left[ M_H^{A1} \right]^{-1} = \begin{bmatrix} 1.0045 & -0.0068 & -248.3703 \\ 0.0022 & 1.0037 & 5.1041 \\ 0 & 0 & 1 \end{bmatrix}, \qquad (6.59)$$

and use the world coordinate offsets $X^+ = 0$ and $Y^+ = -5.3675$ to obtain the calculated results of the coordinate transformation. As shown in Fig. 6.29, the two sets of coordinates are marked in the stitched reconstructed thermal image and the aligned image to represent the correspondence between the obtained conversion coordinates. This demonstrates the effectiveness of the image pixel position conversion method designed in Sect. 6.3.1 for restoring the pixel point positions of the stitched reconstructed thermal image, so that the corresponding transient thermal response curves can be obtained. For defect #a4, the corresponding information before and after conversion is provided in Table 6.10.

Using the convergence $Con_f^{Da_4}$, in combination with the dimension where the highest temperature point of the TTR curve is located, a weight of $\omega'_{f_{\max}} = 1.5$ is added to the 95th dimension. The remaining weights are $\omega'_f = 1$, $f = 1, 2, \ldots, 94, 96, \ldots, 100$. The similarity measure for defect #a4 under the same sequence is then obtained, as shown in Fig. 6.30, with the following equation.

Fig. 6.29 Pixel reduction of the stitched reconstructed thermal images to matching code images

$$\operatorname{dis}(TTR_i, TTR_j) = \left\| TTR_i - TTR_j \right\|_2 = \left[ \sum_{f=1}^{F} \omega'_f \times Con_f^{Da_4} \times \left| t_{i,f} - t_{j,f} \right|^2 \right]^{1/2}. \qquad (6.60)$$

Using the above equation, the relative distances of all the defect edge feature contrast curves are obtained, and with a contrast threshold of $\varepsilon_{ttr}^{Da_4} = 12.1598$, derived from $dis_{\max}$, the number of pixel points in the heat diffusion region of defect #a4 is found to be 24. After the connected domain statistics of the binary image, the number of pixels in defect area #a4 is 172. The number of defective feature pixels corresponding to defect #a4 is therefore the number of pixels in the binary image minus the number of pixels in the heat diffusion area, i.e., the actual number of pixels with the heat diffusion area removed is 148.


Table 6.10 Stitched infrared defect coordinate inversion results for specimen #1

Pixel number | Stitched image coordinates ($X$, $Y$) | Restored aligning image coordinates ($X$, $Y$)
17 | (611, 199) | (366, 209)
54 | (618, 196) | (376, 222)
88 | (623, 206) | (375, 209)

Fig. 6.30 TTR curves and convergence values of the defective region

According to the above process, all the defect features in specimen #1 are traversed, and the actual number of damage pixels and the damage area of each damage region of specimen #1 are obtained. The number of pixels in each damage area of specimen #1, together with the thermal diffusion area contrast threshold, is given in Table 6.11. Further, a proportional conversion relationship between the actual damage area and the number of pixels in the reconstructed thermal image is established from the ratio of the specimen area to the frame area of the reconstructed thermal images. The number of pixels in each damage region before removing the heat diffusion pixels is converted to the specimen area $S_2$, and the area difference $\Delta_1 = |S_1 - S_2|$ is calculated with respect to the area parameter $S_1$ of the manually fabricated specimen. The extent of the damage area calculation error is reflected by the area difference percentage $\Delta_1 / S_1$. The parameters are shown in Table 6.12. Finally, the actual number of pixels in each damage area after removing the heat diffusion pixels is converted to the specimen area $S_3$. The change of the area difference $\Delta_2 = |S_1 - S_3|$ and of the area difference ratio $\Delta_2 / S_1$ reflects the improvement of the positioning results by the inverse heterogeneous damage positioning analysis. The parameters are shown in Table 6.13.
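As a worked check of this proportional conversion (the per-pixel scale below is inferred from the tabulated values, not stated explicitly by the authors): Tables 6.12 and 6.13 imply a constant scale of $S_2/\text{pixels} \approx 67.19 / 172 \approx 0.3906\ \text{mm}^2$ per pixel, i.e., a pixel pitch of about $0.625\ \text{mm}$, so that for defect #a4 the corrected area is $S_3 \approx 148 \times 0.3906 \approx 57.81\ \text{mm}^2$.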


Table 6.11 Calculation results of defective heat diffusion areas of specimen #1

Defect mark | Pixels at the edge of the defective area | Defective area pixels (before removing heat diffusion pixels) | Pixels belonging to the thermal diffusion region | Defective area pixels (after removing heat diffusion pixels) | Thermal diffusion area contrast threshold
#a1 | 19 | 8 | 3 | 5 | 3.6793
#a2 | 54 | 48 | 8 | 40 | 8.1545
#a3 | 72 | 105 | 13 | 92 | 6.1089
#a4 | 100 | 172 | 24 | 148 | 12.1598
#a5 | 655 | 3976 | 53 | 3923 | 66.5890
#a6 | 677 | 2250 | 321 | 1929 | 15.8794
#a7 | 822 | 6733 | 1482 | 5251 | 74.9729
#a8 | 1103 | 14 | 6 | 8 | 83.3178

Table 6.12 Comparison table of defect area calculation results of specimen #1 (before removing heat diffusion area)

Defect mark | Defective actual area $S_1$/mm² | Pixels corresponding to the actual area | Number of defective area pixels | Defective region area $S_2$/mm² | Defect area difference $\Delta_1 = |S_1 - S_2|$ | Difference percentage $\Delta_1 / S_1$ (%)
#a1 | 3.00 | 7.68 | 8 | 3.13 | 0.13 | 4.17
#a2 | 16.56 | 42.39 | 48 | 18.75 | 2.19 | 13.22
#a3 | 37.50 | 96.00 | 105 | 41.02 | 3.52 | 9.38
#a4 | 59.70 | 152.83 | 172 | 67.19 | 7.49 | 12.54
#a5 | 1500.00 | 3840.00 | 3976 | 1553.13 | 53.13 | 3.54
#a6 | 750.00 | 1920.00 | 2250 | 878.91 | 128.91 | 17.19
#a7 | 2100.00 | 5376.00 | 6733 | 2630.08 | 530.08 | 25.24
#a8 | 4.50 | 11.52 | 14 | 5.47 | 0.97 | 21.53

As can be seen from Tables 6.11, 6.12 and 6.13, for the damage area #a4 of specimen #1 analyzed above, before removing the heat diffusion pixels the calculated damage area was 67.19 mm², and the difference from the actual specimen was 7.49 mm², or 12.54%. After removing the heat diffusion pixel area, the calculated actual damage area was 57.81 mm², and the area difference from the actual specimen was 1.89 mm², or 3.16%. It can be seen that the region positioning based on the inverse heterogeneous source of the stitched thermal reconstruction image has an obvious effect.

6.5 Summary

227

Table 6.13 Comparison table of defect area calculation results of specimen .#1 (after removing the heat diffusion area) Defect mark Defective Defective Number of Defective Defect area Difference actual area region area defective region area difference percentage 2 2 Δ area pixels . S3 /mm2 after . S1 /mm . S2 /mm . 2 (%) S1 removal of heat diffusion pixels .Δ2 = |S1 − S3 | #a1 #a2 #a3 #a4 #a5 #a6 #a7 #a8

3.00 16.56 37.50 59.70 1500.00 750.00 2100.00 4.50

3.13 18.75 41.02 67.19 1553.13 878.91 2630.08 5.47

5 40 92 148 3923 1929 5251 8

1.95 15.63 35.94 57.81 1532.42 753.52 2051.17 3.13

1.05 0.93 1.56 1.89 32.42 3.52 48.83 1.38

34.90 5.65 4.17 3.16 2.16 0.47 2.33 30.56

However, due to the possibility of judging a small number of pixel values of the actual area as the heat diffusion area when judging the TTR curve, the number of pixel points where the defects appear to end up is less than the number of pixel points corresponding to the actual area. In the specimen .#1, as shown in Tables 6.12 and 6.13, the defect areas .#a1 and .#a8 showed an increase in the difference between the calculated value of the removed thermal diffusion area and the value before removal. This is due to the smaller area of defect areas .#a1 and .#a8, which leads to a smaller number of overall pixel points in imaging as well as reconstruction. The thermal accumulation is lower than other defects and is less affected by thermal diffusion. Therefore, a small change can cause a larger percentage of difference, which is more influenced by the algorithm’s accuracy. However, in terms of the positioning and detection results of the conventional region, the algorithm described in this chapter extracts the defect information with a certain degree of accuracy, demonstrating the effectiveness of its algorithm.

6.5 Summary In this chapter, a detailed inspection of the stitched reconstructed thermal image is performed to obtain information on the location of their damaged areas. This chapter describes the whole-local view-based reconstructed thermal image defect positioning, and the inverse anisotropic source-based reconstructed thermal image defect positioning methods, respectively. Among them, the whole-local view-based

228

6 Defects Positioning Method for Large Size Specimen

method is based on the calculation and labeling of the center of mass of the damage area of the stitched image, and guides the accurate second detection to obtain an accurate reconstructed thermal image of the damage area. The accurate quantitative analysis of defects is achieved based on the second detection results. The inverse heterogeneous source-based method determines the overlapping regions and their locations in the stitched reconstructed thermal images based on the stitching relationship, and returns them to the respective original infrared video streams for further calculation. The method designs a TTR curve similarity measure to distinguish the thermal diffusion region of defects in the stitched reconstructed thermal image, and finally obtains the number of actual defect pixels in the characteristic defect region to analyze and eliminate the thermal diffusion region of defects. Accurate damage areas are obtained. In the next section, the quantitative analysis of defects in the stitched reconstructed thermal images will be performed for the defect positioning results using a second detection or an inverse heterogenous thermal profile similarity analysis.

References 1. Cheng, Y., Yin, C., Yang, X., Chen, K., Huang, X., Qiu, G., Wang, Y.: Method for quantitatively identifying the defects of large-size composite material based on infrared image sequence: Washington, DC: U.S. Patent and Trademark Office, 11,587,250, 2023-2-21, (2023) 2. Lei, G., Yin, C., Huang, X., Cheng, Y., Dadras, S.: Using an Optimal Multi-Target Image Segmentation Based Feature Extraction Method to Detect Hypervelocity Impact Damage for Spacecraft. IEEE Sensors Journal, 21(18), 20258-20272 (2021) 3. Recky, M, Leberl, F.: Windows detection using k-means in cie-lab color space. 2010 20th International Conference on Pattern Recognition. IEEE: 356-359 (2010) 4. Malkauthekar, M. D.: Analysis of Euclidean distance and Manhattan distance measure in Face recognition. Third International Conference on Computational Intelligence and Information Technology (CIIT 2013). IET, 503-507 (2013) 5. López, F., Valiente, J. M., Baldrich, R., Vanrell, M.: Fast surface grading using color statistics in the CIE Lab space. In Pattern Recognition and Image Analysis: Second Iberian Conference, IbPRIA 2005, Estoril, Portugal, Proceedings, Part II 2 (pp. 666-673). Springer Berlin Heidelberg (2005) 6. Hartigan, J. A., Wong, M. A.: Algorithm AS 136: A k-means clustering algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics), 28(1), 100-108 (1979) 7. Ahmed, M., Seraj, R., Islam, S. M. S.: The k-means algorithm: A comprehensive survey and performance evaluation. Electronics, 9(8), 1295 (2020) 8. Dougherty, E.: Mathematical morphology in image processing. CRC press (2018) 9. Dougherty, E. R.: An introduction to morphological image processing. SPIE. Optical Engineering Press (1992) 10. Soille, P.: Morphological image analysis: principles and applications. Berlin: Springer (1999) 11. Muda, L., Begam, M., Elamvazuthi, I.: Voice recognition algorithms using mel frequency cepstral coefficient (MFCC) and dynamic time warping (DTW) techniques. arXiv preprint arXiv1003.4083 (2010) 12. Müller, M.: Dynamic time warping. Information retrieval for music and motion, 69-84 (2007)

Chapter 7

Defect Edge Detection and Quantitative Calculation of Reconstructed Thermal Images

In Chap. 6, two methods are provided to obtain the defect localization information of stitched reconstructed thermal images. In this chapter, pixel-level edge detection, subpixel-level edge detection, and quantitative calculation of defect region parameters of reconstructed thermal images based on damage region markers are introduced, respectively. As shown in Fig. 7.1, in terms of defect quantification, fast and accurate quantification of defect features can visually represent the characteristic information of defects. This is indispensable in the process of damage assessment and defect structure analysis. The characteristic parameters of the defective region mainly include geometric parameters and morphological distribution parameters. The calculation results of the specific parameters will provide specific numerical references for assessing the damage of the specimen. Based on the location of the damaged region, the edge extraction of the defective region and the calculation of the parameters will be discussed.

7.1 Introduction In order to achieve automatic quantitative evaluation of defect morphological information of large-size inspection specimens of spacecraft, it is necessary to quantify regional parameters of defective regions. The quantification indexes can include edge contours, regional geometric morphological parameters, regional distribution parameters, etc. Edge detection is an important research element in reconstructed thermal image analysis. It mainly locates the boundary of objects in the image by the variation of image pixel values. In the analysis process of large-size reconstructed thermal images of spacecraft, edge detection plays a crucial role in achieving defect information extraction and analysis. Among the traditional edge detection algorithms, the © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 C. Yin et al., Infrared Thermographic NDT-based Damage Detection and Analysis Method for Spacecraft, https://doi.org/10.1007/978-981-99-8216-5_7

229

230

7 Defect Edge Detection and Quantitative Calculation …

Fig. 7.1 Defect edge detection and quantitative calculation for reconstructed thermal images

commonly used methods include various differential operators and composite operators. In addition, in edge detection, sub-pixel detection is an important technique to improve the accuracy of edge detection. Traditional edge detection algorithms usually can only detect edges at the pixel level, but cannot provide more accurate position estimation of edges. In contrast, sub-pixel detection techniques achieve more accurate edge location by detecting the distribution of image gray values [1]. Based on the pixel value statistics method for defect feature segmentation image after binarization, the feature parameters of each defective region are calculated to describe the geometric structure feature information and morphological distribution feature information of defects. By counting the number of binary pixels equal to ‘1’ in the connected domain to estimate the size of the defective region. The confidence interval of the size of the defective region can be obtained by converting the pixel ratio to the actual size through an empirical formula. This allows for the quantitative evaluation of the defect size. Morphological distribution characteristics parameters such as dispersion are used to describe the tightness and looseness of the defect shape. Roundness is used to describe the extent to which the defective region is close

7.2 Pixel-Level Edge Detection of Defective Regions …

231

to a theoretical circle, etc. These morphological characteristics parameters are used to describe the morphological distribution characteristics of the defect, thus realizing the automatic quantification function of the defect shape.

7.2 Pixel-Level Edge Detection of Defective Regions in Reconstructed Thermal Images Edges are one of the essential features of reconstructed thermal images and contain most of the information of reconstructed thermal images. The edges reflect the sudden changes in the local characteristics of the image, i.e., the places in the image where the grayscale changes are more dramatic. In the reconstructed thermal image, the junction between one type of defective region and another type of defective region is reflected in the image edge. After the reconstructed thermal image is segmented, a more accurate picture of the defective region is obtained. To obtain a further contour of the defective region of the reconstructed thermal image, edge detection is required for the segmented image. After determining the edge location, the location parameters and morphological parameters of the defect are calculated statistically for the defect area and the edge contour [2]. The location and morphology quantification data of the spacecraft damage defects are obtained to complete the damage assessment and defect structure analysis process [3, 4]. The essence of pixel-level edge detection of defective regions of reconstructed thermal images is to use some algorithm to extract the junction line between the damaged target features and the background in the image. The variation of grayscale image corresponding to the reconstructed thermal image can be reflected by the gradient of the image’s grayscale distribution, as shown in Fig. 7.2. The corresponding grayscale curve . f (x) produces large abrupt changes where the grayscale changes drastically. This is reflected in the differential curve of the grayscale curve, which

Fig. 7.2 Illustration of the principle of edge detection

232

7 Defect Edge Detection and Quantitative Calculation …

displays a more significant spike. Therefore, we can use the differential information of image localization to obtain the edge detection operator. The general criteria of edge detection for reconstructed thermal images include: (1) detecting the edges of the damaged area with a low error rate, which means that as many edges as possible in the reconstructed thermal image need to be captured as accurately as possible. (2) The detected edges of the damaged area should be as close as possible to the actual edges in the reconstructed thermal image. (3) The edges of a given damaged area in the reconstructed thermal image should be marked only once, and no possible image noise should be detected as edges. The classical edge detection method involves constructing an edge detection operator for a certain small neighborhood of pixels in the original reconstructed thermal image. Common first-order differential edge detection operators such as Roberts, Sobel, and Prewitt operators can perform edge extraction more effectively in scenarios with a uniform background. Based on the first-order differential, there are second-order differential operators such as Laplace operator, LoG operator, and difference of Gaussian (DOG) operator. A composite operator that combines multiple image signal processing tools such as Canny operator is more ideal for the acquisition of defective edge pixels by selecting appropriate Gaussian filtering parameters and appropriate thresholds, which will be described separately in this subsection.

7.2.1 Pixel-Level Edge Detection Based on Differential Operators For reconstructed thermal images, Roberts operator, Prewitt operator and Sobel operator are three commonly used differential edge detection operators. All three operators are based on first-order derivatives, and the gradient matrix of the reconstructed thermal image is first calculated by a suitable differential operator. That is, the gradient amplitude of the reconstructed thermal image is recorded, and then the gradient matrix is binarized. The points with gradient amplitude greater than a threshold are marked as edges. Thus, the edges of the reconstructed thermal image are obtained, and we can also optionally refine the edges to a pixel width. (1) Roberts edge detection operator [5]. The Roberts edge detection operator is based on the principle that the difference in any pair of mutually perpendicular directions is available to calculate the gradient. And the difference of adjacent pixels in the diagonal direction is used for gradient magnitude detection. And its performance of detecting edges in the horizontal and vertical directions is better than that of edges in the diagonal direction. Though its detection and positioning accuracy is relatively high, but it is sensitive to noise. The computational process of Roberts operator can be described using a convolutional template, as shown in Fig. 7.3.

7.2 Pixel-Level Edge Detection of Defective Regions …

233

Fig. 7.3 Roberts edge detection operator

Fig. 7.4 Prewitt edge detection operator

Fig. 7.5 Sobel edge detection operator

(2) Prewitt edge detection operator [6]. The Roberts operator is a .2 × 2 detection template, and the contours of the reconstructed thermal image details can be obtained by the calculation of the detection operator template. Considering that the Roberts operator is of even order, it is not convenient to find the centroid to calculate the gradient and is affected by noise. The size of the Prewitt operator template is .3 × 3, symmetric about the centroid, carrying more information about the reconstructed thermal image and reducing the effect of noise [7], as shown in Fig. 7.4. (3) Sobel edge detection operator [8]. The Sobel edge operator is an odd-order omnidirectional edge template operator similar to the Prewitt edge detection operator.The Sobel operator introduces weighted local averaging to weight the effect of pixel position on the reconstructed thermal image, which can better suppress (smooth) the noise and reduce the blurring of the reconstructed thermal image edges, as displayed in Fig. 7.5. (4) LoG edge detection operator [9]. The edges of reconstructed thermal images can be expressed not only by using the first-order derivatives but also by using the second-order derivatives. Corresponding to the characteristics of the pixel distribution of grayscale images, the edges of the image are the extreme value points of the first-order derivatives of the grayscale distribution, which correspond to the over-zero points of the second-order derivatives of the grayscale distribution. That is, finding the zero-crossing point of the secondorder derivative of the grayscale distribution of the reconstructed thermal image will lead to the exact edge point of the defective region in the reconstructed thermal image. The common edge detection operators based on second-order differentiation are Laplace operator, LoG operator, and DOG operator. Among them, the Laplace oper-

234

7 Defect Edge Detection and Quantitative Calculation …

Fig. 7.6 .5th order LoG operator template

ator template characterizes the second-order differentiation of the grayscale distribution of the reconstructed thermal image. Therefore, the Laplace operator has an unacceptable sensitivity to noise. At the same time, Laplace operator does not provide the edge direction information of the grayscale distribution of the reconstructed thermal image and cannot detect the direction of the edge of the defect region. Although Laplace operator addresses the difficulty of determining the threshold value of the first-order differential operator, it cannot overcome the interference from noise. Therefore, the LoG operator will be selected for introduction here [9]. The LoG operator combines the Laplace operator with Gaussian low-pass filtering. It is done by first smoothing the reconstructed thermal image. It is handled with by applying Gaussian operator to suppress the noise, and then using Laplace operator on the smoothed image. Therefore, the LoG operator is equivalent to: Gaussian smoothing .+ Laplace operator. The common .5th order LoG operator template is shown in Fig. 7.6. The two-dimensional Gaussian function is expressed by x 2 + y2 − 1 2 . Gauss (x, y) = · e 2σ . 2π σ 2

(7.1)

Based on the properties of the convolution operation mentioned before, the LoG operator is represented as .

LoG = ∇ 2 Gauss (x, y) =

∂ 2 Gauss (x, y) ∂ 2 Gauss (x, y) + . ∂x2 ∂ y2

(7.2)

Combining Eq. (7.1) with Eq. (7.2), one has x 2 + y2 x 2 + y 2 − 2σ 2 − 2 · e 2σ . . LoG = ∇ Gauss (x, y) = 2π σ 6 2

(7.3)

Among them, different templates can be built depending on the parameter .σ and the .3σ principle. .σ is a scale parameter, which is introduced in the reconstructed

7.2 Pixel-Level Edge Detection of Defective Regions …

235

Fig. 7.7 Different values of .σ , and different Gaussian blurred images and edge detection results

thermal image processing as well as the creation of a multi-scale space. Different values of .σ affect the result of edge detection. The larger .σ is, the more blurred the image is the better the noise filtering effect, but part of the defective edge details will be filtered out. The smaller .σ is, the opposite effect occurs: more details of the reconstructed thermal image will be retained, but the image detection is more vulnerable to noise interferences. As shown in Fig. 7.7, when the parameter .σ is gradually increased from 1.0 to 10.0, the grayscale image of the reconstructed thermal image gradually becomes blurred and the interference of background noise gradually decreases. Correspondingly, the detail is also lost in the subsequent edge detection of the damaged region. When the parameter .σ becomes too large, as shown in the last column Fig. 7.7 with .σ = 10.0, the distribution of grayscale is disturbed, which in turn generates detail points that should not exist in the process of edge detection in the damaged region, i.e., a distortion phenomenon is generated. If the Gaussian blurring as well as edge extraction algorithms are used directly on the stitched reconstructed thermal images, the desired edge extraction results are often not obtained because of the contradiction between the complexity of the image feature regions and the blurring. Thus, the edge detection is to be performed for the stitched reconstructed thermal image obtained as described in the previous section, for the defect feature segmentation extraction result . Pseg . A more complex and accurate Canny operator is considered to calculate the edges to achieve edge extraction of the defective feature regions in the stitched reconstructed thermal image. Then, use the edges of the regional contours are used to guide the target judgment of the thermal diffusion region.

236

7 Defect Edge Detection and Quantitative Calculation …

7.2.2 Pixel-Level Edge Detection Based on Canny Composite Operator The Canny edge detection operator is a composite operator that can extract useful structural information from different visual objects of a reconstructed thermal image and greatly reduce the amount of data to be processed [10]. Among the currently used edge detection methods, the Canny edge detection algorithm is with a strictly defined process that better satisfies the three criteria for edge detection and has the advantage of a simple implementation process. In this subsection, high and low thresholds will be used to obtain the Canny operator for strong and weak edges, and edge extraction will be performed on the segmented result of the reconstructed thermal image to extract the resultant image . Pseg . The Canny operator enables better access to detailed edge information of the overall defect visualization image of the detected object [11]. Firstly, a Gaussian filter is used to process the reconstructed thermal stitching defect segmentation image. As in Eq. (7.4), a Gaussian filter is employed to filter out the noise points present in the reconstructed thermal stitching segmentation result image and smooth the image to reduce the edge extraction error rate of the reconstructed thermal image )⟨ ( − x 2 + y 2 2σ 2 2π σ . G f (x, y, σ ) = e .

(7.4)

The image is obtained by convolution operation using Eq. (7.4) with the reconstructed thermal stitching defect segmented image . P12 (x, y) .

P1 (x, y) = G f (x, y, σ ) ∗ P12 (x, y) .

(7.5)

The first-order partial derivatives are calculated separately to obtain the.x-direction gradient and the . y-direction gradient, respectively [10]. .

i x = P1 (x, y) − P1 (x + 1, y + 1) , i y = P1 (x + 1, y) − P1 (x, y + 1) ,

(7.6)

where the gradient amplitude is: | | i (x, y) = |i x | + |i y | .

.

(7.7)

The 8-neighborhood pixel points in the gradient magnitude Eq. (7.7) are compared with the magnitude .i a (x, y) corresponding to the intersection of the gradient direction at that point. If .i (x, y) > i a , the value of .i (x, y) is kept; if .i (x, y) ≤ i a , the magnitude of .i (x, y) is set to 0. The image . P2 (x, y) is obtained after using non-extreme suppression. The non-extreme value suppressed reconstructed thermal stitched defect segmenhigh tation image . P2 (x, y) is processed using a high threshold .ξcanny and low threshold

7.3 Sub-Pixel Level Edge Detection of Defective Regions in Reconstructed …

237

ξ low . The purpose of setting two thresholds is that a high threshold cannot obtain closed feature edges, and the presence of a low threshold can be used to deal with this problem. When the endpoints of the processed edges with a high threshold appear, the low-threshold points are searched in the neighborhood of the endpoints until all edges are closed. The following three cases are obtained for this step: If a pixel of the reconstructed thermal stitched defective segmentation image high . P2 (x, y) corresponds to position . Pp > ξcanny , where . Pp is a pixel point of the defective segmentation image . P2 (x, y), then this pixel point is retained as an edge pixel of the segmentation results. If a pixel of the reconstructed thermal stitched defective low , the pixel is not an segmented image . P2 (x, y) corresponds to position . Pp < ξcanny edge pixel of the segmentation result. If a pixel of the reconstructed thermal stitching defect segmentation image high low . P2 (x, y) corresponding to a position satisfies the condition .ξcanny < Pp < ξcanny , the 8-neighborhood of the pixel point is further discerned whether there is a point high larger than the .ξcanny , and if so, the point is retained as an edge. . canny

7.3 Sub-Pixel Level Edge Detection of Defective Regions in Reconstructed Thermal Images In the process of camera imaging, according to the imaging principle of the camera imaging sensor, the sensor discretizes the actual acquisition scene into the acquired image data. Therefore, in the final image data, each pixel represents only a nearby color. In fact, the area located between two pixels is not blank in the actual scene, and these pixels, which exist between two actual physical pixels, are called ’sub-pixels’. Subpixels are able to subdivide the microscopic gaps between pixels to obtain subpixel edges that are less than 1 pixel unit in the resulting image. This results in more accurate edge detection results [12]. As shown in Fig. 7.8, when the defect area is imaged on the pixels of the sensor at the same time, if the whole-pixel approach to edge localization is still used, the true edge of the defect will be localized to pixel point B. After sub-pixel localization, the defect edge contour can be accurate to a fractional pixel, as shown in edge point A in Fig. 7.8. Compared with the pixel edge point B, the sub-pixel edge point A is significantly closer to the true edge of the defect, and the sub-pixel-based defect localization is more accurate. Moreover, the defect edge contour obtained by subpixel fitting is a continuously changing curve, which can solve the problem that the discrete defect pixel contour will be distorted after enlargement and affect the determination of the defect area. Therefore, defect localization by sub-pixel is of great significance for defect identification. The commonly used methods for sub-pixel localization are sub-pixel fitting method based on edge pixel positions, and methods based on image moments. They can all improve the accuracy of defect localization to different degrees, but different

238

7 Defect Edge Detection and Quantitative Calculation …

Fig. 7.8 Actual defect profile and defect pixel localization

detection algorithms have different applicability, noise immunity and localization accuracy. In this subsection, each of these approaches will be introduced.

7.3.1 Sub-pixel Fitting Method Based on Edge Pixel Position Compared with the method of locating defects by calibrating their external rectangles, the sub-pixel fitting method can improve the accuracy of defect localization by subdividing the microscopic gaps between pixels and obtaining continuous defect edge profile curves based on the discrete defect boundary pixel features. Considering the irregular shape of the defect profile formed by most hypervelocity impacts, a cubic Bézier curve can be used to segment and fit the closed defect profile shape. The Bézier curve [13, 14] is uniquely defined by a set of control polygons consisting of control points. The starting and ending control points of the control polygon are in the Bézier curve, and the remaining control points control the Bézier curve trend. The relationship between the Bézier curve . B (t) and its control polygon can be considered as follows: the control polygon . B0 B1 · · · Bk is an outline of the general shape of . B (t), where . B (t) is an approximation to the control polygon . B0 B1 · · · Bk , as shown in Fig. 7.9. It shows that the mathematical expression for the .tth Bézier curve is given in Eq. (7.8) t ∑ .s ( pi ) = Bn Wn,t ( pi ), 0 ≤ pi ≤ 1, (7.8) n=0

7.3 Sub-Pixel Level Edge Detection of Defective Regions in Reconstructed …

239

Fig. 7.9 Bézier curve describing the defect profile schematic

where .s ( pi ) is the interpolation point at the parameter value . pi , .t is the order of the Bézier curve, and . Bn is the .nth control point, .Wn,t ( pi ) is a Bernstein polynomial t ∑ with . Wn,t ( pi ) ≡ 1, and the .Wn,t ( pi ) is defined as shown in Eq. (7.9). n=0

t! · (1 − pi )t−n · pin = Ctn · (1 − pi )t−n · pin , 0 ≤ pi ≤ 1. n! (t − n)! (7.9) The Bézier curve advances along its control polygon and its geometric properties are influenced by the location of the control points. When there are two control points, the Bézier curve is a linear equation. When there are three control points, the Bézier curve is a quadratic curve. The number of Bézier curves increases with the number of control points. And when the number increases, the derivative zeros may increase, causing an increase in the extreme value points of the curve, curve oscillation, and a decrease in smoothness. Therefore, in this section, the two-point fitting method is chosen in the process of fitting the defect contour. First, the boundary pixel points are fitted by using two segments of the cubic Bézier curve, and then the segmented curves are connected to finally form the closed defect contour. After the boundary tracking algorithm detects the . K boundary pixel points of the defect, it may be noted that these boundary pixel points are . L i (xi , yi ), .i = 0, 1, . . . , K . To complete the cubic Bézier curve fitting of the defect contour, first connect any two adjacent boundary pixel points with a cubic Bézier curve segment in order, and finally connect the pixel points . L 0 and . L K with a cubic Bézier curve as well. In the process of connect, the Bézier curves between adjacent boundary pixel points need to be guaranteed to be continuous and smooth. Under arbitrary conditions, three adjacent boundary pixel points . L i , . L i+1 and . L i+2 are fitted to obtain two cubic Bézier curves . B1 ( p) and . B2 ( p). As shown in Fig. 7.10, . B11 and . B12 are the control points of curve . B1 ( p), and . B21 and . B22 are the control points of curve . B2 ( p), as listed in Fig. 7.11. .

Wn,t ( pi ) =

240

7 Defect Edge Detection and Quantitative Calculation …

Fig. 7.10 Three adjacent defect boundary pixel point fits: Individual segment fitting

Fig. 7.11 Three adjacent defect boundary pixel point fits: Overall smooth fit

As shown in Fig. 7.10, in the process of two-by-two fitting of boundary pixel points, the connection points of Bézier curves between adjacent boundary pixel points are the same boundary points. Therefore, the defect contour obtained by segmental fitting can directly satisfy the continuity condition. In the process of fitting the defect contour, it is also necessary to ensure that the Bézier curves of adjacent defect boundary pixel points are smoothly connected. According to the characteristics of the Bézier curve, the Bézier curves . B1 ( p) and . B2 ( p) obtained by fitting the three adjacent defect boundary pixels in Fig. 7.10 with two segments are smoothly connected at the connection point, and it is necessary to ensure that . B12 , . L i+1 and . B21 are co-linear. As displayed in Fig. 7.10, since . B12 , . L i+1 and . B21 are co-linear, the tangent slopes of the left and right sides of the connection point F of the two Bézier curves . B1 ( p) and . B2 ( p) are the same. Therefore, the same tangent slope of the control points of the different Bézier curves on both sides near the connection point is a sufficient condition for the smooth connection of the two Bézier curves obtained by two-by-

7.3 Sub-Pixel Level Edge Detection of Defective Regions in Reconstructed …

241

Fig. 7.12 Segmented continuous fitting of closed defect profile curves

two segmented fitting of the three adjacent boundary pixel points. Based on the above principle of smooth connection of curve segments, a schematic diagram of the smoothly connected closed defect contour curve obtained by segmental fitting is shown in Fig. 7.12. As shown in Fig. 7.12, the three Bézier curve segments connecting the defect boundary pixel points . L i and . L i+1 are defined by four points . L i , . L i+1 , . Ri and . E i . Where . L i is the starting point of the Bézier curve segment, . L i+1 is the end point of the Bézier curve segment, . Ri is the internal control point close to . L i , and . E i is the internal control point close to . L i+1 . . L i and . L i+1 are defect boundary pixel points, and after boundary extraction, the exact locations of . L i and . L i+1 are known. Therefore, the starting and ending points of each Bézier curve segment are known, and only the internal control point location is unknown. According to the condition that two adjacent Bézier curves obtained by fitting the three adjacent defect boundary pixel points shown in Fig. 7.10 are smoothly connected, the tangents of the curves are made over each defect boundary pixel point . L i , respectively. Then, the control point . E i−1 located in front of . L i and the control point . K i behind . L i are taken to be on the tangent line made at the point . L i , so that the three points . E i−1 , . L i and . Ri can be guaranteed to be co-linear and the Bézier curves located on both sides of the defect boundary pixel point . L i are smoothly connected at . L i . Since the true tangent direction at the defect boundary pixel point is not easily available, the tangent direction of the over-edge pixel point . L i is approximated here in this section as the direction parallel to the vector . L i−1 L i+1 , and the approximate tangent direction is used to replace the true tangent direction. Based on this, the coordinates of the control points . Ri and . E i , .i = 0, 1, . . . , K are shown in the following equations .

[ ] Ri = xi + αi (xi+1 − xi−1 ) , yi + αi (yi+1 − yi−1 ) ,

(7.10)

242

7 Defect Edge Detection and Quantitative Calculation … .

[ ] E i = xi+1 − βi (xi+2 − xi ) , yi+1 + βi (yi+2 − yi ) ,

(7.11)

where .αi and .βi are two arbitrary given positive numbers. .(xi , yi ) is the position parameter of the defect boundary pixel point . L i . As the . K boundary pixel points of the defect are detected by the boundary tracking algorithm as . L i (xi , yi ), .i = 0, 1, . . . , K . Therefore, pixel points . L −1 , . L K +1 , and . L K +2 do not exist in the set of defect boundary pixel points, and their corresponding position parameters .(x−1 , y−1 ), .(x K +1 , y K +1 ), and .(x K +2 , y K +2 ) do not exist. Considering that the defect contour is closed, the value of .(x K , y K ) is used as the value of .(x−1 , y−1 ), the value of .(x0 , y0 ) as the value of .(x K +1 , y K +1 ), and the value of .(x1 , y1 ) as the value of .(x K +1 , y K +1 ). After coarse pixel localization, . K defect boundary pixel points are extracted and . K defect boundary pixel points form the set of pixel points . L i , .i = 0, 1, . . . , K . Cubic Bézier curves are fitted to the neighboring defect boundary pixel points . L i and pixel point . L i+1 two by two, respectively. Since the maximum value of .i is . K , the pixel point . L K +1 does not exist. Since the maximum value of .i is . K , the pixel point . L K +1 is not present. Considering that the defect contour is a closed graph, a cubic Bézier curve is fitted using . L 0 as . L K +1 with . L K . The specific process of fitting the defect boundary pixel points . L i and . L i+1 by using the cubic Bézier curve to obtain the sub-pixel defect closed contour is listed in the following. (1) Between the pixel points . L i and . L i+1 of the edge contour to be fitted, construct control points by the following equations. [ ] Ri = xi + αi (xi+1 − xi−1 ) , yi + αi (yi+1 − yi−1 ) , 0 < i < K , R0 = [x0 + α0 (x1 − x K ) , y0 + α0 (y1 − y K )] , i = 0, [ ] R K = x K + α K (x0 − x K −1 ) , y K + α K (y0 − y K −1 ) , i = K . .

(7.12)

(2) For the defect boundary pixel points . L i and . L i+1 , cubic Bézier curve fitting is performed and the expression of the fitted .Wii+1 ( p) is obtained as shown in Eq. 7.13. Wii+1 ( p) = L i (1 − p)3 + 3Ri p(1 − p)2 + 3E i p 2 (1 − p) + L i+1 p 3 , p ∈ [0, 1] (7.13) (3) All the pixel points in the defect boundary pixel point set . L i , .i = 0, 1, . . . , K are traversed, and two-by-two fitting is performed to obtain the fitted curves between adjacent pixel points, respectively. When .i = K , cubic Bézier curve fits were performed using . L 0 as . L K +1 with . L K . (4) All the defect boundary pixel points are traversed to obtain . K cubic Bézier curves. . K Bézier curves are smoothly connected to obtain sub-pixel defect closure contours. Finally, the defect contour is obtained based on the fitted sub-pixel fine localization results. .

7.3 Sub-Pixel Level Edge Detection of Defective Regions in Reconstructed …

243

7.3.2 Sub-pixel Detection Method Based on Image Zernike Moments Moment methods are hot in the research of sub-pixel level edge detection, such as using image grayscale discretization, Zernike operator moments, Franklin operator orthogonal moments, etc. Among these methods, Zernike orthogonal moments are widely used. Zernike moments are orthogonalized functions of Zernike polynomials, which have the rotational invariance of general moments. In other words, the calculation of the moments remains unchanged after the image is rotated [15]. This property holds for Zernike moments of any order. In this chapter, the Zernike moment model of the image is calculated, and the gray value and gray distribution of its edges and background are analyzed, and the sub-pixel coordinates of the image edges are found by combining the vertical distance from the center of the unit circle to the edge of the image and the angle of the edge vertical line. For the damage target region with completed defect location labeling in Chap. 6, convert the grayscale image into a binary image using Gaussian filtering to smooth the image and Otsu double-threshold segmentation. The foreground background separation binarization of the image is realized. Finally, the Zernike operator convolution template is established, and the convolution operator template is calculated by .7 × 7 pixel unit circle [15]. Using the convolution template to traverse the whole image, the sub-pixel coordinates of the image edges are obtained by the calculation of trigonometric functions, and they are labeled and represented in the image. Specifically, an ideal step-edge model is created on the unit circle, and the parameters in the figure represent the distribution of the grayscale values of the pixels at this edge. Where .k is the edge step gray height, .h is the background gray height, .l is the distance from the pixel center to the edge, and .ϕ is the angle between the pixel center and the edge vertical line and the axis. For the continuous function . f (x, y), as in Eq. (7.14), its . p + q order spatial moments are defined as: {{ .

M pq =

x p y q · f (x, y) dxdy.

(7.14)

For edge pixels, in order to reduce the dimensionality of the problem to simplify the computation, this ideal edge model is rotated by an angle of .ϕ, and the spatial moments after the rotation are: .

M 'pq =

p q ∑ ∑

C rp Cqs (−1)q−s (cos ϕ) p−r +s (sin ϕ)q+r −s M p+q−r −s,r +s ,

(7.15)

r =0 s=0

where .C rp and .Cqs are the number of permutations. Based on the above equations, as in Eqs. (7.16) and (7.19), the rotational moments of each order can be expressed as: .

M '00 = M00 ,

(7.16)

244

7 Defect Edge Detection and Quantitative Calculation …

M '10 = M10 cos ϕ + M01 sin ϕ,

(7.17)

M '20 = M20 cos2 ϕ + M11 sin 2ϕ + M02 sin2 ϕ,

(7.18)

M '01 = M10 sin ϕ + M01 cos ϕ.

(7.19)

.

.

.

The angle .ϕ of rotation can be obtained by combining the above equations. ϕ = tan

.

−1

(

M01 M10

) (7.20)

For the ideal edge model, the spatial moments after rotation are related to the edge parameters as. ' . M pq

{ =2

1 −1

√ 1 − x2

{

{ hx y dxdy + 2

1

{

p q

l

0



1 − x2

kx p y q dxdy

(7.21)

0

From the above equation, the relationship between the actual model parameters and the spatial moments can be obtained as. √ kπ − ksin−1 l − kl 1 − l 2 , 2 /( )3 1 − l2 2h ' , . M 10 = 3 / )3 k √ k k ( h k = π + π + l 1 − l 2 − l 1 − l 2 − sin−1l. 4 8 2 4 4 .

.

M '20

M '00 = hπ +

(7.22)

(7.23) (7.24)

The parameters of the ideal edge pixel model are obtained by combining the above three equations: 4M '20 − M '00 .l = , (7.25) 3M '10 3M ' k = /( 10 ) , 3 2 1 − l2

(7.26)

.

.

h=

) ( √ 2M '00 − k π − 2sin−1l − 2l 1 − l 2 2π

.

(7.27)

Further, by using these model parameters with the center coordinates of the edge pixels, the sub-pixel level edge parameters of the edges can be obtained.

7.3 Sub-Pixel Level Edge Detection of Defective Regions in Reconstructed …

245

Fig. 7.13 Unit circle step edge model before and after rotation

For a specific reconstructed thermal image, the two-dimensional Zernike moments of the image density function . f (ρ, θ ) are defined as: .

{ { n + 1 1 2π Vnm (ρ, θ ) f (ρ, θ )ρdρdθ π 0 0 { { n + 1 1 2π = Rnm (ρ) ejmθ f (ρ, θ )ρdρdθ, π 0 0

Z nm =

(7.28)

where .n is a positive integer or zero. .m is a positive or negative integer and satisfies the condition restriction that .n − |m| is an even number and .|m| ≤ n. .ρ is the vector length √ from the origin of the unit sampling circle to the point .(x, y). They satisfy .ρ = x 2 + y 2 and .−1 ≤ x, y ≤ 1. .θ is the angle between the vector and the xaxis; .Vnm (ρ, θ ) is an orthogonal complex polynomial which satisfies .Vnm (ρ, θ ) = Rnm (ρ) · ejmθ . . Rnm (ρ) denotes the radial polynomial of the point .(x, y), if based on a digital image characterized by discrete pixel points, which is defined as follows:

.

Rnm (ρ) =



n−|m|/ 2 s=0

(−1)2 (n − s)!ρ n−2s ) ( ). n−|m| s! n+|m| − s ! − s ! 2 2 (

(7.29)

Based on the ideal edge model shown in Fig. 7.13, rotating the template at the edge of the picture by an angle so that the edge is perpendicular to the x-axis, we can obtain: {{ . (7.30) f ' (x, y) ydyd x = 0, 2 2 x +y ≤1 where . f ' (x, y) is the distribution equation of the image after rotation, at which point the orthogonal complex polynomial can be written as:

246

7 Defect Edge Detection and Quantitative Calculation …

.

V00 = 1, V11 = x − j y, V20 = 2x 2 + 2y 2 − 1.

(7.31)

The relationship between the Zernike moment . Z nm corresponding to the original image data . f (x, y) and the Zernike moment . Z 'nm corresponding to the rotated image ' . f (x, y) can be written as: Z '00 = Z 00 , ' jφ . Z 11 = Z 11 · e , (7.32) ' Z 20 = Z 20 .

.

One can further calculate based on the unit circle model distribution function f ' (x, y) with Zernike moments:

.

Z '00 = 2h

{

−1

= hπ + ' . Z 11

' . Z 20

{{ =

{ dxdy + 2k

√ 1 − x2

{

dxdy d

0

1

0

√ kπ − ksin−1 (l) − kl 1 − l 2 , 2

(7.33)

)3 2 ( 2k 1 − l 2 / , f (x, y) (x − jy) dxdy = 3 x 2 + y2 ≤ 1

(7.34)

)3 2 ( ) 2kl 1 − l 2 / . f (x, y) x + y − 1 dxdy = 3 x 2 + y2 ≤ 1

(7.35)

{{ =

√ 1 − x2

{

1

'

(

'

2

2

Combining Eqs. (7.33), (7.34) and (7.35) and substituting into . Z '00 = Z 00 and = Z 20 , the parameters of the unit circle model based on Zernike moments can be obtained as follows [16]: Z 20 .l = , (7.36) Z '11

' . Z 20

k=

.

3Z '11

)3 ( 2 · 1 − l 2 /2

,

√ + ksin−1 (l) + kl 1 − l 2 Z 00 − kπ 2 . .h = π

(7.37)

(7.38)

Using the above parameters, the sub-pixel positions of the image edges can be obtained as: [ '] [ ] [ ] x x cos φ = +l . . (7.39) y' y sin φ

7.3 Sub-Pixel Level Edge Detection of Defective Regions in Reconstructed …

247

Table 7.1 . Z 00 template for the Zerkine moment operator . Z 00

0 0.0287 0.0686 0.0807 0.0686 0.0287 0

0.0287 0.0815 0.0816 0.0816 0.0816 0.0815 0.0287

0.0686 0.0816 0.0816 0.0816 0.0816 0.0816 0.0686

0.0807 0.0816 0.0816 0.0816 0.0816 0.0816 0.0807

0.0686 0.0816 0.0816 0.0816 0.0816 0.0816 0.0686

0.0287 0.0815 0.0816 0.0816 0.0816 0.0815 0.0287

0 0.0287 0.0686 0.0807 0.0686 0.0287 0

0.015 0.0466 0.0466 0.0466 0.0466 0.0466 0.015

0 0.0224 0.0573 0.069 0.0573 0.0224 0

Table 7.2 . Z 11 template for the Zerkine moment operator (real part) .Re [Z 11 ]

0

.−0.015

.−0.019

.−0.0224

.−0.0466

.−0.0233

.−0.0573

.−0.0466

.−0.0233

.−0.069

.−0.0466

.−0.0233

.−0.0573

.−0.0466

.−0.0233

.−0.0224

.−0.0466

.−0.0233

0

.−0.015

.−0.019

0 0 0 0 0 0 0

0.019 0.0233 0.0233 0.0233 0.0233 0.0233 0.019

Table 7.3 . Z 11 template for the Zerkine moment operator (imaginary part) .Im [Z 11 ]

0

.−0.0224

.−0.0573

.−0.069

.−0.0573

.−0.0224

0

.−0.015

.−0.0466

.−0.0466

.−0.0466

.−0.0466

.−0.0466

.−0.015

.−0.019

.−0.0233

.−0.0233

.−0.0233

.−0.0233

.−0.0233

.−0.019

0 0.019 0.015 0

0 0.0233 0.0466 0.0224

0 0.0233 0.0466 0.0573

0 0.0233 0.0466 0.069

0 0.0233 0.0466 0.0573

0 0.0233 0.0466 0.0224

0 0.019 0.015 0

Taking into account the computational speed and the ability to maintain edge details, this subsection performs the computation of convolutional templates based on a .7 × 7 pixel unit circle, and obtains the following templates for each order [17], as listed in Tables 7.1, 7.2, 7.3 and 7.4. The sub-pixel level edge parameter data of the reconstructed thermal image can be obtained by combining the edge parameter calculation formula and the convolution template to perform edge traversal of the image to be calculated.

248

7 Defect Edge Detection and Quantitative Calculation …

Table 7.4 . Z 20 template for the Zerkine moment operator . Z 20

0 0.0225 0.0394 0.0396 0.0394 0.0225 0

0.0225 0.0271 .−0.0128 .−0.0261 .−0.0128 0.0271 0.0225

0.0394

0.0396

0.0394

.−0.0128

.−0.0261

.−0.0128

.−0.0528

.−0.0661

.−0.0528

.−0.0661

.−0.0794

.−0.0661

.−0.0528

.−0.0661

.−0.0528

.−0.0128

.−0.0261

.−0.0128

0.03941

0.0396

0.03941

0.0225 0.0271 .−0.0128 .−0.0261 .−0.0128 0.0271 0.0225

0 0.0225 0.0394 0.0396 0.0394 0.0225 0

7.4 Quantitative Calculation of Defective Regions in Reconstructed Thermal Images In the step of Chap. 6, the calculation of the region center of mass location is performed in the process of marking the defective feature regions of the reconstructed thermal image. The number of defects in the image is visually marked and the location information of the defects is determined. In this subsection, based on the marking of the center-of-mass position, the morphology of each defect feature region is further quantified for the extracted defect feature regions with markers. In order to achieve quantitative evaluation of defect morphological information of large-size spacecraft damaged specimens, the characteristic parameters of each defect feature region will be calculated based on the statistical method of pixel values of defect segmentation images after binarization. The feature parameters will cover two main categories: geometric structure information describing the defects and morphological distribution information describing the defects. The calculation of such feature information will be described separately in the next two subsections.

7.4.1 Calculation of Geometric Feature Parameters Geometric feature information involves the area of the defective feature region. By counting the number of ‘1’ pixels in the connected domain to estimate the area of the defective feature region. The confidence interval of the size of the defective feature area can be obtained by converting the pixel ratio to the actual size area through an empirical formula, and the quantitative evaluation of the defect size can be realized.

7.4.1.1

Perimeter and Area

The perimeter and area are the most conventional and direct reflection of the dimensional information of the defect, while the numerical size of the area of the smallest

7.4 Quantitative Calculation of Defective Regions in Reconstructed Thermal Images

249

Fig. 7.14 Quantitative parameters: perimeter and area

bounding rectangle based on the defect location labeling can also reflect the dimensional information of the defect due to the irregularity of the defect feature area. Based on the numerical values of the geometric structural features, the size information of each defective feature area can be directly and quantitatively evaluated, which in turn reflects the differences in geometric size between each defective feature area. The perimeter of the defect can be obtained by using the boundary tracking method. First, select the starting point of the tracking, then traverse the pixel points of the defect area by a specific tracking method. Determine whether the point belongs to the boundary point, and if so, use that the point as the next starting point to continue the search until it coincides with the initial starting point. Finally, calculate the perimeter P by calculating the distance between each neighboring pixel pair around the area boundary, which is calculated by the following equation: .

Li =

/

n ∑

(x − x ' )2 + (y − y ' )2 ,

(7.40)

y))∈ Dai ((x, x ' , y ' ∈ Dai and where characterizes . Dai as the .ith defect ( ) . L i as the value of the perimeter feature parameter of the .ith defect. .(x, y), . x ' , y ' are a pair of adjacent pixel pairs, and .n is the total number of region boundary pixels. The pixel area of the image can be used to quantify the size of the defect area, and we use the sum of the number of pixels in the defect area as the area feature parameter, which can be defined as: S =

. i



n (Pi ) ,

(7.41)

where . Pi denotes the set of pixel points belonging to the .ith defective feature region as displayed in Fig. 7.14. The defective region . S ' refers to the region of the smallest bounding rectangle of the defect. After using the connected domain marker to mark the damage defect in

250

7 Defect Edge Detection and Quantitative Calculation …

the image, the smallest bounding rectangle of the defective region can be found. The length and width of the bounding rectangle can be denoted as .W and . H respectively. Then, one has ' . S = W × H. (7.42) Since the defective regions of the reconstructed thermal images are mostly irregular shapes, in order to make the quantitative evaluation of the feature parameters easier to identify and to make the evaluation parameters more generalizable, the numerical value of the diameter of the equivalent circle with the same area as the defective region should be counted ⟨∑ .

7.4.1.2

D=2

n (L i ) . Pi

(7.43)

Estimation of Actual Size of Micro-hole Type Defects

For tiny size hole defects (.φ ≤ 5 mm), since the number of pixel units used to describe their region is small, the equivalent circular diameter . D parameter is more effective and accurate than the region’s parameter . S to describe the size of hole defects, so the equivalent circular diameter . D of the defective region can be obtained based on quantitative feature parameter calculation, and the size of this value can achieve the estimation of the actual diameter size of tiny hole defects. The estimation of the actual size of micro-hole defects is achieved by taking the value of the equivalent circle diameter calculated based on pixel units, considering the thermal diffusion effect and the different effects of detection sensitivity due to the shooting distance, and introducing different conversion factors to establish a conversion formula, so as to find the actual size of micro-hole defects. Since the statistical values of the equivalent circle diameter . D are all larger than the actual size of the tiny hole-type defects, it is necessary to multiply the distance factor of the reduced transformation to obtain the converted actual size estimation results. After extensive experiments and analysis of the experimental results, we found that the primary factors influencing the transformation affecting the estimation results of the conversion formula are the distance factor . f of the IR camera from the photographed specimen and the correction factor .ϕ for correcting the heat diffusion effect of defects of different material types. First, the transformation factor that has the greatest influence on the conversion formula for the actual size estimation of tiny hole-type defects is the distance factor . f of the IR camera from the photographed specimen. Under global sensitivity, the shooting frame is larger, the detection range is wider, and the defective region obtains relatively less pixel resolution. Therefore we multiply the distance factor . f 1 under global sensitivity shooting, in order to ensure the accuracy of the actual size estimation of defects. Therefore, the shooting distance decreases, the defective region gets relatively higher pixel resolution and more pixel units are used to describe the

7.4 Quantitative Calculation of Defective Regions in Reconstructed Thermal Images

251

defective region. Consequently, it needs to be multiplied by a smaller distance factor f to ensure the accuracy of the actual size estimation of the defect. After the defect location labeling function, the actual location of the defect is obtained and local region detection is performed. We note that the distance factor under local sensitivity detection is . f 2 . Secondly, the thermal diffusion effects of defects of different material types (flat bottom hole defects, internal inclusions defects, internal delamination defects) are different. In the specific experimental process, the temperature changes (including the heating process and heat dissipation process) were recorded after heating the test specimens. Through the analysis of the experimental data, it is found that the heat dissipation process of flat bottom hole defects after heating is more rapid, while internal inclusions and internal delamination defects, because that the presence of internal heat dissipation process is slow. Hence, the heat diffusion effect in the reconstructed image is more obvious, the size of the defect feature area extracted will be larger. Therefore, we need to introduce different correction factors for different types of damage defects so that the estimated defect size obtained by the conversion meets the detection accuracy. Based on this, the following template for the conversion formula is given: ˆ = D × f + ϕ1 , .A (7.44)

.

where,. D is the equivalent circle diameter value obtained from the quantitative parameter calculation of the tiny defective region.. f ∈ (0, 1) is the distance influence factor. The value of . f decreases as the shooting distance decreases. The calculated equivalent circle diameter value is multiplied by the distance influence factor . f to obtain the reduced actual size estimation value. We represent distance that meet the global sensitivity detection as . f 1 . We also represent distance that meet the local sensitivity detection as . f 2 . .ϕ1 is the correction factor, for different defects of different materials, when considering the different thermal diffusion effects. Different correction factors are needed to compensate for the reduced value after applying the distance factor. As the flat bottom hole defect type heat dissipation process is faster, the statistically calculated equivalent circle diameter value is also smaller. Moreover, when this value is multiplied by the distance factor to reduce the actual size of the estimated value is small, the need to compensate for a larger value of the correction factor. Internal inclusions and internal delamination defect type heat dissipation process is slower, the calculated equivalent circle diameter value is also larger. Furthermore, the value multiplied by the distance factor to reduce the actual size of the estimated value is large, so the compensation correction factor is large. However, the value of the compensation correction factor is not as large as that of the correction factor for flat bottom holes.

252

7 Defect Edge Detection and Quantitative Calculation …

7.4.2 Calculation of Morphological Distribution Parameters Based on the statistical information of the pixel objects in the defect feature area, each of the above geometric structure feature parameters can be obtained. The geometric structure feature parameters can directly measure the geometric feature information of the defect with numerical magnitude, but they do not intuitively reflect the morphological distribution characteristic of the defect. Morphological distribution characteristics parameters such as dispersion are used to describe the tightness and looseness of the defect shape. Roundness is used to describe the extent to which the defect feature area is close to a theoretical circle. These morphological characteristic parameters are used to describe the morphological distribution characteristics of the damage region, in order to realize the automatic quantification function of the defect shape.

7.4.2.1

Squareness . Sq

As shown in Eq. (7.45), the squareness is expressed as the aspect ratio of the smallest bounding rectangle, which can be used to represent the shape characteristics of the defect W . . Sq = (7.45) H As shown in Fig. 7.15, the shape of the characterized defect is closer to being elongated when the squareness . Sq is larger. The smaller the squareness . Sq is, the shape of the defect is closer to being stubby. That is, the squareness of the defect area .1 is greater than the squareness of the defect area 3. When the bounding rectangle of the defect area is equal in length and width, that is, when .W = H , there is . Sq = 1, as shown in Fig. 7.15 defect area .2, which shows that the minimum bounding rectangle of the defect is a square box.

Fig. 7.15 Description of the shape of squareness and dispersion

7.4.2.2 Dispersion Di

As shown in Eq. (7.46), the dispersion Di measures the closeness or looseness of the defect shape. Given the perimeter L and the area S of the defect, the dispersion is defined as the ratio of the squared perimeter to the area:

\[
Di = \frac{L^2}{S}, \tag{7.46}
\]

where, for a fixed area S, a smaller perimeter L means a more concentrated pixel distribution and a tighter shape. In the limiting case, the dispersion attains its minimum when the defective region approaches a perfect circle: for an ideal circle of radius r, Di = (2πr)²/(πr²) = 4π ≈ 12.57. However, since perimeter and area are both measured on discrete pixels in an actual image, the computed values can deviate noticeably from this continuous-domain limit. Conversely, the larger the dispersion, the sparser and more complex the defect shape. As shown in Fig. 7.15, for the same damage area, the shape of defective region 4 is more complex and dispersed than that of defective region 5, resulting in a large difference in dispersion between the two regions.

7.4.2.3 Cohesiveness Dc

As shown in Eq. (7.47), the cohesiveness Dc describes the degree of looseness of the defective region and is expressed as the ratio of the defective region area S to the area S′ of its minimum bounding rectangle:

\[
Dc = \frac{S}{S'}. \tag{7.47}
\]

Because the defect area S is smaller than the minimum bounding rectangle area S′, the cohesiveness is always less than 1. For a fixed bounding rectangle area S′, a smaller value of Dc indicates a more scattered defect distribution; conversely, a larger value indicates a more compact one. Only when S = S′, i.e., when the defect area equals the area of its smallest bounding rectangle, does the defect completely fill the rectangle and Dc reach 1. As shown in Fig. 7.16, the bounding rectangles of defective regions 6 and 7 have equal areas, but the rectangle of defective region 7 is obviously filled more fully, i.e., region 7 is more compact within its bounding rectangle, so Dc6 < Dc7 < 1.


Fig. 7.16 Description of the cohesiveness of the defective region

7.4.2.4 Roundness Ro

As in Eq. (7.48), the feature parameter roundness Ro describes how close the defective region is to a theoretical circle. For a perfect circle the roundness value is 1; however, since perimeter and area are both measured on discrete pixels in an actual image, this limiting value is difficult to reach exactly. The roundness is calculated as

\[
Ro = \frac{4\pi S}{L^2}. \tag{7.48}
\]
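The four descriptors above can be computed directly from a segmented region. Below is a minimal sketch, assuming OpenCV is available, that the region of interest is the largest contour in a binary mask, and that an axis-aligned bounding rectangle stands in for the minimum bounding rectangle:

```python
import math
import cv2

def morphology_params(binary_mask):
    """Squareness, dispersion, cohesiveness, and roundness of the largest
    defective region in a binary mask (uint8, defect pixels = 255)."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    cnt = max(contours, key=cv2.contourArea)   # largest defective region
    S = cv2.contourArea(cnt)                   # region area S
    L = cv2.arcLength(cnt, True)               # region perimeter L
    _, _, W, H = cv2.boundingRect(cnt)         # bounding rectangle width/height
    Sq = W / H                                 # squareness,   Eq. (7.45)
    Di = L ** 2 / S                            # dispersion,   Eq. (7.46)
    Dc = S / (W * H)                           # cohesiveness, Eq. (7.47)
    Ro = 4.0 * math.pi * S / L ** 2            # roundness,    Eq. (7.48)
    return Sq, Di, Dc, Ro
```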

7.5 Experiment and Analysis

The previous sections of this chapter introduced edge detection and quantitative calculation of defect regions in reconstructed thermal images, respectively. The edge detection part described pixel-level and sub-pixel-level edge recognition algorithms, and the quantitative calculation part covered the geometric feature parameters and morphological distribution parameters of the defects. Accordingly, this experimental section designs pixel-level and sub-pixel-level edge detection experiments on the HVI-damaged specimen Hyper-1, conducts area quantification experiments on the artificial carbon fiber composite specimens #1 and #2, and verifies the area quantification on specimen Hyper-1.


Fig. 7.17 Image segmentation result of specimen Hyper-1

Table 7.5 Threshold setting for Canny operator experiments

Low threshold         Threshold ratio   High threshold
ξ_canny,low = 22      ratio = 2.0       ξ_canny,high = 44

7.5.1 Edge Detection Experiments for Specimen Hyper-1

To verify the theory of pixel-level detection of defective regions in reconstructed thermal images from Sect. 7.2, edge detection experiments are designed for partially damaged regions of specimen Hyper-1, with the input image processed by the various pixel-level edge detection operators. As shown in Fig. 7.17, the segmented reconstructed thermal damage region image is used as input. First, the damage region image is converted to a grayscale image. Then it is Gaussian filtered to obtain a Gaussian-blurred image, which filters out the background noise to some extent. After binarization of the input image, edge detection is performed on the damaged region image using the operators of Sect. 7.2. The edge detection results for the reconstructed thermal damage region are shown in Fig. 7.18, and the high and low threshold settings of the Canny edge detection operator are listed in Table 7.5.

Fig. 7.18 Comparison of detection results of various operators

Analyzing the computational results of the various edge detection operators on the reconstructed thermal damage region in Fig. 7.18 shows that the edge details obtained by the Roberts operator are not very smooth: some small damage contours go undetected, leaving contour curves unclosed. In contrast, the edges in the detail images of the other three operators are smoother. Among them, the Prewitt and Sobel operators both belong to the weighted-average operators and, building on the preceding Gaussian filtering, can extract more detailed edges of the reconstructed thermal damage region. However, some defect localization accuracy is lost, which is reflected in the edge details: a few small damage edges are still missed, leaving incomplete edge contours that can adversely affect the subsequent localization of the damage. From a comprehensive viewpoint, the edge detection result of the Canny operator is better than that of the gradient operators, and part of the detailed edges and weak edges in the reconstructed thermal damage region images can be detected.
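A minimal sketch of this pipeline in Python, assuming OpenCV, an Otsu binarization step, and a 5 × 5 Gaussian kernel (these details are assumptions; only the Canny thresholds come from Table 7.5):

```python
import cv2

# Placeholder file name for the segmented reconstructed damage-region image.
img = cv2.imread("reconstructed_damage_region.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)             # grayscale conversion
blur = cv2.GaussianBlur(gray, (5, 5), 0)                 # suppress background noise
_, binary = cv2.threshold(blur, 0, 255,
                          cv2.THRESH_BINARY | cv2.THRESH_OTSU)  # binarization

# Canny with low threshold 22 and high threshold 22 * 2.0 = 44 (Table 7.5).
edges_canny = cv2.Canny(binary, 22, 44)

# One gradient operator for comparison (Sobel shown; Roberts and Prewitt
# would use custom kernels via cv2.filter2D).
gx = cv2.Sobel(binary, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(binary, cv2.CV_64F, 0, 1, ksize=3)
edges_sobel = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
```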

7.5.2 Sub-pixel Edge Detection Experiment for Specimen Hyper-1

To verify the theory of sub-pixel-level detection of defective regions in reconstructed thermal images from Sect. 7.3, sub-pixel edge detection experiments were designed for partially defective regions of specimen Hyper-1, as shown in Fig. 7.19. Consider the unit-circle step model for Zernike sub-pixel detection shown in Fig. 7.20. The selection of the sub-pixel edge position parameter l and the sub-pixel edge intensity parameter k affects the determination of edge pixels and, consequently, the calculation of the sub-pixel positions of the edge contour. Since the image is converted to a binary image after input, the edge intensity parameter is the same for every edge pixel; therefore, the parametric analysis is performed only for the sub-pixel edge position parameter.


Fig. 7.19 Sub-pixel detection of the input image and Zernike operator to determine the edge result

Fig. 7.20 Pixel unit circle inspection plane model

Table 7.6 Number of edge pixel points under different thresholds

Edge position parameter threshold l                  0.1    0.15   0.175  0.2    0.3    0.4
Edge points detected in the defective regions        1476   2557   2846   3378   4444   4444
Edge points detected in the thermal diffusion region 4469   7491   8382   10012  13083  13083

As shown in Table 7.6, different detection results of the Zernike operator are obtained by setting different thresholds for the sub-pixel edge position parameter; the corresponding detection images are shown in Fig. 7.21. When the threshold value is below 0.15, the activation area per pixel is too narrow to judge edge pixels correctly, which leads to interruptions in the detected edge pixels. When the threshold value exceeds 0.20, more pixels are mistakenly matched as edge pixels, resulting in overlapping edge contours. Therefore, the sub-pixel edge position parameter threshold is finally chosen as 0.175. Further, the sub-pixel coordinates of the edge pixels are calculated as shown in Fig. 7.22, with the specific sub-pixel coordinate detection results listed in Table 7.7.

Fig. 7.21 Detection results of defective regions at different thresholds

Table 7.7 Sub-pixel coordinates of the defective regions in reconstructed thermal image

No.            ···   4605     4606     4606     4606     4606     4606     4606     ···
X-coordinate   ···   468.935  469.834  476.891  478.049  487.000  500.007  503.993  ···
Y-coordinate   ···   919.037  919.078  918.911  919.045  918.989  918.988  918.993  ···
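To make the unit-circle step model concrete, the following is a minimal sketch of Zernike-moment sub-pixel edge localization for a binary (0/1) image. The 7 × 7 window and the brute-force scan over all pixels are assumptions for illustration; only the edge position threshold follows the analysis above:

```python
import numpy as np

def zernike_masks(n=7):
    """Sample the conjugated Zernike polynomials V11* and V20 on an n x n
    window whose pixel centres are mapped onto the unit disk."""
    c = (np.arange(n) - (n - 1) / 2.0) / (n / 2.0)
    x, y = np.meshgrid(c, c)
    rho2 = x ** 2 + y ** 2
    inside = rho2 <= 1.0
    m11 = np.where(inside, x - 1j * y, 0)          # V11* = x - iy on the disk
    m20 = np.where(inside, 2.0 * rho2 - 1.0, 0.0)  # V20  = 2*rho^2 - 1 (real)
    return m11, m20

def zernike_subpixel_edges(binary01, l_thresh=0.175, n=7):
    """Sub-pixel edge points via the Zernike step-edge model: the edge
    normal angle phi comes from Z11, and l = Z20 / |Z11| is the signed
    distance of the model edge from the window centre. A pixel counts as
    an edge pixel when |l| <= l_thresh (0.175 per Table 7.6)."""
    m11, m20 = zernike_masks(n)
    r = n // 2
    f = binary01.astype(float)
    h, w = f.shape
    points = []
    for yy in range(r, h - r):
        for xx in range(r, w - r):
            win = f[yy - r:yy + r + 1, xx - r:xx + r + 1]
            z11 = (win * m11).sum()
            if abs(z11) < 1e-9:            # flat window: no edge evidence
                continue
            z20 = (win * m20).sum()
            phi = np.angle(z11)            # edge normal direction
            l = z20 / abs(z11)             # Z11 rotated by -phi equals |Z11|
            if abs(l) <= l_thresh:
                scale = n / 2.0            # unit-disk length -> pixels
                points.append((xx + scale * l * np.cos(phi),
                               yy + scale * l * np.sin(phi)))
    return np.array(points)
```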

Fig. 7.22 Results of sub-pixel coordinate detection in defective regions of reconstructed thermal image

7.5.3 Quantitative Calculation of Defective Regions for Specimen #1

Based on the information about the distribution of defective regions in the defect feature area of carbon fiber composite specimen #1, high-resolution inspection is performed on the specific damage defect features of interest. At this stage, all eight defective regions were selected for high-resolution detection, and markers were obtained for these defects. The reconstructed thermal images of the damage defects were obtained by reducing the distance between the IR camera and the specimen. The images were then processed by color-based feature segmentation and binarization segmentation, as shown in Fig. 7.23. From the segmentation extraction results, the exact geometric structure of each damage defect was obtained, with the characteristic parameters shown in Table 7.8. Further, the morphological distribution feature parameters are calculated from the segmentation extraction parameters of each region. Taking defective regions 1 and 2 as examples, the distribution parameters are obtained by computing the quantization parameters; the squareness values of defective regions 1 and 2 are given below.

Fig. 7.23 Second detected results of defective regions for specimen #1
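A minimal sketch of this detection stage, assuming the color-based segmentation reduces to an HSV range threshold (the range below is an illustrative placeholder, not the book's setting) and using OpenCV's connected-component statistics to gather the Table 7.8 quantities:

```python
import cv2
import numpy as np

def geometric_stats(reconstructed_bgr, hsv_lo=(0, 80, 80), hsv_hi=(30, 255, 255)):
    """Color-based segmentation plus binarization, then per-region geometric
    statistics: area S, bounding-rectangle area S', width W, height H, and
    equivalent circle diameter D = 2*sqrt(S/pi)."""
    hsv = cv2.cvtColor(reconstructed_bgr, cv2.COLOR_BGR2HSV)
    binary = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    rows = []
    for i in range(1, n):                      # label 0 is the background
        S = float(stats[i, cv2.CC_STAT_AREA])
        W = float(stats[i, cv2.CC_STAT_WIDTH])
        H = float(stats[i, cv2.CC_STAT_HEIGHT])
        D = 2.0 * np.sqrt(S / np.pi)           # e.g. S = 32 -> D = 6.38 (Table 7.8)
        rows.append({"marker": i, "S": S, "S_rect": W * H, "W": W, "H": H, "D": D})
    return rows
```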


Table 7.8 Geometric structure characteristics of defective regions for specimen #1

Region marker  Region area S  Circumference L  Rectangle area S′  Width W  Height H  Equivalent circle diameter D
1                 32.0          16.656             36.0             6.0      6.0        6.38
2                151.0          41.004            195.0            13.0     15.0       13.87
3                316.0          60.910            380.0            20.0     19.0       20.06
4                493.0          75.788            576.0            24.0     24.0       25.05
5               5928.0         453.360          10100.0           101.0    100.0       86.88
6              10141.0         472.468          19824.0           168.0    118.0      113.63
7              17023.0         715.236          34352.0           226.0    152.0      147.22
8                 38.0          18.616             42.0             6.0      7.0        6.96

\[
Sq_{A1-1} = \frac{W_{A1-1}}{H_{A1-1}} = \frac{6}{6} = 1.00, \qquad
Sq_{A1-2} = \frac{W_{A1-2}}{H_{A1-2}} = \frac{13}{15} = 0.87. \tag{7.49}
\]

The dispersions of defective regions 1 and 2 are

\[
Di_{A1-1} = \frac{L_{A1-1}^2}{S_{A1-1}} = \frac{16.656^2}{32} = 8.669, \qquad
Di_{A1-2} = \frac{L_{A1-2}^2}{S_{A1-2}} = \frac{41.004^2}{151} = 11.134. \tag{7.50}
\]

The cohesiveness values of defective regions 1 and 2 are

\[
Dc_{A1-1} = \frac{S_{A1-1}}{S'_{A1-1}} = \frac{32}{36} = 0.889, \qquad
Dc_{A1-2} = \frac{S_{A1-2}}{S'_{A1-2}} = \frac{151}{195} = 0.774. \tag{7.51}
\]

The roundness values of defective regions 1 and 2 are

\[
Ro_{A1-1} = \frac{4\pi S_{A1-1}}{L_{A1-1}^2} = \frac{4\pi \times 32}{16.656^2} = 1.450, \qquad
Ro_{A1-2} = \frac{4\pi S_{A1-2}}{L_{A1-2}^2} = \frac{4\pi \times 151}{41.004^2} = 1.129. \tag{7.52}
\]
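These worked values follow directly from the Table 7.8 entries; a short arithmetic check using the definitions of Eqs. (7.45)-(7.48):

```python
import math

def shape_params(S, L, S_rect, W, H):
    """Morphological distribution parameters of Sect. 7.4.2 for one region."""
    return (W / H,                        # squareness   Sq, Eq. (7.45)
            L * L / S,                    # dispersion   Di, Eq. (7.46)
            S / S_rect,                   # cohesiveness Dc, Eq. (7.47)
            4.0 * math.pi * S / (L * L))  # roundness    Ro, Eq. (7.48)

# Defective region 1 of specimen #1 (Table 7.8): S=32.0, L=16.656, S'=36.0, W=H=6.0
print(shape_params(32.0, 16.656, 36.0, 6.0, 6.0))
# -> (1.00, 8.669, 0.889, 1.450), matching Eqs. (7.49)-(7.52)
```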

For the other defective regions of specimen #1, the defect distribution parameters (i.e., squareness, dispersion, cohesiveness, and roundness) are calculated and given in Table 7.9.

Table 7.9 Statistics of distribution characteristics of defective regions for specimen #1

Defective region          1      2      3      4      5      6      7      8
Squareness Sq = W/H       1.00   0.87   1.05   1.00   1.01   1.42   1.49   0.86
Dispersion Di = L²/S      8.67   11.13  11.74  11.65  34.67  22.01  30.05  9.12
Cohesiveness Dc = S/S′    0.89   0.77   0.83   0.86   0.59   0.51   0.50   0.90
Roundness Ro = 4πS/L²     1.45   1.13   1.07   1.08   0.36   0.57   0.42   1.38

It can be seen that the squareness measures how close the width of a region is to its height, i.e., how close the defective region is to a square. In the accurate results of the second detection of the defective regions of specimen #1, defective regions 1, 2, 3, and 4 are dotted damage, so their squareness values are close to 1.0. Defective region 5 is roughly symmetrical in the width and height directions, so its squareness is also close to 1.0. When measuring dispersion, defective regions 1, 2, 3, 4, and 8 have more concentrated damage areas and therefore yield lower dispersion values than defective regions 5, 6, and 7. The cohesiveness is the ratio of the actual defective region area to the area of its bounding rectangle; the closer the value is to 1.0, the more fully the selected damage area fills the bounding rectangle. Among the eight defective regions, the shape of defective region 8 is closest to a square, so its damage area fills the bounding rectangle most fully and its cohesiveness value is closest to 1.0, while defective regions 5, 6, and 7 have more empty space inside their bounding rectangles and hence lower cohesiveness values. The roundness measures the similarity of a region to a standard circle; the closer its value is to 1.0, the closer the damage area is to a standard circle. Based on the quantitative parameter results, defective regions 3 and 4 are closest to the standard circle.

7.5.4 Quantitative Calculation of Defective Regions of Specimen #2

Based on the information about the distribution of defective regions of specimen #2, high-resolution inspection is performed on the specific damage defect features of interest. At this stage, all damage defect features are inspected at high resolution. The distance between the IR camera and the specimen is reduced for detection, and a reconstructed thermal image of each damage defect is obtained. The images are then processed by color-based feature segmentation and binarization segmentation, as shown in Fig. 7.24. From the segmentation extraction results, the exact geometric structure feature parameters of the damage defects were obtained, as shown in Table 7.10.


Fig. 7.24 Results of defective regions for specimen #2

Table 7.10 Geometric structure characteristics of defective regions of specimen #2

Region marker  Region area S  Circumference L  Rectangle area S′  Width W  Height H  Equivalent circle diameter D
1               2422.0         226.91            4264.0            52.0      82.0      55.53
2               2466.0         224.93            4212.0            52.0      81.0      56.03
3               6248.0         568.29           12768.0           112.0     114.0      89.19
4                134.0          38.27             156.0            12.0      13.0      13.06
5               7236.0         617.72           23115.0           201.0     115.0      95.99
6               8128.0         423.25           13221.0           113.0     117.0     101.73
7               4534.0         391.83            7395.0            85.0      87.0      75.98
8               2389.0         242.55            2553.0            23.0     111.0      55.15
9               2456.0         248.34            2688.0            24.0     112.0      55.92
10              2757.0         250.64            2990.0            26.0     115.0      59.25
11              5895.0         321.35           10165.0            95.0     107.0      86.64
12              8006.0         420.51           19549.0           173.0     113.0     100.96
13                28.0          15.55              36.0             6.0       6.0       5.97
14              2465.0         219.83            4212.0            78.0      54.0      56.02
15              2541.0         222.73            4345.0            79.0      55.0      56.88


Taking defective regions 1 and 2 as examples, the defect distribution parameters are computed from the quantization parameters. The squareness values of defective regions 1 and 2 are

\[
Sq_{Q1-1} = \frac{W_{Q1-1}}{H_{Q1-1}} = \frac{52}{82} = 0.634, \qquad
Sq_{Q1-2} = \frac{W_{Q1-2}}{H_{Q1-2}} = \frac{52}{81} = 0.641. \tag{7.53}
\]

The dispersions of defective regions 1 and 2 are

\[
Di_{Q1-1} = \frac{L_{Q1-1}^2}{S_{Q1-1}} = \frac{226.91^2}{2422.0} = 21.259, \qquad
Di_{Q1-2} = \frac{L_{Q1-2}^2}{S_{Q1-2}} = \frac{224.93^2}{2466.0} = 20.516. \tag{7.54}
\]

The cohesiveness values of defective regions 1 and 2 are

\[
Dc_{Q1-1} = \frac{S_{Q1-1}}{S'_{Q1-1}} = \frac{2422}{4264} = 0.568, \qquad
Dc_{Q1-2} = \frac{S_{Q1-2}}{S'_{Q1-2}} = \frac{2466}{4212} = 0.585. \tag{7.55}
\]

The roundness values of defective regions 1 and 2 are

\[
Ro_{Q1-1} = \frac{4\pi S_{Q1-1}}{L_{Q1-1}^2} = \frac{4\pi \times 2422}{226.91^2} = 0.591, \qquad
Ro_{Q1-2} = \frac{4\pi S_{Q1-2}}{L_{Q1-2}^2} = \frac{4\pi \times 2466}{224.93^2} = 0.613. \tag{7.56}
\]

For the other defective regions of specimen #2, the defect distribution parameters (i.e., squareness, dispersion, cohesiveness, and roundness) are computed and provided in Table 7.11.

Table 7.11 Statistics of distribution characteristics of defective regions for specimen #2

Defective region          1      2      3      4      5      6      7      8
Squareness Sq = W/H       0.63   0.64   0.98   0.92   1.75   0.97   0.98   0.21
Dispersion Di = L²/S      21.26  20.52  51.69  10.93  52.73  22.04  33.86  24.62
Cohesiveness Dc = S/S′    0.57   0.59   0.49   0.86   0.31   0.61   0.61   0.94
Roundness Ro = 4πS/L²     0.59   0.61   0.24   1.15   0.24   0.57   0.37   0.51

Defective region          9      10     11     12     13     14     15
Squareness Sq = W/H       0.21   0.23   0.89   1.53   1.00   1.44   1.44
Dispersion Di = L²/S      25.11  22.79  17.52  22.09  8.63   19.60  19.52
Cohesiveness Dc = S/S′    0.91   0.92   0.58   0.41   0.78   0.59   0.58
Roundness Ro = 4πS/L²     0.50   0.55   0.72   0.57   1.46   0.64   0.64

According to Table 7.11, the squareness measures how close the width and height of a defective region are to each other, i.e., how close the region is to a square. In the accurate results of the second detection of the defective regions of specimen #2, defective regions 3, 6, and 7 are square damage and defective regions 4 and 13 are dot-type damage, so their squareness values are close to 1.0. Defective region 11 is neither square nor circular damage, but it is roughly symmetrical in the width and height directions, so its squareness is also close to 1.0. When measuring dispersion, defective regions 4, 11, 13, 14, and 15 are more concentrated and yield lower dispersion values than defective regions 3 and 5. The cohesiveness is the ratio of the actual damage area to the area of the bounding rectangle; the closer its value is to 1.0, the more fully the selected damage area fills the bounding rectangle. Among the fifteen defective regions, defective regions 8, 9, and 10 are narrow strip-shaped regions that almost completely fill their bounding rectangles, so their cohesiveness values are closest to 1.0, whereas defective regions 3, 5, and 12 have more empty space inside their bounding rectangles and hence lower cohesiveness values. Roundness evaluates the degree of similarity between a region and a standard circle; the closer its value is to 1.0, the closer the damage area is to a standard circle. Based on the quantitative parameter results, defective region 4 is closest to the standard circle.

7.5.5 Defective Region Positioning Experiments on Specimen Hyper-1

Based on the results of the defect localization experiments in Sect. 6.4.1.3, the distribution of damage defects and thermal diffusion sputtering locations in the defect characteristic region of the HVI-damaged specimen Hyper-1 is available. In the quantification stage, a second high-resolution infrared detection is performed for the specific damage defect features of interest. Here, considering the region of interest, the 26th defective region (i.e., the 26th thermal diffusion sputtering region) was selected for high-resolution detection. The distance between the IR camera and the specimen was reduced for detection, and a reconstructed thermal image of this damage defect was obtained. The image was then processed by color-based feature segmentation and binarization segmentation, as shown in Fig. 7.25. From the segmentation extraction results, the exact geometric structure of this damage defect was obtained, with the characteristic parameters shown in Table 7.12.


Fig. 7.25 Second detected results of defective region for specimen Hyper-1

Table 7.12 Geometric structure characteristic parameter statistics of the 26th defective region

Region marker  Region area S  Circumference L  Rectangle area S′  Width W  Height H  Equivalent circle diameter D
26             197            54.154           361                19       19        15.84

Further, the morphological distribution feature parameters are calculated from the segmentation extraction results. The squareness is

\[
Sq_{26} = \frac{W_{26}}{H_{26}} = \frac{19}{19} = 1.00. \tag{7.57}
\]

The dispersion is

\[
Di_{26} = \frac{L_{26}^2}{S_{26}} = \frac{54.154^2}{197.0} = 14.8866. \tag{7.58}
\]

The cohesiveness is

\[
Dc_{26} = \frac{S_{26}}{S'_{26}} = \frac{197.0}{361.0} = 0.5457. \tag{7.59}
\]

The roundness is

\[
Ro_{26} = \frac{4\pi S_{26}}{L_{26}^2} = \frac{4\pi \times 197.0}{54.154^2} = 0.8441. \tag{7.60}
\]


7.6 Summary

Building on the defect localization and labeling, this chapter described the edge detection and quantitative calculation of defective regions in reconstructed thermal images. In the edge detection part, pixel-level and sub-pixel-level edge detection of the reconstructed thermal image were described, respectively. The calculation of the geometric feature parameters and morphological distribution parameters of the reconstructed thermal image was then introduced, completing the quantitative calculation of the defective regions. Finally, experiments were designed for the quantitative analysis of the stitched result images of specimens #1 and #2 and of the HVI-damaged specimen Hyper-1 using infrared NDT. The experimental results demonstrate the effectiveness of the defect positioning and quantitative identification algorithms for large-size spacecraft specimens.
